May 8 00:37:03.759502 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:54:21 -00 2025
May 8 00:37:03.759523 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 00:37:03.759530 kernel: Disabled fast string operations
May 8 00:37:03.759535 kernel: BIOS-provided physical RAM map:
May 8 00:37:03.759539 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
May 8 00:37:03.759543 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
May 8 00:37:03.759549 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
May 8 00:37:03.759554 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
May 8 00:37:03.759558 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
May 8 00:37:03.759563 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
May 8 00:37:03.759567 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
May 8 00:37:03.759571 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
May 8 00:37:03.759576 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
May 8 00:37:03.759580 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
May 8 00:37:03.759587 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
May 8 00:37:03.759592 kernel: NX (Execute Disable) protection: active
May 8 00:37:03.759606 kernel: APIC: Static calls initialized
May 8 00:37:03.759616 kernel: SMBIOS 2.7 present.
May 8 00:37:03.759621 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
May 8 00:37:03.759627 kernel: vmware: hypercall mode: 0x00
May 8 00:37:03.759632 kernel: Hypervisor detected: VMware
May 8 00:37:03.759636 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
May 8 00:37:03.759644 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
May 8 00:37:03.759649 kernel: vmware: using clock offset of 5061854457 ns
May 8 00:37:03.759654 kernel: tsc: Detected 3408.000 MHz processor
May 8 00:37:03.759659 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 8 00:37:03.759665 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 8 00:37:03.759670 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
May 8 00:37:03.759675 kernel: total RAM covered: 3072M
May 8 00:37:03.759680 kernel: Found optimal setting for mtrr clean up
May 8 00:37:03.759686 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
May 8 00:37:03.759692 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
May 8 00:37:03.759698 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 8 00:37:03.759703 kernel: Using GB pages for direct mapping
May 8 00:37:03.759708 kernel: ACPI: Early table checksum verification disabled
May 8 00:37:03.759712 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
May 8 00:37:03.759718 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
May 8 00:37:03.759723 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
May 8 00:37:03.759728 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
May 8 00:37:03.759733 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
May 8 00:37:03.759741 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
May 8 00:37:03.759749 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
May 8 00:37:03.759755 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
May 8 00:37:03.759760 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
May 8 00:37:03.759765 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
May 8 00:37:03.759772 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
May 8 00:37:03.759777 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
May 8 00:37:03.759783 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
May 8 00:37:03.759788 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
May 8 00:37:03.759799 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
May 8 00:37:03.759805 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
May 8 00:37:03.759862 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
May 8 00:37:03.759867 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
May 8 00:37:03.759873 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
May 8 00:37:03.759880 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
May 8 00:37:03.759891 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
May 8 00:37:03.759899 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
May 8 00:37:03.759904 kernel: system APIC only can use physical flat
May 8 00:37:03.759910 kernel: APIC: Switched APIC routing to: physical flat
May 8 00:37:03.759915 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 8 00:37:03.759920 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
May 8 00:37:03.759925 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
May 8 00:37:03.759931 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
May 8 00:37:03.759936 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
May 8 00:37:03.759945 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
May 8 00:37:03.759950 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
May 8 00:37:03.759956 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
May 8 00:37:03.759961 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
May 8 00:37:03.759966 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
May 8 00:37:03.759971 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
May 8 00:37:03.759977 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
May 8 00:37:03.759986 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
May 8 00:37:03.759995 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
May 8 00:37:03.760003 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
May 8 00:37:03.760014 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
May 8 00:37:03.760020 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
May 8 00:37:03.760025 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
May 8 00:37:03.760030 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
May 8 00:37:03.760036 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
May 8 00:37:03.760041 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
May 8 00:37:03.760046 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
May 8 00:37:03.760051 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
May 8 00:37:03.760056 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
May 8 00:37:03.760061 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
May 8 00:37:03.760066 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
May 8 00:37:03.760073 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
May 8 00:37:03.760078 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
May 8 00:37:03.760083 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
May 8 00:37:03.760089 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
May 8 00:37:03.760094 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
May 8 00:37:03.760099 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
May 8 00:37:03.760104 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
May 8 00:37:03.760109 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
May 8 00:37:03.760115 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
May 8 00:37:03.760120 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
May 8 00:37:03.760126 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
May 8 00:37:03.760131 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
May 8 00:37:03.760137 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
May 8 00:37:03.760142 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
May 8 00:37:03.760147 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
May 8 00:37:03.760152 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
May 8 00:37:03.760157 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
May 8 00:37:03.760163 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
May 8 00:37:03.760168 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
May 8 00:37:03.760173 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
May 8 00:37:03.760179 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
May 8 00:37:03.760184 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
May 8 00:37:03.760189 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
May 8 00:37:03.760195 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
May 8 00:37:03.760200 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
May 8 00:37:03.760205 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
May 8 00:37:03.760210 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
May 8 00:37:03.760215 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
May 8 00:37:03.760220 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
May 8 00:37:03.760225 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
May 8 00:37:03.760232 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
May 8 00:37:03.760237 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
May 8 00:37:03.760242 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
May 8 00:37:03.760251 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
May 8 00:37:03.760258 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
May 8 00:37:03.760263 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
May 8 00:37:03.760269 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
May 8 00:37:03.760274 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
May 8 00:37:03.760280 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
May 8 00:37:03.760286 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
May 8 00:37:03.760292 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
May 8 00:37:03.760297 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
May 8 00:37:03.760303 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
May 8 00:37:03.760308 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
May 8 00:37:03.760314 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
May 8 00:37:03.760320 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
May 8 00:37:03.760325 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
May 8 00:37:03.760331 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
May 8 00:37:03.760336 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
May 8 00:37:03.760343 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
May 8 00:37:03.760348 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
May 8 00:37:03.760354 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
May 8 00:37:03.760359 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
May 8 00:37:03.760365 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
May 8 00:37:03.760371 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
May 8 00:37:03.760376 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
May 8 00:37:03.760382 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
May 8 00:37:03.760387 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
May 8 00:37:03.760393 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
May 8 00:37:03.760399 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
May 8 00:37:03.760405 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
May 8 00:37:03.760410 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
May 8 00:37:03.760416 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
May 8 00:37:03.760421 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
May 8 00:37:03.760427 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
May 8 00:37:03.760432 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
May 8 00:37:03.760438 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
May 8 00:37:03.760443 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
May 8 00:37:03.760449 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
May 8 00:37:03.760456 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
May 8 00:37:03.760462 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
May 8 00:37:03.760467 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
May 8 00:37:03.760473 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
May 8 00:37:03.760478 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
May 8 00:37:03.760484 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
May 8 00:37:03.760489 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
May 8 00:37:03.760495 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
May 8 00:37:03.760500 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
May 8 00:37:03.760506 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
May 8 00:37:03.760513 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
May 8 00:37:03.760518 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
May 8 00:37:03.760524 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
May 8 00:37:03.760529 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
May 8 00:37:03.760535 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
May 8 00:37:03.760540 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
May 8 00:37:03.760546 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
May 8 00:37:03.760551 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
May 8 00:37:03.760557 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
May 8 00:37:03.760562 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
May 8 00:37:03.760568 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
May 8 00:37:03.760574 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
May 8 00:37:03.760580 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
May 8 00:37:03.760586 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
May 8 00:37:03.760591 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
May 8 00:37:03.763895 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
May 8 00:37:03.763918 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
May 8 00:37:03.763924 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
May 8 00:37:03.763930 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
May 8 00:37:03.763936 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
May 8 00:37:03.763944 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
May 8 00:37:03.763959 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
May 8 00:37:03.763970 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
May 8 00:37:03.763976 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 8 00:37:03.763983 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 8 00:37:03.763989 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
May 8 00:37:03.763995 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
May 8 00:37:03.764001 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
May 8 00:37:03.764009 kernel: Zone ranges:
May 8 00:37:03.764016 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 8 00:37:03.764024 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
May 8 00:37:03.764030 kernel: Normal empty
May 8 00:37:03.764036 kernel: Movable zone start for each node
May 8 00:37:03.764041 kernel: Early memory node ranges
May 8 00:37:03.764048 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
May 8 00:37:03.764058 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
May 8 00:37:03.764068 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
May 8 00:37:03.764074 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
May 8 00:37:03.764079 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 00:37:03.764085 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
May 8 00:37:03.764093 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
May 8 00:37:03.764098 kernel: ACPI: PM-Timer IO Port: 0x1008
May 8 00:37:03.764104 kernel: system APIC only can use physical flat
May 8 00:37:03.764110 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
May 8 00:37:03.764116 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
May 8 00:37:03.764122 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
May 8 00:37:03.764127 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
May 8 00:37:03.764133 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
May 8 00:37:03.764138 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
May 8 00:37:03.764146 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
May 8 00:37:03.764151 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
May 8 00:37:03.764157 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
May 8 00:37:03.764163 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
May 8 00:37:03.764168 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
May 8 00:37:03.764174 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
May 8 00:37:03.764179 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
May 8 00:37:03.764185 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
May 8 00:37:03.764191 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
May 8 00:37:03.764196 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
May 8 00:37:03.764203 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
May 8 00:37:03.764209 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
May 8 00:37:03.764215 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
May 8 00:37:03.764221 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
May 8 00:37:03.764226 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
May 8 00:37:03.764232 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
May 8 00:37:03.764241 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
May 8 00:37:03.764247 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
May 8 00:37:03.764252 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
May 8 00:37:03.764259 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
May 8 00:37:03.764265 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
May 8 00:37:03.764271 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
May 8 00:37:03.764276 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
May 8 00:37:03.764284 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
May 8 00:37:03.764294 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
May 8 00:37:03.764303 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
May 8 00:37:03.764311 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
May 8 00:37:03.764317 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
May 8 00:37:03.764324 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
May 8 00:37:03.764330 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
May 8 00:37:03.764336 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
May 8 00:37:03.764342 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
May 8 00:37:03.764350 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
May 8 00:37:03.764356 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
May 8 00:37:03.764361 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
May 8 00:37:03.764367 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
May 8 00:37:03.764373 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
May 8 00:37:03.764378 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
May 8 00:37:03.764386 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
May 8 00:37:03.764394 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
May 8 00:37:03.764405 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
May 8 00:37:03.764412 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
May 8 00:37:03.764417 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
May 8 00:37:03.764423 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
May 8 00:37:03.764428 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
May 8 00:37:03.764434 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
May 8 00:37:03.764440 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
May 8 00:37:03.764448 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
May 8 00:37:03.764454 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
May 8 00:37:03.764459 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
May 8 00:37:03.764465 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
May 8 00:37:03.764470 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
May 8 00:37:03.764476 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
May 8 00:37:03.764482 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
May 8 00:37:03.764487 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
May 8 00:37:03.764493 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
May 8 00:37:03.764499 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
May 8 00:37:03.764506 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
May 8 00:37:03.764511 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
May 8 00:37:03.764517 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
May 8 00:37:03.764523 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
May 8 00:37:03.764528 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
May 8 00:37:03.764534 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
May 8 00:37:03.764540 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
May 8 00:37:03.764545 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
May 8 00:37:03.764551 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
May 8 00:37:03.764559 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
May 8 00:37:03.764565 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
May 8 00:37:03.764572 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
May 8 00:37:03.764581 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
May 8 00:37:03.764591 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
May 8 00:37:03.764610 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
May 8 00:37:03.764617 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
May 8 00:37:03.764631 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
May 8 00:37:03.764640 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
May 8 00:37:03.764646 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
May 8 00:37:03.764654 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
May 8 00:37:03.764662 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
May 8 00:37:03.764668 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
May 8 00:37:03.764673 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
May 8 00:37:03.764680 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
May 8 00:37:03.764685 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
May 8 00:37:03.764691 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
May 8 00:37:03.764697 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
May 8 00:37:03.764702 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
May 8 00:37:03.764709 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
May 8 00:37:03.764715 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
May 8 00:37:03.764721 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
May 8 00:37:03.764726 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
May 8 00:37:03.764732 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
May 8 00:37:03.764738 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
May 8 00:37:03.764744 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
May 8 00:37:03.764750 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
May 8 00:37:03.764755 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
May 8 00:37:03.764762 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
May 8 00:37:03.764769 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
May 8 00:37:03.764775 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
May 8 00:37:03.764780 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
May 8 00:37:03.764786 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
May 8 00:37:03.764792 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
May 8 00:37:03.764797 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
May 8 00:37:03.764803 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
May 8 00:37:03.764809 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
May 8 00:37:03.764814 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
May 8 00:37:03.764822 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
May 8 00:37:03.764827 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
May 8 00:37:03.764833 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
May 8 00:37:03.764839 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
May 8 00:37:03.764844 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
May 8 00:37:03.764850 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
May 8 00:37:03.764855 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
May 8 00:37:03.764862 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
May 8 00:37:03.764868 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
May 8 00:37:03.764874 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
May 8 00:37:03.764881 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
May 8 00:37:03.764886 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
May 8 00:37:03.764892 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
May 8 00:37:03.764898 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
May 8 00:37:03.764903 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
May 8 00:37:03.764909 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
May 8 00:37:03.764915 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
May 8 00:37:03.764921 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
May 8 00:37:03.764926 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
May 8 00:37:03.764934 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
May 8 00:37:03.764939 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 8 00:37:03.764945 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
May 8 00:37:03.764951 kernel: TSC deadline timer available
May 8 00:37:03.764957 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
May 8 00:37:03.764963 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
May 8 00:37:03.764968 kernel: Booting paravirtualized kernel on VMware hypervisor
May 8 00:37:03.764974 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 8 00:37:03.764980 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
May 8 00:37:03.764987 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144
May 8 00:37:03.764994 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152
May 8 00:37:03.765000 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
May 8 00:37:03.765006 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
May 8 00:37:03.765011 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
May 8 00:37:03.765017 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
May 8 00:37:03.765023 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
May 8 00:37:03.765036 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
May 8 00:37:03.765043 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
May 8 00:37:03.765050 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
May 8 00:37:03.765058 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
May 8 00:37:03.765066 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
May 8 00:37:03.765073 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
May 8 00:37:03.765078 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
May 8 00:37:03.765084 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
May 8 00:37:03.765092 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
May 8 00:37:03.765102 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
May 8 00:37:03.765112 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
May 8 00:37:03.765121 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 00:37:03.765128 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:37:03.765135 kernel: random: crng init done
May 8 00:37:03.765143 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
May 8 00:37:03.765149 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
May 8 00:37:03.765155 kernel: printk: log_buf_len min size: 262144 bytes
May 8 00:37:03.765161 kernel: printk: log_buf_len: 1048576 bytes
May 8 00:37:03.765168 kernel: printk: early log buf free: 239648(91%)
May 8 00:37:03.765180 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:37:03.765192 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 8 00:37:03.765204 kernel: Fallback order for Node 0: 0
May 8 00:37:03.765210 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
May 8 00:37:03.765217 kernel: Policy zone: DMA32
May 8 00:37:03.765225 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:37:03.765234 kernel: Memory: 1936400K/2096628K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42856K init, 2336K bss, 159968K reserved, 0K cma-reserved)
May 8 00:37:03.765242 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
May 8 00:37:03.765248 kernel: ftrace: allocating 37944 entries in 149 pages
May 8 00:37:03.765255 kernel: ftrace: allocated 149 pages with 4 groups
May 8 00:37:03.765266 kernel: Dynamic Preempt: voluntary
May 8 00:37:03.765275 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:37:03.765282 kernel: rcu: RCU event tracing is enabled.
May 8 00:37:03.765289 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
May 8 00:37:03.765301 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:37:03.765312 kernel: Rude variant of Tasks RCU enabled.
May 8 00:37:03.765323 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:37:03.765333 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:37:03.765345 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
May 8 00:37:03.765354 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
May 8 00:37:03.765360 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
May 8 00:37:03.765366 kernel: Console: colour VGA+ 80x25
May 8 00:37:03.765373 kernel: printk: console [tty0] enabled
May 8 00:37:03.765379 kernel: printk: console [ttyS0] enabled
May 8 00:37:03.765387 kernel: ACPI: Core revision 20230628
May 8 00:37:03.765394 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
May 8 00:37:03.765400 kernel: APIC: Switch to symmetric I/O mode setup
May 8 00:37:03.765406 kernel: x2apic enabled
May 8 00:37:03.765412 kernel: APIC: Switched APIC routing to: physical x2apic
May 8 00:37:03.765422 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 8 00:37:03.765428 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
May 8 00:37:03.765435 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
May 8 00:37:03.765441 kernel: Disabled fast string operations
May 8 00:37:03.765451 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 8 00:37:03.765461 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 8 00:37:03.765471 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 8 00:37:03.765482 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
May 8 00:37:03.765489 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
May 8 00:37:03.765495 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
May 8 00:37:03.765501 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 8 00:37:03.765507 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
May 8 00:37:03.765513 kernel: RETBleed: Mitigation: Enhanced IBRS
May 8 00:37:03.765522 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 8 00:37:03.765528 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 8 00:37:03.765534 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 8 00:37:03.765540 kernel: SRBDS: Unknown: Dependent on hypervisor status
May 8 00:37:03.765547 kernel: GDS: Unknown: Dependent on hypervisor status
May 8 00:37:03.765553 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 8 00:37:03.765559 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 8 00:37:03.765565 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 8 00:37:03.765571 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 8 00:37:03.765578 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 8 00:37:03.765585 kernel: Freeing SMP alternatives memory: 32K
May 8 00:37:03.765591 kernel: pid_max: default: 131072 minimum: 1024
May 8 00:37:03.766717 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:37:03.766735 kernel: landlock: Up and running.
May 8 00:37:03.766742 kernel: SELinux: Initializing.
May 8 00:37:03.766749 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 8 00:37:03.766755 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 8 00:37:03.766762 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
May 8 00:37:03.766777 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
May 8 00:37:03.766789 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
May 8 00:37:03.766797 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
May 8 00:37:03.766803 kernel: Performance Events: Skylake events, core PMU driver.
May 8 00:37:03.766809 kernel: core: CPUID marked event: 'cpu cycles' unavailable
May 8 00:37:03.766816 kernel: core: CPUID marked event: 'instructions' unavailable
May 8 00:37:03.766822 kernel: core: CPUID marked event: 'bus cycles' unavailable
May 8 00:37:03.766828 kernel: core: CPUID marked event: 'cache references' unavailable
May 8 00:37:03.766835 kernel: core: CPUID marked event: 'cache misses' unavailable
May 8 00:37:03.766841 kernel: core: CPUID marked event: 'branch instructions' unavailable
May 8 00:37:03.766848 kernel: core: CPUID marked event: 'branch misses' unavailable
May 8 00:37:03.766854 kernel: ... version: 1
May 8 00:37:03.766860 kernel: ... bit width: 48
May 8 00:37:03.766870 kernel: ... generic registers: 4
May 8 00:37:03.766881 kernel: ... value mask: 0000ffffffffffff
May 8 00:37:03.766891 kernel: ... max period: 000000007fffffff
May 8 00:37:03.766900 kernel: ... fixed-purpose events: 0
May 8 00:37:03.766912 kernel: ... event mask: 000000000000000f
May 8 00:37:03.766924 kernel: signal: max sigframe size: 1776
May 8 00:37:03.766933 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:37:03.766943 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:37:03.766953 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 8 00:37:03.766963 kernel: smp: Bringing up secondary CPUs ...
May 8 00:37:03.766972 kernel: smpboot: x86: Booting SMP configuration:
May 8 00:37:03.766982 kernel: .... node #0, CPUs: #1
May 8 00:37:03.766992 kernel: Disabled fast string operations
May 8 00:37:03.767004 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1
May 8 00:37:03.767014 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
May 8 00:37:03.767024 kernel: smp: Brought up 1 node, 2 CPUs
May 8 00:37:03.767034 kernel: smpboot: Max logical packages: 128
May 8 00:37:03.767044 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
May 8 00:37:03.767053 kernel: devtmpfs: initialized
May 8 00:37:03.767065 kernel: x86/mm: Memory block size: 128MB
May 8 00:37:03.767075 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
May 8 00:37:03.767085 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:37:03.767096 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
May 8 00:37:03.767109 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:37:03.767118 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:37:03.767127 kernel: audit: initializing netlink subsys (disabled)
May 8 00:37:03.767137 kernel: audit: type=2000 audit(1746664621.068:1): state=initialized audit_enabled=0 res=1
May 8 00:37:03.767147 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:37:03.767158 kernel: thermal_sys: Registered thermal governor 'user_space'
May 8 00:37:03.767170 kernel: cpuidle: using governor menu
May 8 00:37:03.767181 kernel: Simple Boot Flag at 0x36 set to 0x80
May 8 00:37:03.767193 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:37:03.767207 kernel: dca service started, version 1.12.1
May 8 00:37:03.767219 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
May 8 00:37:03.767230 kernel: PCI: Using configuration type 1 for base access
May 8 00:37:03.767240 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 8 00:37:03.767253 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:37:03.767264 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 8 00:37:03.767276 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:37:03.767288 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 8 00:37:03.767299 kernel: ACPI: Added _OSI(Module Device)
May 8 00:37:03.767313 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:37:03.767324 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:37:03.767332 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:37:03.767341 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:37:03.767351 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
May 8 00:37:03.767364 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 8 00:37:03.767375 kernel: ACPI: Interpreter enabled
May 8 00:37:03.767387 kernel: ACPI: PM: (supports S0 S1 S5)
May 8 00:37:03.767396 kernel: ACPI: Using IOAPIC for interrupt routing
May 8 00:37:03.767407 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 8 00:37:03.767419 kernel: PCI: Using E820 reservations for host bridge windows
May 8 00:37:03.767429 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
May 8 00:37:03.767440 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
May 8 00:37:03.767571 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:37:03.767650 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
May 8 00:37:03.767707 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
May 8 00:37:03.767720 kernel: PCI host bridge to bus 0000:00
May 8 00:37:03.767792 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 8 00:37:03.767849 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
May 8 00:37:03.767913 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 8 00:37:03.767962 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 8 00:37:03.768010 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
May 8 00:37:03.768057 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
May 8 00:37:03.768124 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000
May 8 00:37:03.768186 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400
May 8 00:37:03.768243 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100
May 8 00:37:03.768302 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a
May 8 00:37:03.768367 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f]
May 8 00:37:03.768425 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 8 00:37:03.768482 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 8 00:37:03.768538 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 8 00:37:03.768591 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 8 00:37:03.770707 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000
May 8 00:37:03.770769 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
May 8 00:37:03.770825 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
May 8 00:37:03.770883 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000
May 8 00:37:03.770942 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf]
May 8 00:37:03.771028 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit]
May 8 00:37:03.771088 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000
May 8 00:37:03.771142 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f]
May 8 00:37:03.771201 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref]
May 8 00:37:03.771254 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff]
May 8 00:37:03.771311 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
May 8 00:37:03.771380 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 8 00:37:03.771457 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
May 8 00:37:03.771517 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.771573 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
May 8 00:37:03.771643 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.771701 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
May 8 00:37:03.771763 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.771820 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
May 8 00:37:03.771878 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.771932 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
May 8 00:37:03.771997 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.772053 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
May 8 00:37:03.772150 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.772210 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
May 8 00:37:03.772283 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.772370 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
May 8 00:37:03.772440 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.772496 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
May 8 00:37:03.772559 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.772713 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
May 8 00:37:03.772777 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.772833 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
May 8 00:37:03.772893 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.772957 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
May 8 00:37:03.773017 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.773072 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
May 8 00:37:03.773129 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.773183 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
May 8 00:37:03.773241 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.773299 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
May 8 00:37:03.773355 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.773409 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
May 8 00:37:03.773466 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.773521 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
May 8 00:37:03.773578 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.773652 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
May 8 00:37:03.773723 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.773791 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
May 8 00:37:03.773883 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.773970 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
May 8 00:37:03.774053 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.774114 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
May 8 00:37:03.774177 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.774232 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
May 8 00:37:03.774291 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.774353 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
May 8 00:37:03.774416 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.774483 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
May 8 00:37:03.774549 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.774685 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
May 8 00:37:03.774747 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.774803 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
May 8 00:37:03.774881 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.774949 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
May 8 00:37:03.775024 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.775091 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
May 8 00:37:03.775153 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.775214 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
May 8 00:37:03.775277 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.775336 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
May 8 00:37:03.775403 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.775467 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
May 8 00:37:03.775526 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.775580 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
May 8 00:37:03.775736 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
May 8 00:37:03.775800 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
May 8 00:37:03.775859 kernel: pci_bus 0000:01: extended config space not accessible
May 8 00:37:03.775926 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
May 8 00:37:03.775989 kernel: pci_bus 0000:02: extended config space not accessible
May 8 00:37:03.776003 kernel: acpiphp: Slot [32] registered
May 8 00:37:03.776012 kernel: acpiphp: Slot [33] registered
May 8 00:37:03.776018 kernel: acpiphp: Slot [34] registered
May 8 00:37:03.776024 kernel: acpiphp: Slot [35] registered
May 8 00:37:03.776030 kernel: acpiphp: Slot [36] registered
May 8 00:37:03.776036 kernel: acpiphp: Slot [37] registered
May 8 00:37:03.776045 kernel: acpiphp: Slot [38] registered
May 8 00:37:03.776052 kernel: acpiphp: Slot [39] registered
May 8 00:37:03.776062 kernel: acpiphp: Slot [40] registered
May 8 00:37:03.776069 kernel: acpiphp: Slot [41] registered
May 8 00:37:03.776075 kernel: acpiphp: Slot [42] registered
May 8 00:37:03.776081 kernel: acpiphp: Slot [43] registered
May 8 00:37:03.776087 kernel: acpiphp: Slot [44] registered
May 8 00:37:03.776093 kernel: acpiphp: Slot [45] registered
May 8 00:37:03.776100 kernel: acpiphp: Slot [46] registered
May 8 00:37:03.776110 kernel: acpiphp: Slot [47] registered
May 8 00:37:03.776118 kernel: acpiphp: Slot [48] registered
May 8 00:37:03.776125 kernel: acpiphp: Slot [49] registered
May 8 00:37:03.776130 kernel: acpiphp: Slot [50] registered
May 8 00:37:03.776136 kernel: acpiphp: Slot [51] registered
May 8 00:37:03.776142 kernel: acpiphp: Slot [52] registered
May 8 00:37:03.776149 kernel: acpiphp: Slot [53] registered
May 8 00:37:03.776160 kernel: acpiphp: Slot [54] registered
May 8 00:37:03.776166 kernel: acpiphp: Slot [55] registered
May 8 00:37:03.776172 kernel: acpiphp: Slot [56] registered
May 8 00:37:03.776180 kernel: acpiphp: Slot [57] registered
May 8 00:37:03.776186 kernel: acpiphp: Slot [58] registered
May 8 00:37:03.776196 kernel: acpiphp: Slot [59] registered
May 8 00:37:03.776205 kernel: acpiphp: Slot [60] registered
May 8 00:37:03.776213 kernel: acpiphp: Slot [61] registered
May 8 00:37:03.776219 kernel: acpiphp: Slot [62] registered
May 8 00:37:03.776225 kernel: acpiphp: Slot [63] registered
May 8 00:37:03.776301 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
May 8 00:37:03.776369 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
May 8 00:37:03.776439 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
May 8 00:37:03.776493 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
May 8 00:37:03.776545 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
May 8 00:37:03.776638 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode)
May 8 00:37:03.776704 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
May 8 00:37:03.776760 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
May 8 00:37:03.776818 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode)
May 8 00:37:03.776891 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700
May 8 00:37:03.776953 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007]
May 8 00:37:03.777026 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit]
May 8 00:37:03.777088 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
May 8 00:37:03.777153 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
May 8 00:37:03.777208 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
May 8 00:37:03.777266 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
May 8 00:37:03.777320 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
May 8 00:37:03.777384 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
May 8 00:37:03.777440 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
May 8 00:37:03.777518 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
May 8 00:37:03.777594 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
May 8 00:37:03.777798 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
May 8 00:37:03.777855 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
May 8 00:37:03.777909 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
May 8 00:37:03.777966 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
May 8 00:37:03.778019 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
May 8 00:37:03.778075 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
May 8 00:37:03.778129 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
May 8 00:37:03.778182 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
May 8 00:37:03.778237 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
May 8 00:37:03.778290 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
May 8 00:37:03.778343 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
May 8 00:37:03.778402 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
May 8 00:37:03.778455 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
May 8 00:37:03.778510 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
May 8 00:37:03.778566 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
May 8 00:37:03.778664 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
May 8 00:37:03.778926 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
May 8 00:37:03.779037 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
May 8 00:37:03.779093 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
May 8 00:37:03.779154 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
May 8 00:37:03.779222 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000
May 8 00:37:03.779278 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
May 8 00:37:03.779334 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
May 8 00:37:03.779408 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
May 8 00:37:03.779464 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f]
May 8 00:37:03.779518 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
May 8 00:37:03.779573 kernel: pci 0000:0b:00.0: supports D1 D2
May 8 00:37:03.781726 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
May 8 00:37:03.781831 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
May 8 00:37:03.781892 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
May 8 00:37:03.781949 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
May 8 00:37:03.782010 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
May 8 00:37:03.782067 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
May 8 00:37:03.782128 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
May 8 00:37:03.782183 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
May 8 00:37:03.782238 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
May 8 00:37:03.782293 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
May 8 00:37:03.782347 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
May 8 00:37:03.782404 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
May 8 00:37:03.782460 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
May 8 00:37:03.782521 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
May 8 00:37:03.782575 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
May 8 00:37:03.784078 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
May 8 00:37:03.784146 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
May 8 00:37:03.784203 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
May 8 00:37:03.784258 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
May 8 00:37:03.784320 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
May 8 00:37:03.784381 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
May 8 00:37:03.784435 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
May 8 00:37:03.784490 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
May 8 00:37:03.784542 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
May 8 00:37:03.784595 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
May 8 00:37:03.784793 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
May 8 00:37:03.784847 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
May 8 00:37:03.784903 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
May 8 00:37:03.784985 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
May 8 00:37:03.785079 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
May 8 00:37:03.785134 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
May 8 00:37:03.785188 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
May 8 00:37:03.785244 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
May 8 00:37:03.785296 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
May 8 00:37:03.785349 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
May 8 00:37:03.785407 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
May 8 00:37:03.785463 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
May 8 00:37:03.785517 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
May 8 00:37:03.785569 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
May 8 00:37:03.785630 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
May 8 00:37:03.785686 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
May 8 00:37:03.785739 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
May 8 00:37:03.785795 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
0xe6e00000-0xe6efffff 64bit pref] May 8 00:37:03.785850 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 8 00:37:03.785903 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 8 00:37:03.785956 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 8 00:37:03.786012 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 8 00:37:03.786066 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 8 00:37:03.786119 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 8 00:37:03.786173 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 8 00:37:03.786229 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 8 00:37:03.786282 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 8 00:37:03.786338 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 8 00:37:03.786391 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 8 00:37:03.786443 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 8 00:37:03.786498 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 8 00:37:03.786551 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 8 00:37:03.786613 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 8 00:37:03.786673 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 8 00:37:03.786728 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 8 00:37:03.786782 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 8 00:37:03.786845 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 8 00:37:03.786898 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 8 00:37:03.786953 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 8 00:37:03.787005 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 8 00:37:03.787057 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 8 00:37:03.787115 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 8 00:37:03.787168 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 8 00:37:03.787221 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 8 00:37:03.787276 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 8 00:37:03.787329 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 8 00:37:03.787386 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 8 00:37:03.787441 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 8 00:37:03.787494 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 8 00:37:03.787549 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 8 00:37:03.787678 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 8 00:37:03.787754 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 8 00:37:03.787808 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 8 00:37:03.787863 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 8 00:37:03.787922 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 8 00:37:03.787995 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 8 00:37:03.788005 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 May 8 00:37:03.788015 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 May 8 00:37:03.788021 kernel: ACPI: PCI: Interrupt 
link LNKB disabled May 8 00:37:03.788027 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 8 00:37:03.788033 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 May 8 00:37:03.788040 kernel: iommu: Default domain type: Translated May 8 00:37:03.788046 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 8 00:37:03.788052 kernel: PCI: Using ACPI for IRQ routing May 8 00:37:03.788058 kernel: PCI: pci_cache_line_size set to 64 bytes May 8 00:37:03.788064 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] May 8 00:37:03.788072 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] May 8 00:37:03.788127 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device May 8 00:37:03.788183 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible May 8 00:37:03.788236 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 8 00:37:03.788246 kernel: vgaarb: loaded May 8 00:37:03.788252 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 May 8 00:37:03.788259 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter May 8 00:37:03.788265 kernel: clocksource: Switched to clocksource tsc-early May 8 00:37:03.788271 kernel: VFS: Disk quotas dquot_6.6.0 May 8 00:37:03.789628 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 00:37:03.789638 kernel: pnp: PnP ACPI init May 8 00:37:03.789712 kernel: system 00:00: [io 0x1000-0x103f] has been reserved May 8 00:37:03.789766 kernel: system 00:00: [io 0x1040-0x104f] has been reserved May 8 00:37:03.789815 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved May 8 00:37:03.789868 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved May 8 00:37:03.789919 kernel: pnp 00:06: [dma 2] May 8 00:37:03.789976 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved May 8 00:37:03.790026 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved May 8 00:37:03.790073 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved May 8 00:37:03.790082 kernel: pnp: PnP ACPI: found 8 devices May 8 00:37:03.790088 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 8 00:37:03.790095 kernel: NET: Registered PF_INET protocol family May 8 00:37:03.790101 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 00:37:03.790109 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 8 00:37:03.790116 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 00:37:03.790122 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 8 00:37:03.790128 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 8 00:37:03.790134 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 8 00:37:03.790140 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 8 00:37:03.790147 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 8 00:37:03.790153 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 00:37:03.790159 kernel: NET: Registered PF_XDP protocol family May 8 00:37:03.790219 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 8 00:37:03.790277 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 8 00:37:03.790333 kernel: pci 
0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 8 00:37:03.790389 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 8 00:37:03.790444 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 8 00:37:03.790499 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 May 8 00:37:03.790568 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 May 8 00:37:03.790668 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 May 8 00:37:03.790734 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 May 8 00:37:03.790790 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 May 8 00:37:03.790845 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 May 8 00:37:03.790901 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 May 8 00:37:03.790960 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 May 8 00:37:03.791017 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 May 8 00:37:03.791072 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 May 8 00:37:03.791127 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 May 8 00:37:03.791181 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 May 8 00:37:03.791236 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 May 8 00:37:03.791293 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 May 8 00:37:03.791348 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 May 8 00:37:03.791402 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 May 8 00:37:03.791456 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 May 8 00:37:03.791510 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 May 8 00:37:03.791565 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:37:03.791632 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:37:03.791687 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 8 00:37:03.791739 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.791793 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 8 00:37:03.791845 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.791898 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 8 00:37:03.791951 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.792006 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 8 00:37:03.792058 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.792112 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 8 00:37:03.792164 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.792216 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 8 00:37:03.792268 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.792321 kernel: pci 
0000:00:16.4: BAR 13: no space for [io size 0x1000] May 8 00:37:03.792385 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.792440 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 8 00:37:03.792496 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.792549 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 8 00:37:03.792617 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.792693 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 8 00:37:03.792748 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.792802 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 8 00:37:03.792856 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.792909 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 8 00:37:03.792965 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.793024 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 8 00:37:03.793084 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.793138 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 8 00:37:03.793191 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.793245 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 8 00:37:03.793299 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.793377 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 8 00:37:03.793439 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.793493 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 8 00:37:03.793545 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.793611 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 8 00:37:03.793671 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.793725 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 8 00:37:03.793777 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.793831 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 8 00:37:03.793886 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.793939 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 8 00:37:03.793991 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.794043 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 8 00:37:03.794096 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.794148 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 8 00:37:03.794199 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.794252 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 8 00:37:03.794304 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.794359 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 8 00:37:03.794412 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.794465 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 8 00:37:03.794517 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.794570 kernel: pci 0000:00:18.2: BAR 13: no space for 
[io size 0x1000] May 8 00:37:03.794632 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.794685 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 8 00:37:03.794738 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.794791 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 8 00:37:03.794845 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.794899 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 8 00:37:03.794951 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.795004 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 8 00:37:03.795057 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.795109 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 8 00:37:03.795162 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.795214 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 8 00:37:03.795266 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.795318 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 8 00:37:03.795377 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.795430 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 8 00:37:03.795482 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.795535 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 8 00:37:03.795587 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.795779 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 8 00:37:03.795833 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.795885 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 8 00:37:03.795937 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.795993 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 8 00:37:03.796044 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.796096 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 8 00:37:03.796147 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.796199 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 8 00:37:03.796251 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.796304 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 8 00:37:03.796356 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.796410 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 8 00:37:03.796465 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] May 8 00:37:03.796519 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 8 00:37:03.796571 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 8 00:37:03.796636 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 8 00:37:03.796695 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] May 8 00:37:03.796750 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 8 00:37:03.796803 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 8 00:37:03.796855 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 8 00:37:03.796908 kernel: pci 0000:00:15.0: bridge 
window [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:37:03.796965 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 8 00:37:03.797019 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 8 00:37:03.797071 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 8 00:37:03.797123 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 8 00:37:03.797176 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 8 00:37:03.797230 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 8 00:37:03.797282 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 8 00:37:03.797333 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 8 00:37:03.797386 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 8 00:37:03.797441 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 8 00:37:03.797493 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 8 00:37:03.797546 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 8 00:37:03.797631 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 8 00:37:03.797686 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 8 00:37:03.797742 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 8 00:37:03.797795 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 8 00:37:03.797847 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 8 00:37:03.797899 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 8 00:37:03.797951 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 8 00:37:03.798003 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 8 00:37:03.798055 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 8 00:37:03.798107 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 8 00:37:03.798160 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 8 00:37:03.798215 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] May 8 00:37:03.798272 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 8 00:37:03.798325 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 8 00:37:03.798384 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 8 00:37:03.798437 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:37:03.798491 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 8 00:37:03.798545 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 8 00:37:03.798604 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 8 00:37:03.798664 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 8 00:37:03.798723 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 8 00:37:03.798780 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 8 00:37:03.798832 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 8 00:37:03.798885 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 8 00:37:03.798938 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 8 00:37:03.798991 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 8 00:37:03.799042 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 8 00:37:03.799095 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 8 00:37:03.799147 kernel: pci 0000:00:16.4: 
bridge window [mem 0xfc400000-0xfc4fffff] May 8 00:37:03.799200 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 8 00:37:03.799253 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 8 00:37:03.799308 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 8 00:37:03.799375 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 8 00:37:03.799444 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 8 00:37:03.799498 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 8 00:37:03.799551 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 8 00:37:03.799658 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 8 00:37:03.799723 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 8 00:37:03.799792 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 8 00:37:03.799849 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 8 00:37:03.799905 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 8 00:37:03.799958 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 8 00:37:03.800021 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 8 00:37:03.800079 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 8 00:37:03.800132 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 8 00:37:03.800184 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 8 00:37:03.800236 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 8 00:37:03.800302 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 8 00:37:03.800356 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 8 00:37:03.800408 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 8 00:37:03.800464 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 8 00:37:03.800517 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 8 00:37:03.800570 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 8 00:37:03.800700 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 8 00:37:03.800757 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 8 00:37:03.800815 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 8 00:37:03.800877 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 8 00:37:03.800931 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 8 00:37:03.800984 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 8 00:37:03.801040 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 8 00:37:03.801093 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 8 00:37:03.801146 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 8 00:37:03.801198 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 8 00:37:03.801251 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 8 00:37:03.801303 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 8 00:37:03.801355 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 8 00:37:03.801411 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 8 00:37:03.801484 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 8 00:37:03.801559 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 8 00:37:03.803723 kernel: pci 
0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 8 00:37:03.803822 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 8 00:37:03.803907 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 8 00:37:03.803990 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 8 00:37:03.804069 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 8 00:37:03.804151 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 8 00:37:03.804232 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 8 00:37:03.804312 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 8 00:37:03.804397 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 8 00:37:03.804483 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 8 00:37:03.804562 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 8 00:37:03.804719 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 8 00:37:03.804805 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 8 00:37:03.804887 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 8 00:37:03.804970 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 8 00:37:03.805053 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 8 00:37:03.805133 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 8 00:37:03.805218 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 8 00:37:03.805299 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 8 00:37:03.805392 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 8 00:37:03.805474 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 8 00:37:03.805557 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 8 00:37:03.805869 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 8 00:37:03.805930 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] May 8 00:37:03.805979 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] May 8 00:37:03.806026 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] May 8 00:37:03.806443 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] May 8 00:37:03.806505 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] May 8 00:37:03.806567 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] May 8 00:37:03.806666 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] May 8 00:37:03.807058 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] May 8 00:37:03.807115 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] May 8 00:37:03.807164 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] May 8 00:37:03.807213 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] May 8 00:37:03.807285 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] May 8 00:37:03.807348 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] May 8 00:37:03.807404 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] May 8 00:37:03.807453 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] May 8 00:37:03.807501 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:37:03.807557 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] May 8 00:37:03.807626 kernel: pci_bus 0000:04: resource 1 [mem 
0xfd100000-0xfd1fffff] May 8 00:37:03.807683 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] May 8 00:37:03.807737 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] May 8 00:37:03.807805 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] May 8 00:37:03.807855 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] May 8 00:37:03.807909 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] May 8 00:37:03.807958 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] May 8 00:37:03.808011 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] May 8 00:37:03.808063 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] May 8 00:37:03.808116 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] May 8 00:37:03.808165 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] May 8 00:37:03.808218 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] May 8 00:37:03.808267 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] May 8 00:37:03.808323 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] May 8 00:37:03.808398 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] May 8 00:37:03.808456 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] May 8 00:37:03.808505 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] May 8 00:37:03.808553 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:37:03.809030 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] May 8 00:37:03.809090 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] May 8 00:37:03.809145 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] May 8 00:37:03.809203 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] May 8 00:37:03.810546 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] May 8 00:37:03.810627 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] May 8 00:37:03.810689 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] May 8 00:37:03.810740 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] May 8 00:37:03.810793 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] May 8 00:37:03.810847 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] May 8 00:37:03.810900 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] May 8 00:37:03.810950 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] May 8 00:37:03.811003 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] May 8 00:37:03.811052 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] May 8 00:37:03.811104 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] May 8 00:37:03.811156 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] May 8 00:37:03.811209 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] May 8 00:37:03.811258 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] May 8 00:37:03.811306 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] May 8 00:37:03.811360 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] May 8 00:37:03.811410 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] May 8 00:37:03.811458 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] May 8 00:37:03.811518 kernel: pci_bus 
0000:15: resource 0 [io 0xe000-0xefff] May 8 00:37:03.811568 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] May 8 00:37:03.811663 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] May 8 00:37:03.811716 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] May 8 00:37:03.811770 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] May 8 00:37:03.811824 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] May 8 00:37:03.811876 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] May 8 00:37:03.811930 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] May 8 00:37:03.811979 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] May 8 00:37:03.812050 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] May 8 00:37:03.812100 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] May 8 00:37:03.812154 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] May 8 00:37:03.812209 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] May 8 00:37:03.812266 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] May 8 00:37:03.812316 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] May 8 00:37:03.812365 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] May 8 00:37:03.812419 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] May 8 00:37:03.812468 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] May 8 00:37:03.812519 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] May 8 00:37:03.812575 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] May 8 00:37:03.812648 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] May 8 00:37:03.812704 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] May 8 00:37:03.812754 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] May 8 00:37:03.812809 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] May 8 00:37:03.812863 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] May 8 00:37:03.812917 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] May 8 00:37:03.812979 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] May 8 00:37:03.813034 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] May 8 00:37:03.813085 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] May 8 00:37:03.813140 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] May 8 00:37:03.813189 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] May 8 00:37:03.813253 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 8 00:37:03.813263 kernel: PCI: CLS 32 bytes, default 64 May 8 00:37:03.813270 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 8 00:37:03.813277 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 8 00:37:03.813284 kernel: clocksource: Switched to clocksource tsc May 8 00:37:03.813291 kernel: Initialise system trusted keyrings May 8 00:37:03.813299 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 8 00:37:03.813306 kernel: Key type asymmetric registered May 8 00:37:03.813314 kernel: Asymmetric key parser 'x509' registered May 8 00:37:03.813320 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 251) May 8 00:37:03.813327 kernel: io scheduler mq-deadline registered May 8 00:37:03.813334 kernel: io scheduler kyber registered May 8 00:37:03.813340 kernel: io scheduler bfq registered May 8 00:37:03.813402 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 May 8 00:37:03.813458 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.813514 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 May 8 00:37:03.813586 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.813695 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 May 8 00:37:03.813766 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.813822 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 May 8 00:37:03.813880 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.813936 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 May 8 00:37:03.813990 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.814048 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 May 8 00:37:03.814104 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.814161 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 May 8 00:37:03.814215 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.814270 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 May 8 00:37:03.814327 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.814382 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 May 8 00:37:03.814436 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.814491 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 May 8 00:37:03.814544 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.814925 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 May 8 00:37:03.814990 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.815049 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 May 8 00:37:03.815103 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.815158 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 May 8 00:37:03.815212 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.815265 kernel: pcieport 0000:00:16.5: PME: Signaling 
with IRQ 37 May 8 00:37:03.815321 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.815376 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 May 8 00:37:03.815883 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.815955 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 May 8 00:37:03.816013 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.816073 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 May 8 00:37:03.816128 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.816182 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 May 8 00:37:03.816235 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.816290 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 May 8 00:37:03.816344 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.816405 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 May 8 00:37:03.816460 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.816518 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 May 8 00:37:03.816573 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.816709 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 May 8 00:37:03.816765 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.818690 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 May 8 00:37:03.818753 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.818811 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 May 8 00:37:03.818866 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.818922 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 May 8 00:37:03.818975 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.819030 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 May 8 00:37:03.819095 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.819152 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 May 8 00:37:03.819206 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.819265 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 May 8 00:37:03.819319 
kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.819394 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 May 8 00:37:03.819449 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.819503 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 May 8 00:37:03.819557 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.819636 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 May 8 00:37:03.819693 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.819750 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 May 8 00:37:03.819805 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.819815 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 8 00:37:03.819822 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 00:37:03.819828 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 8 00:37:03.819835 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 May 8 00:37:03.819841 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 8 00:37:03.819850 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 8 00:37:03.819905 kernel: rtc_cmos 00:01: registered as rtc0 May 8 00:37:03.819955 kernel: rtc_cmos 00:01: setting system clock to 2025-05-08T00:37:03 UTC (1746664623) May 8 00:37:03.819965 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 8 00:37:03.820011 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram May 8 00:37:03.820020 kernel: intel_pstate: CPU model not supported May 8 00:37:03.820026 kernel: NET: Registered PF_INET6 protocol family May 8 00:37:03.820033 kernel: Segment Routing with IPv6 May 8 00:37:03.820041 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:37:03.820048 kernel: NET: Registered PF_PACKET protocol family May 8 00:37:03.820055 kernel: Key type dns_resolver registered May 8 00:37:03.820062 kernel: IPI shorthand broadcast: enabled May 8 00:37:03.820068 kernel: sched_clock: Marking stable (953004015, 227844060)->(1244553610, -63705535) May 8 00:37:03.820074 kernel: registered taskstats version 1 May 8 00:37:03.820081 kernel: Loading compiled-in X.509 certificates May 8 00:37:03.820087 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 75e4e434c57439d3f2eaf7797bbbcdd698dafd0e' May 8 00:37:03.820094 kernel: Key type .fscrypt registered May 8 00:37:03.820102 kernel: Key type fscrypt-provisioning registered May 8 00:37:03.820108 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 8 00:37:03.820115 kernel: ima: Allocated hash algorithm: sha1 May 8 00:37:03.820121 kernel: ima: No architecture policies found May 8 00:37:03.820128 kernel: clk: Disabling unused clocks May 8 00:37:03.820134 kernel: Freeing unused kernel image (initmem) memory: 42856K May 8 00:37:03.820141 kernel: Write protecting the kernel read-only data: 36864k May 8 00:37:03.820148 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 8 00:37:03.820155 kernel: Run /init as init process May 8 00:37:03.820162 kernel: with arguments: May 8 00:37:03.820168 kernel: /init May 8 00:37:03.820175 kernel: with environment: May 8 00:37:03.820181 kernel: HOME=/ May 8 00:37:03.820187 kernel: TERM=linux May 8 00:37:03.820194 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:37:03.820202 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:37:03.820211 systemd[1]: Detected virtualization vmware. May 8 00:37:03.820219 systemd[1]: Detected architecture x86-64. May 8 00:37:03.820226 systemd[1]: Running in initrd. May 8 00:37:03.820232 systemd[1]: No hostname configured, using default hostname. May 8 00:37:03.820239 systemd[1]: Hostname set to <localhost>. May 8 00:37:03.820246 systemd[1]: Initializing machine ID from random generator. May 8 00:37:03.820252 systemd[1]: Queued start job for default target initrd.target. May 8 00:37:03.820259 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:37:03.820266 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:37:03.820275 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 8 00:37:03.820282 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:37:03.820288 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 8 00:37:03.820295 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 8 00:37:03.820304 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 8 00:37:03.820311 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 8 00:37:03.820319 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:37:03.820326 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:37:03.820333 systemd[1]: Reached target paths.target - Path Units. May 8 00:37:03.820340 systemd[1]: Reached target slices.target - Slice Units. May 8 00:37:03.820346 systemd[1]: Reached target swap.target - Swaps. May 8 00:37:03.820353 systemd[1]: Reached target timers.target - Timer Units. May 8 00:37:03.820360 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:37:03.820366 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:37:03.820373 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 8 00:37:03.820381 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
May 8 00:37:03.820388 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:37:03.820395 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:37:03.820401 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:37:03.820408 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:37:03.820415 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 8 00:37:03.820422 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:37:03.820429 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 8 00:37:03.820437 systemd[1]: Starting systemd-fsck-usr.service... May 8 00:37:03.820444 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:37:03.820450 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:37:03.820457 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:37:03.820464 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 8 00:37:03.820471 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:37:03.820490 systemd-journald[215]: Collecting audit messages is disabled. May 8 00:37:03.820508 systemd[1]: Finished systemd-fsck-usr.service. May 8 00:37:03.820515 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:37:03.820523 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 00:37:03.820531 kernel: Bridge firewalling registered May 8 00:37:03.820537 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:37:03.820544 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:37:03.820551 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:37:03.820558 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:37:03.820565 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:37:03.820572 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:37:03.820581 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:37:03.820587 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 8 00:37:03.820595 systemd-journald[215]: Journal started May 8 00:37:03.823416 systemd-journald[215]: Runtime Journal (/run/log/journal/a2a57cb6683c4be9873d909dcb0fbeed) is 4.8M, max 38.6M, 33.8M free. May 8 00:37:03.755333 systemd-modules-load[216]: Inserted module 'overlay' May 8 00:37:03.782627 systemd-modules-load[216]: Inserted module 'br_netfilter' May 8 00:37:03.825625 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:37:03.825405 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:37:03.825617 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 8 00:37:03.828670 dracut-cmdline[236]: dracut-dracut-053 May 8 00:37:03.831340 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90 May 8 00:37:03.834038 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:37:03.839132 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:37:03.840424 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:37:03.864419 systemd-resolved[271]: Positive Trust Anchors: May 8 00:37:03.864432 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:37:03.864458 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:37:03.866454 systemd-resolved[271]: Defaulting to hostname 'linux'. May 8 00:37:03.867247 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:37:03.867437 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:37:03.888621 kernel: SCSI subsystem initialized May 8 00:37:03.894616 kernel: Loading iSCSI transport class v2.0-870. May 8 00:37:03.902621 kernel: iscsi: registered transport (tcp) May 8 00:37:03.916624 kernel: iscsi: registered transport (qla4xxx) May 8 00:37:03.916670 kernel: QLogic iSCSI HBA Driver May 8 00:37:03.937556 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 8 00:37:03.940785 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 8 00:37:03.957848 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 00:37:03.957901 kernel: device-mapper: uevent: version 1.0.3 May 8 00:37:03.958991 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 8 00:37:03.994619 kernel: raid6: avx2x4 gen() 42033 MB/s May 8 00:37:04.011617 kernel: raid6: avx2x2 gen() 37431 MB/s May 8 00:37:04.029202 kernel: raid6: avx2x1 gen() 25775 MB/s May 8 00:37:04.029258 kernel: raid6: using algorithm avx2x4 gen() 42033 MB/s May 8 00:37:04.046899 kernel: raid6: .... xor() 15842 MB/s, rmw enabled May 8 00:37:04.046956 kernel: raid6: using avx2x2 recovery algorithm May 8 00:37:04.063622 kernel: xor: automatically using best checksumming function avx May 8 00:37:04.174619 kernel: Btrfs loaded, zoned=no, fsverity=no May 8 00:37:04.182573 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 8 00:37:04.186769 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 8 00:37:04.199742 systemd-udevd[433]: Using default interface naming scheme 'v255'. May 8 00:37:04.202499 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:37:04.209755 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 8 00:37:04.216930 dracut-pre-trigger[438]: rd.md=0: removing MD RAID activation May 8 00:37:04.235295 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:37:04.238724 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:37:04.315709 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:37:04.320757 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 8 00:37:04.330636 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 00:37:04.331298 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:37:04.331722 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:37:04.332144 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:37:04.335757 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 00:37:04.350503 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 00:37:04.390613 kernel: VMware PVSCSI driver - version 1.0.7.0-k May 8 00:37:04.397404 kernel: vmw_pvscsi: using 64bit dma May 8 00:37:04.397446 kernel: vmw_pvscsi: max_id: 16 May 8 00:37:04.397455 kernel: vmw_pvscsi: setting ring_pages to 8 May 8 00:37:04.397463 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI May 8 00:37:04.406613 kernel: vmw_pvscsi: enabling reqCallThreshold May 8 00:37:04.406649 kernel: vmw_pvscsi: driver-based request coalescing enabled May 8 00:37:04.406658 kernel: vmw_pvscsi: using MSI-X May 8 00:37:04.411288 kernel: cryptd: max_cpu_qlen set to 1000 May 8 00:37:04.411324 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 May 8 00:37:04.411352 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 May 8 00:37:04.428200 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 May 8 00:37:04.428298 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 May 8 00:37:04.428375 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps May 8 00:37:04.428448 kernel: AVX2 version of gcm_enc/dec engaged. May 8 00:37:04.415182 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:37:04.419383 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:37:04.419616 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:37:04.419720 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:37:04.419750 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:37:04.423308 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:37:04.430924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:37:04.431779 kernel: AES CTR mode by8 optimization enabled May 8 00:37:04.434727 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 May 8 00:37:04.439648 kernel: libata version 3.00 loaded. 
May 8 00:37:04.442807 kernel: ata_piix 0000:00:07.1: version 2.13 May 8 00:37:04.444207 kernel: scsi host1: ata_piix May 8 00:37:04.444317 kernel: scsi host2: ata_piix May 8 00:37:04.444429 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 May 8 00:37:04.444445 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 May 8 00:37:04.452419 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:37:04.457768 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) May 8 00:37:04.459394 kernel: sd 0:0:0:0: [sda] Write Protect is off May 8 00:37:04.459470 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 May 8 00:37:04.459553 kernel: sd 0:0:0:0: [sda] Cache data unavailable May 8 00:37:04.459640 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through May 8 00:37:04.459707 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:37:04.459720 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 8 00:37:04.461744 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:37:04.475505 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:37:04.613677 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 May 8 00:37:04.619646 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 May 8 00:37:04.645621 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray May 8 00:37:04.658330 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 8 00:37:04.658343 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (485) May 8 00:37:04.658350 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 8 00:37:04.661912 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. May 8 00:37:04.664861 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 8 00:37:04.665609 kernel: BTRFS: device fsid 28014d97-e6d7-4db4-b1d9-76a980e09972 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (477) May 8 00:37:04.668553 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. May 8 00:37:04.670900 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. May 8 00:37:04.671155 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. May 8 00:37:04.675705 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 00:37:04.705631 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:37:04.710164 kernel: GPT:disk_guids don't match. May 8 00:37:04.710213 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 00:37:04.710222 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:37:04.715626 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:37:05.715135 disk-uuid[589]: The operation has completed successfully. May 8 00:37:05.715848 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:37:05.752835 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 00:37:05.752890 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 00:37:05.766695 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
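
The burst of "sda: sda1 sda2 ..." partition rescans and the "GPT:disk_guids don't match" / "Use GNU Parted to correct GPT errors" warnings above are expected first-boot noise: disk-uuid.service (note its description, "Generate new UUID for disk GPT if necessary") rewrites the disk GUID so that clones of the same image do not share identifiers, and the kernel re-reads the partition table after each change until the headers agree. The manual equivalent, shown purely as an illustration of what the service automates, would be something like:

    # Illustration only -- disk-uuid.service handles this on first boot.
    sgdisk --disk-guid=R /dev/sda    # R = assign a freshly generated random GUID
    partprobe /dev/sda               # prompt the kernel to re-read the GPT
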
May 8 00:37:05.768369 sh[610]: Success May 8 00:37:05.776610 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 8 00:37:05.826321 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 00:37:05.827242 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 00:37:05.827499 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 8 00:37:05.848015 kernel: BTRFS info (device dm-0): first mount of filesystem 28014d97-e6d7-4db4-b1d9-76a980e09972 May 8 00:37:05.848068 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 8 00:37:05.848091 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 00:37:05.848105 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 00:37:05.848856 kernel: BTRFS info (device dm-0): using free space tree May 8 00:37:05.857618 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 8 00:37:05.859712 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 00:37:05.874691 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... May 8 00:37:05.875905 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 00:37:05.890751 kernel: BTRFS info (device sda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:37:05.890783 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:37:05.890792 kernel: BTRFS info (device sda6): using free space tree May 8 00:37:05.895607 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:37:05.901536 systemd[1]: mnt-oem.mount: Deactivated successfully. May 8 00:37:05.903674 kernel: BTRFS info (device sda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:37:05.909133 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 8 00:37:05.912692 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 00:37:05.934133 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 8 00:37:05.940792 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 8 00:37:06.000357 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:37:06.003654 ignition[670]: Ignition 2.19.0 May 8 00:37:06.003767 ignition[670]: Stage: fetch-offline May 8 00:37:06.005742 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
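
verity-setup.service is what produces /dev/mapper/usr: dm-verity checks every block read from the USR-A partition against a Merkle tree whose root hash must equal the verity.usrhash=86cfbfcc... value on the kernel command line, and the 'sha256 using implementation "sha256-avx2"' line shows the kernel picked the AVX2-accelerated digest for those checks. A generic command-line equivalent looks like the sketch below; the device paths are placeholders, and Flatcar's initrd actually locates the hash tree within the USR partition itself rather than on a separate device:

    # Generic dm-verity activation sketch; /dev/sdXN paths are placeholders.
    veritysetup open /dev/sdX3 usr /dev/sdX4 \
        86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
    mount -o ro /dev/mapper/usr /sysusr/usr
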
May 8 00:37:06.003790 ignition[670]: no configs at "/usr/lib/ignition/base.d" May 8 00:37:06.003796 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:37:06.003852 ignition[670]: parsed url from cmdline: "" May 8 00:37:06.003854 ignition[670]: no config URL provided May 8 00:37:06.003856 ignition[670]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:37:06.003861 ignition[670]: no config at "/usr/lib/ignition/user.ign" May 8 00:37:06.004284 ignition[670]: config successfully fetched May 8 00:37:06.004301 ignition[670]: parsing config with SHA512: 3ed07f39835fece6e97a36a5f86b387b5a89fb34ad2634218656ceee71d4fe703b5ee88936467004a66b66816b96097bfe65219db2dfd9ae5ee761102e89a5e1 May 8 00:37:06.008266 unknown[670]: fetched base config from "system" May 8 00:37:06.008394 unknown[670]: fetched user config from "vmware" May 8 00:37:06.008821 ignition[670]: fetch-offline: fetch-offline passed May 8 00:37:06.009000 ignition[670]: Ignition finished successfully May 8 00:37:06.009788 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:37:06.019169 systemd-networkd[802]: lo: Link UP May 8 00:37:06.019176 systemd-networkd[802]: lo: Gained carrier May 8 00:37:06.020205 systemd-networkd[802]: Enumeration completed May 8 00:37:06.020422 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:37:06.020581 systemd-networkd[802]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. May 8 00:37:06.020590 systemd[1]: Reached target network.target - Network. May 8 00:37:06.020700 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 8 00:37:06.024628 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 8 00:37:06.024827 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 8 00:37:06.022527 systemd-networkd[802]: ens192: Link UP May 8 00:37:06.022529 systemd-networkd[802]: ens192: Gained carrier May 8 00:37:06.026072 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 8 00:37:06.035370 ignition[806]: Ignition 2.19.0 May 8 00:37:06.035379 ignition[806]: Stage: kargs May 8 00:37:06.035491 ignition[806]: no configs at "/usr/lib/ignition/base.d" May 8 00:37:06.035498 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:37:06.036126 ignition[806]: kargs: kargs passed May 8 00:37:06.036162 ignition[806]: Ignition finished successfully May 8 00:37:06.037497 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 8 00:37:06.041792 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 8 00:37:06.050140 ignition[813]: Ignition 2.19.0 May 8 00:37:06.050150 ignition[813]: Stage: disks May 8 00:37:06.050259 ignition[813]: no configs at "/usr/lib/ignition/base.d" May 8 00:37:06.050266 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:37:06.050816 ignition[813]: disks: disks passed May 8 00:37:06.050849 ignition[813]: Ignition finished successfully May 8 00:37:06.051569 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 8 00:37:06.052083 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 00:37:06.052194 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:37:06.052301 systemd[1]: Reached target local-fs.target - Local File Systems. 
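
The fetch-offline stage above shows the VMware config path end to end: no config URL on the kernel command line, no /usr/lib/ignition/user.ign, and then a user config "fetched ... from vmware", meaning it was read from the VM's guestinfo properties (the "parsing config with SHA512" line is Ignition logging a digest of the merged config, useful for confirming which config actually ran). The usual way to supply such a config, using the documented guestinfo keys and an illustrative govc invocation, is:

    # Hand an Ignition config to a VMware guest via guestinfo (sketch).
    govc vm.change -vm my-flatcar-vm \
      -e guestinfo.ignition.config.data="$(base64 -w0 config.ign)" \
      -e guestinfo.ignition.config.data.encoding="base64"
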
May 8 00:37:06.052393 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:37:06.052483 systemd[1]: Reached target basic.target - Basic System. May 8 00:37:06.057720 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 00:37:06.068596 systemd-fsck[821]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 8 00:37:06.069818 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 00:37:06.075679 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 00:37:06.132642 kernel: EXT4-fs (sda9): mounted filesystem 36960c89-ba45-4808-a41c-bf61ce9470a3 r/w with ordered data mode. Quota mode: none. May 8 00:37:06.132636 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 00:37:06.132992 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 8 00:37:06.144687 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:37:06.146457 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 00:37:06.146924 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 8 00:37:06.146966 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 00:37:06.146987 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:37:06.149961 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 00:37:06.151007 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 8 00:37:06.153631 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (829) May 8 00:37:06.155819 kernel: BTRFS info (device sda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:37:06.155842 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:37:06.155851 kernel: BTRFS info (device sda6): using free space tree May 8 00:37:06.160663 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:37:06.161362 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 00:37:06.178301 initrd-setup-root[853]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:37:06.180879 initrd-setup-root[860]: cut: /sysroot/etc/group: No such file or directory May 8 00:37:06.183260 initrd-setup-root[867]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:37:06.185341 initrd-setup-root[874]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:37:06.256758 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 00:37:06.261689 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 00:37:06.263104 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 00:37:06.267611 kernel: BTRFS info (device sda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:37:06.279831 ignition[941]: INFO : Ignition 2.19.0 May 8 00:37:06.279831 ignition[941]: INFO : Stage: mount May 8 00:37:06.279831 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:37:06.279831 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:37:06.280399 ignition[941]: INFO : mount: mount passed May 8 00:37:06.280399 ignition[941]: INFO : Ignition finished successfully May 8 00:37:06.280981 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
May 8 00:37:06.285692 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 00:37:06.285884 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 00:37:06.844856 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 8 00:37:06.849741 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:37:06.943632 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (953) May 8 00:37:06.953667 kernel: BTRFS info (device sda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:37:06.953703 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:37:06.955904 kernel: BTRFS info (device sda6): using free space tree May 8 00:37:07.010618 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:37:07.019034 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 00:37:07.035247 ignition[970]: INFO : Ignition 2.19.0 May 8 00:37:07.035247 ignition[970]: INFO : Stage: files May 8 00:37:07.035247 ignition[970]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:37:07.035247 ignition[970]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:37:07.035829 ignition[970]: DEBUG : files: compiled without relabeling support, skipping May 8 00:37:07.040623 ignition[970]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:37:07.040623 ignition[970]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:37:07.083433 ignition[970]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:37:07.083802 ignition[970]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:37:07.084178 unknown[970]: wrote ssh authorized keys file for user: core May 8 00:37:07.084481 ignition[970]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:37:07.103655 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 8 00:37:07.103655 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 8 00:37:07.144169 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 8 00:37:07.310415 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 8 00:37:07.310415 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 8 00:37:07.311005 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:37:07.311005 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 00:37:07.311005 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 00:37:07.311005 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:37:07.311005 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:37:07.311005 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: 
op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:37:07.311005 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:37:07.311005 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:37:07.312941 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:37:07.312941 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:37:07.312941 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:37:07.312941 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:37:07.312941 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 8 00:37:07.826163 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 8 00:37:08.063803 systemd-networkd[802]: ens192: Gained IPv6LL May 8 00:37:08.125637 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:37:08.125637 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 8 00:37:08.126216 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 8 00:37:08.126216 ignition[970]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 8 00:37:08.126216 ignition[970]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:37:08.126216 ignition[970]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:37:08.126216 ignition[970]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 8 00:37:08.126216 ignition[970]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 8 00:37:08.126216 ignition[970]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:37:08.127192 ignition[970]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:37:08.127192 ignition[970]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 8 00:37:08.127192 ignition[970]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 8 00:37:08.202915 ignition[970]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:37:08.206348 ignition[970]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for 
"coreos-metadata.service" May 8 00:37:08.206348 ignition[970]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 8 00:37:08.206348 ignition[970]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 8 00:37:08.206348 ignition[970]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 8 00:37:08.206348 ignition[970]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:37:08.206348 ignition[970]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:37:08.206348 ignition[970]: INFO : files: files passed May 8 00:37:08.206348 ignition[970]: INFO : Ignition finished successfully May 8 00:37:08.207722 systemd[1]: Finished ignition-files.service - Ignition (files). May 8 00:37:08.216730 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 00:37:08.219048 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 00:37:08.219362 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:37:08.219446 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 8 00:37:08.226841 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:37:08.226841 initrd-setup-root-after-ignition[1000]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 00:37:08.227480 initrd-setup-root-after-ignition[1004]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:37:08.228437 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:37:08.228850 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 00:37:08.231756 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 00:37:08.251710 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:37:08.251769 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 00:37:08.252176 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 00:37:08.252307 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:37:08.252508 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:37:08.252978 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:37:08.262793 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:37:08.266685 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 00:37:08.272155 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 00:37:08.272455 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:37:08.272640 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:37:08.272775 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:37:08.272841 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:37:08.273060 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:37:08.273274 systemd[1]: Stopped target basic.target - Basic System. 
May 8 00:37:08.273451 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:37:08.273650 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:37:08.273865 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:37:08.274062 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:37:08.274394 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:37:08.274619 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:37:08.274818 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:37:08.275015 systemd[1]: Stopped target swap.target - Swaps. May 8 00:37:08.275181 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:37:08.275239 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:37:08.275519 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 00:37:08.275697 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:37:08.275878 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 00:37:08.275920 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:37:08.276069 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:37:08.276128 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:37:08.276368 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:37:08.276429 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:37:08.276680 systemd[1]: Stopped target paths.target - Path Units. May 8 00:37:08.276829 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:37:08.280619 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:37:08.280792 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:37:08.280993 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:37:08.281181 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:37:08.281244 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:37:08.281454 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:37:08.281499 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:37:08.281734 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:37:08.281793 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:37:08.282043 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:37:08.282098 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:37:08.290726 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:37:08.292086 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:37:08.292193 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:37:08.292280 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:37:08.292469 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:37:08.292547 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:37:08.295538 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
May 8 00:37:08.295587 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 00:37:08.301283 ignition[1024]: INFO : Ignition 2.19.0 May 8 00:37:08.301283 ignition[1024]: INFO : Stage: umount May 8 00:37:08.301283 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:37:08.301283 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:37:08.304024 ignition[1024]: INFO : umount: umount passed May 8 00:37:08.304024 ignition[1024]: INFO : Ignition finished successfully May 8 00:37:08.302532 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:37:08.302607 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:37:08.302849 systemd[1]: Stopped target network.target - Network. May 8 00:37:08.302937 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:37:08.302981 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 00:37:08.303104 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:37:08.303125 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:37:08.303229 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:37:08.303249 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:37:08.303351 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:37:08.303370 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:37:08.303541 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:37:08.303690 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:37:08.306512 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:37:08.307412 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:37:08.307466 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:37:08.308029 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:37:08.308061 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:37:08.311733 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:37:08.311818 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:37:08.311846 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:37:08.311965 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. May 8 00:37:08.312005 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 8 00:37:08.312168 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:37:08.312380 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:37:08.312432 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:37:08.315834 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:37:08.315867 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:37:08.316673 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:37:08.316698 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 00:37:08.316798 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 00:37:08.316819 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
May 8 00:37:08.320201 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:37:08.320251 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:37:08.323113 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:37:08.323220 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:37:08.323592 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:37:08.323639 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 00:37:08.323920 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:37:08.323942 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:37:08.324157 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:37:08.324186 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 00:37:08.324545 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:37:08.324574 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:37:08.324953 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:37:08.324981 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:37:08.333704 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:37:08.333840 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:37:08.333878 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:37:08.334035 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 8 00:37:08.334064 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:37:08.334212 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:37:08.334241 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:37:08.334390 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:37:08.334418 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:37:08.337531 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:37:08.337622 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:37:08.410447 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:37:08.410524 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:37:08.411027 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:37:08.411180 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:37:08.411230 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:37:08.416717 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:37:08.428172 systemd[1]: Switching root. 
May 8 00:37:08.459484 systemd-journald[215]: Journal stopped May 8 00:37:03.759502 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:54:21 -00 2025 May 8 00:37:03.759523 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90 May 8 00:37:03.759530 kernel: Disabled fast string operations May 8 00:37:03.759535 kernel: BIOS-provided physical RAM map: May 8 00:37:03.759539 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable May 8 00:37:03.759543 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved May 8 00:37:03.759549 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved May 8 00:37:03.759554 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable May 8 00:37:03.759558 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data May 8 00:37:03.759563 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS May 8 00:37:03.759567 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable May 8 00:37:03.759571 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved May 8 00:37:03.759576 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved May 8 00:37:03.759580 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved May 8 00:37:03.759587 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved May 8 00:37:03.759592 kernel: NX (Execute Disable) protection: active May 8 00:37:03.759606 kernel: APIC: Static calls initialized May 8 00:37:03.759616 kernel: SMBIOS 2.7 present. May 8 00:37:03.759621 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 May 8 00:37:03.759627 kernel: vmware: hypercall mode: 0x00 May 8 00:37:03.759632 kernel: Hypervisor detected: VMware May 8 00:37:03.759636 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz May 8 00:37:03.759644 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz May 8 00:37:03.759649 kernel: vmware: using clock offset of 5061854457 ns May 8 00:37:03.759654 kernel: tsc: Detected 3408.000 MHz processor May 8 00:37:03.759659 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 8 00:37:03.759665 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 8 00:37:03.759670 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 May 8 00:37:03.759675 kernel: total RAM covered: 3072M May 8 00:37:03.759680 kernel: Found optimal setting for mtrr clean up May 8 00:37:03.759686 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G May 8 00:37:03.759692 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs May 8 00:37:03.759698 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 8 00:37:03.759703 kernel: Using GB pages for direct mapping May 8 00:37:03.759708 kernel: ACPI: Early table checksum verification disabled May 8 00:37:03.759712 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) May 8 00:37:03.759718 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) May 8 00:37:03.759723 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) May 8 00:37:03.759728 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) May 8 00:37:03.759733 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 8 00:37:03.759741 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 8 00:37:03.759749 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) May 8 00:37:03.759755 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) May 8 00:37:03.759760 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) May 8 00:37:03.759765 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) May 8 00:37:03.759772 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) May 8 00:37:03.759777 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) May 8 00:37:03.759783 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] May 8 00:37:03.759788 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] May 8 00:37:03.759799 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 8 00:37:03.759805 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 8 00:37:03.759862 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] May 8 00:37:03.759867 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] May 8 00:37:03.759873 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] May 8 00:37:03.759880 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] May 8 00:37:03.759891 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] May 8 00:37:03.759899 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] May 8 00:37:03.759904 kernel: system APIC only can use physical flat May 8 00:37:03.759910 kernel: APIC: Switched APIC routing to: physical flat May 8 00:37:03.759915 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 8 00:37:03.759920 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 May 8 00:37:03.759925 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 May 8 00:37:03.759931 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 May 8 00:37:03.759936 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 May 8 00:37:03.759945 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 May 8 00:37:03.759950 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 May 8 00:37:03.759956 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 May 8 00:37:03.759961 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 May 8 00:37:03.759966 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 May 8 00:37:03.759971 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 May 8 00:37:03.759977 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 May 8 00:37:03.759986 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 May 8 00:37:03.759995 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 May 8 00:37:03.760003 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 May 8 00:37:03.760014 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 May 8 00:37:03.760020 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 May 8 00:37:03.760025 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 May 8 00:37:03.760030 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 May 8 00:37:03.760036 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 May 8 00:37:03.760041 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 May 8 00:37:03.760046 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 May 8 00:37:03.760051 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 May 8 00:37:03.760056 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 May 8 00:37:03.760061 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 May 8 00:37:03.760066 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 May 8 00:37:03.760073 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 May 8 00:37:03.760078 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 May 8 00:37:03.760083 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 May 8 00:37:03.760089 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 May 8 00:37:03.760094 kernel: SRAT: PXM 
0 -> APIC 0x3c -> Node 0 May 8 00:37:03.760099 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 May 8 00:37:03.760104 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 May 8 00:37:03.760109 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 May 8 00:37:03.760115 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 May 8 00:37:03.760120 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 May 8 00:37:03.760126 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 May 8 00:37:03.760131 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 May 8 00:37:03.760137 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 May 8 00:37:03.760142 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 May 8 00:37:03.760147 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 May 8 00:37:03.760152 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 May 8 00:37:03.760157 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 May 8 00:37:03.760163 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 May 8 00:37:03.760168 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 May 8 00:37:03.760173 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 May 8 00:37:03.760179 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 May 8 00:37:03.760184 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 May 8 00:37:03.760189 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 May 8 00:37:03.760195 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 May 8 00:37:03.760200 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 May 8 00:37:03.760205 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 May 8 00:37:03.760210 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 May 8 00:37:03.760215 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 May 8 00:37:03.760220 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 May 8 00:37:03.760225 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 May 8 00:37:03.760232 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 May 8 00:37:03.760237 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 May 8 00:37:03.760242 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 May 8 00:37:03.760251 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 May 8 00:37:03.760258 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 May 8 00:37:03.760263 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 May 8 00:37:03.760269 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 May 8 00:37:03.760274 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 May 8 00:37:03.760280 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 May 8 00:37:03.760286 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 May 8 00:37:03.760292 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 May 8 00:37:03.760297 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 May 8 00:37:03.760303 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 May 8 00:37:03.760308 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 May 8 00:37:03.760314 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 May 8 00:37:03.760320 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 May 8 00:37:03.760325 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 May 8 00:37:03.760331 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 May 8 00:37:03.760336 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 May 8 00:37:03.760343 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 May 8 00:37:03.760348 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 May 8 00:37:03.760354 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 May 8 00:37:03.760359 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 May 8 00:37:03.760365 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 May 8 00:37:03.760371 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 May 8 00:37:03.760376 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 May 8 00:37:03.760382 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 May 8 00:37:03.760387 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 May 8 00:37:03.760393 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 May 8 
00:37:03.760399 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 May 8 00:37:03.760405 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 May 8 00:37:03.760410 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 May 8 00:37:03.760416 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 May 8 00:37:03.760421 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 May 8 00:37:03.760427 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 May 8 00:37:03.760432 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 May 8 00:37:03.760438 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 May 8 00:37:03.760443 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 May 8 00:37:03.760449 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 May 8 00:37:03.760456 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 May 8 00:37:03.760462 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 May 8 00:37:03.760467 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 May 8 00:37:03.760473 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 May 8 00:37:03.760478 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 May 8 00:37:03.760484 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 May 8 00:37:03.760489 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 May 8 00:37:03.760495 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 May 8 00:37:03.760500 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 May 8 00:37:03.760506 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 May 8 00:37:03.760513 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 May 8 00:37:03.760518 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 May 8 00:37:03.760524 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 May 8 00:37:03.760529 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 May 8 00:37:03.760535 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 May 8 00:37:03.760540 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 May 8 00:37:03.760546 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 May 8 00:37:03.760551 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 May 8 00:37:03.760557 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 May 8 00:37:03.760562 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 May 8 00:37:03.760568 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 May 8 00:37:03.760574 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 May 8 00:37:03.760580 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 May 8 00:37:03.760586 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 May 8 00:37:03.760591 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 May 8 00:37:03.763895 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 May 8 00:37:03.763918 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 May 8 00:37:03.763924 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 May 8 00:37:03.763930 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 May 8 00:37:03.763936 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 May 8 00:37:03.763944 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 May 8 00:37:03.763959 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 May 8 00:37:03.763970 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 May 8 00:37:03.763976 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 8 00:37:03.763983 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 8 00:37:03.763989 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug May 8 00:37:03.763995 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] May 8 00:37:03.764001 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] May 8 00:37:03.764009 kernel: Zone ranges: May 8 00:37:03.764016 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 8 00:37:03.764024 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] May 8 00:37:03.764030 kernel: Normal empty May 8 00:37:03.764036 kernel: Movable zone start 
for each node May 8 00:37:03.764041 kernel: Early memory node ranges May 8 00:37:03.764048 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] May 8 00:37:03.764058 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] May 8 00:37:03.764068 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] May 8 00:37:03.764074 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] May 8 00:37:03.764079 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 8 00:37:03.764085 kernel: On node 0, zone DMA: 98 pages in unavailable ranges May 8 00:37:03.764093 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges May 8 00:37:03.764098 kernel: ACPI: PM-Timer IO Port: 0x1008 May 8 00:37:03.764104 kernel: system APIC only can use physical flat May 8 00:37:03.764110 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) May 8 00:37:03.764116 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) May 8 00:37:03.764122 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) May 8 00:37:03.764127 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) May 8 00:37:03.764133 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) May 8 00:37:03.764138 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) May 8 00:37:03.764146 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) May 8 00:37:03.764151 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) May 8 00:37:03.764157 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) May 8 00:37:03.764163 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) May 8 00:37:03.764168 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) May 8 00:37:03.764174 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) May 8 00:37:03.764179 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) May 8 00:37:03.764185 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) May 8 00:37:03.764191 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) May 8 00:37:03.764196 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) May 8 00:37:03.764203 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) May 8 00:37:03.764209 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) May 8 00:37:03.764215 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) May 8 00:37:03.764221 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) May 8 00:37:03.764226 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) May 8 00:37:03.764232 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) May 8 00:37:03.764241 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) May 8 00:37:03.764247 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) May 8 00:37:03.764252 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) May 8 00:37:03.764259 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) May 8 00:37:03.764265 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) May 8 00:37:03.764271 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) May 8 00:37:03.764276 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) May 8 00:37:03.764284 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) May 8 00:37:03.764294 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) May 8 00:37:03.764303 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) May 8 00:37:03.764311 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) May 8 00:37:03.764317 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] 
high edge lint[0x1]) May 8 00:37:03.764324 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) May 8 00:37:03.764330 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) May 8 00:37:03.764336 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) May 8 00:37:03.764342 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) May 8 00:37:03.764350 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) May 8 00:37:03.764356 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) May 8 00:37:03.764361 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) May 8 00:37:03.764367 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) May 8 00:37:03.764373 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) May 8 00:37:03.764378 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) May 8 00:37:03.764386 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) May 8 00:37:03.764394 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) May 8 00:37:03.764405 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) May 8 00:37:03.764412 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) May 8 00:37:03.764417 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) May 8 00:37:03.764423 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) May 8 00:37:03.764428 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) May 8 00:37:03.764434 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) May 8 00:37:03.764440 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) May 8 00:37:03.764448 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) May 8 00:37:03.764454 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) May 8 00:37:03.764459 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) May 8 00:37:03.764465 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) May 8 00:37:03.764470 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) May 8 00:37:03.764476 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) May 8 00:37:03.764482 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) May 8 00:37:03.764487 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) May 8 00:37:03.764493 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) May 8 00:37:03.764499 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) May 8 00:37:03.764506 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) May 8 00:37:03.764511 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) May 8 00:37:03.764517 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) May 8 00:37:03.764523 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) May 8 00:37:03.764528 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) May 8 00:37:03.764534 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) May 8 00:37:03.764540 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) May 8 00:37:03.764545 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) May 8 00:37:03.764551 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) May 8 00:37:03.764559 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) May 8 00:37:03.764565 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) May 8 00:37:03.764572 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) May 8 00:37:03.764581 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) May 8 00:37:03.764591 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) May 8 
00:37:03.764610 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) May 8 00:37:03.764617 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) May 8 00:37:03.764631 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) May 8 00:37:03.764640 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) May 8 00:37:03.764646 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) May 8 00:37:03.764654 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) May 8 00:37:03.764662 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) May 8 00:37:03.764668 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) May 8 00:37:03.764673 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) May 8 00:37:03.764680 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) May 8 00:37:03.764685 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) May 8 00:37:03.764691 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) May 8 00:37:03.764697 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) May 8 00:37:03.764702 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) May 8 00:37:03.764709 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) May 8 00:37:03.764715 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) May 8 00:37:03.764721 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) May 8 00:37:03.764726 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) May 8 00:37:03.764732 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) May 8 00:37:03.764738 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) May 8 00:37:03.764744 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) May 8 00:37:03.764750 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) May 8 00:37:03.764755 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) May 8 00:37:03.764762 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) May 8 00:37:03.764769 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) May 8 00:37:03.764775 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) May 8 00:37:03.764780 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) May 8 00:37:03.764786 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) May 8 00:37:03.764792 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) May 8 00:37:03.764797 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) May 8 00:37:03.764803 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) May 8 00:37:03.764809 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) May 8 00:37:03.764814 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) May 8 00:37:03.764822 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) May 8 00:37:03.764827 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) May 8 00:37:03.764833 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) May 8 00:37:03.764839 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) May 8 00:37:03.764844 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) May 8 00:37:03.764850 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) May 8 00:37:03.764855 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) May 8 00:37:03.764862 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) May 8 00:37:03.764868 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) May 8 00:37:03.764874 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) May 8 00:37:03.764881 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) May 8 00:37:03.764886 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) May 8 00:37:03.764892 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) May 8 00:37:03.764898 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) May 8 00:37:03.764903 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) May 8 00:37:03.764909 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) May 8 00:37:03.764915 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) May 8 00:37:03.764921 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) May 8 00:37:03.764926 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 May 8 00:37:03.764934 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) May 8 00:37:03.764939 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 8 00:37:03.764945 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 May 8 00:37:03.764951 kernel: TSC deadline timer available May 8 00:37:03.764957 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs May 8 00:37:03.764963 kernel: [mem 0x80000000-0xefffffff] available for PCI devices May 8 00:37:03.764968 kernel: Booting paravirtualized kernel on VMware hypervisor May 8 00:37:03.764974 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 8 00:37:03.764980 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 May 8 00:37:03.764987 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 May 8 00:37:03.764994 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 May 8 00:37:03.765000 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 May 8 00:37:03.765006 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 May 8 00:37:03.765011 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 May 8 00:37:03.765017 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 May 8 00:37:03.765023 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 May 8 00:37:03.765036 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 May 8 00:37:03.765043 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 May 8 00:37:03.765050 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 May 8 00:37:03.765058 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 May 8 00:37:03.765066 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 May 8 00:37:03.765073 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 May 8 00:37:03.765078 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 May 8 00:37:03.765084 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 May 8 00:37:03.765092 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 May 8 00:37:03.765102 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 May 8 00:37:03.765112 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 May 8 00:37:03.765121 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90 May 8 00:37:03.765128 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
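The kernel command line above is echoed back exactly as parsed, and the unrecognized BOOT_IMAGE= parameter is handed through to user space. A minimal sketch for reading it back from inside the guest after boot, assuming only a Linux system with procfs mounted at /proc; the parameter names in the comments are just the ones visible in this log, not an exhaustive set:

# Minimal sketch: read the booted kernel command line back from procfs.
# Assumes a Linux guest with /proc mounted. Naive whitespace splitting;
# quoted values containing spaces would need extra handling.
from pathlib import Path

def parse_cmdline(path: str = "/proc/cmdline") -> dict[str, str]:
    params: dict[str, str] = {}
    for token in Path(path).read_text().split():
        key, sep, value = token.partition("=")
        # Bare flags like "flatcar.autologin" carry no value.
        params[key] = value if sep else ""
    return params

if __name__ == "__main__":
    cmdline = parse_cmdline()
    # Duplicate keys (rootflags=rw appears twice in the log above)
    # resolve to the last occurrence with this dict-based approach.
    print(cmdline.get("root"))            # e.g. "LABEL=ROOT"
    print(cmdline.get("flatcar.oem.id"))  # e.g. "vmware"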
May 8 00:37:03.765135 kernel: random: crng init done May 8 00:37:03.765143 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 8 00:37:03.765149 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes May 8 00:37:03.765155 kernel: printk: log_buf_len min size: 262144 bytes May 8 00:37:03.765161 kernel: printk: log_buf_len: 1048576 bytes May 8 00:37:03.765168 kernel: printk: early log buf free: 239648(91%) May 8 00:37:03.765180 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 00:37:03.765192 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 8 00:37:03.765204 kernel: Fallback order for Node 0: 0 May 8 00:37:03.765210 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 May 8 00:37:03.765217 kernel: Policy zone: DMA32 May 8 00:37:03.765225 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 00:37:03.765234 kernel: Memory: 1936400K/2096628K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42856K init, 2336K bss, 159968K reserved, 0K cma-reserved) May 8 00:37:03.765242 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 May 8 00:37:03.765248 kernel: ftrace: allocating 37944 entries in 149 pages May 8 00:37:03.765255 kernel: ftrace: allocated 149 pages with 4 groups May 8 00:37:03.765266 kernel: Dynamic Preempt: voluntary May 8 00:37:03.765275 kernel: rcu: Preemptible hierarchical RCU implementation. May 8 00:37:03.765282 kernel: rcu: RCU event tracing is enabled. May 8 00:37:03.765289 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. May 8 00:37:03.765301 kernel: Trampoline variant of Tasks RCU enabled. May 8 00:37:03.765312 kernel: Rude variant of Tasks RCU enabled. May 8 00:37:03.765323 kernel: Tracing variant of Tasks RCU enabled. May 8 00:37:03.765333 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 8 00:37:03.765345 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 May 8 00:37:03.765354 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 May 8 00:37:03.765360 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. May 8 00:37:03.765366 kernel: Console: colour VGA+ 80x25 May 8 00:37:03.765373 kernel: printk: console [tty0] enabled May 8 00:37:03.765379 kernel: printk: console [ttyS0] enabled May 8 00:37:03.765387 kernel: ACPI: Core revision 20230628 May 8 00:37:03.765394 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns May 8 00:37:03.765400 kernel: APIC: Switch to symmetric I/O mode setup May 8 00:37:03.765406 kernel: x2apic enabled May 8 00:37:03.765412 kernel: APIC: Switched APIC routing to: physical x2apic May 8 00:37:03.765422 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 8 00:37:03.765428 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 8 00:37:03.765435 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) May 8 00:37:03.765441 kernel: Disabled fast string operations May 8 00:37:03.765451 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 8 00:37:03.765461 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 8 00:37:03.765471 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 8 00:37:03.765482 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 8 00:37:03.765489 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 8 00:37:03.765495 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 8 00:37:03.765501 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 8 00:37:03.765507 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 8 00:37:03.765513 kernel: RETBleed: Mitigation: Enhanced IBRS May 8 00:37:03.765522 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 8 00:37:03.765528 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 8 00:37:03.765534 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 8 00:37:03.765540 kernel: SRBDS: Unknown: Dependent on hypervisor status May 8 00:37:03.765547 kernel: GDS: Unknown: Dependent on hypervisor status May 8 00:37:03.765553 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 8 00:37:03.765559 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 8 00:37:03.765565 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 8 00:37:03.765571 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 8 00:37:03.765578 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 8 00:37:03.765585 kernel: Freeing SMP alternatives memory: 32K May 8 00:37:03.765591 kernel: pid_max: default: 131072 minimum: 1024 May 8 00:37:03.766717 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 8 00:37:03.766735 kernel: landlock: Up and running. May 8 00:37:03.766742 kernel: SELinux: Initializing. May 8 00:37:03.766749 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 8 00:37:03.766755 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 8 00:37:03.766762 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 8 00:37:03.766777 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 8 00:37:03.766789 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 8 00:37:03.766797 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 8 00:37:03.766803 kernel: Performance Events: Skylake events, core PMU driver. 
May 8 00:37:03.766809 kernel: core: CPUID marked event: 'cpu cycles' unavailable May 8 00:37:03.766816 kernel: core: CPUID marked event: 'instructions' unavailable May 8 00:37:03.766822 kernel: core: CPUID marked event: 'bus cycles' unavailable May 8 00:37:03.766828 kernel: core: CPUID marked event: 'cache references' unavailable May 8 00:37:03.766835 kernel: core: CPUID marked event: 'cache misses' unavailable May 8 00:37:03.766841 kernel: core: CPUID marked event: 'branch instructions' unavailable May 8 00:37:03.766848 kernel: core: CPUID marked event: 'branch misses' unavailable May 8 00:37:03.766854 kernel: ... version: 1 May 8 00:37:03.766860 kernel: ... bit width: 48 May 8 00:37:03.766870 kernel: ... generic registers: 4 May 8 00:37:03.766881 kernel: ... value mask: 0000ffffffffffff May 8 00:37:03.766891 kernel: ... max period: 000000007fffffff May 8 00:37:03.766900 kernel: ... fixed-purpose events: 0 May 8 00:37:03.766912 kernel: ... event mask: 000000000000000f May 8 00:37:03.766924 kernel: signal: max sigframe size: 1776 May 8 00:37:03.766933 kernel: rcu: Hierarchical SRCU implementation. May 8 00:37:03.766943 kernel: rcu: Max phase no-delay instances is 400. May 8 00:37:03.766953 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 8 00:37:03.766963 kernel: smp: Bringing up secondary CPUs ... May 8 00:37:03.766972 kernel: smpboot: x86: Booting SMP configuration: May 8 00:37:03.766982 kernel: .... node #0, CPUs: #1 May 8 00:37:03.766992 kernel: Disabled fast string operations May 8 00:37:03.767004 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 May 8 00:37:03.767014 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 May 8 00:37:03.767024 kernel: smp: Brought up 1 node, 2 CPUs May 8 00:37:03.767034 kernel: smpboot: Max logical packages: 128 May 8 00:37:03.767044 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) May 8 00:37:03.767053 kernel: devtmpfs: initialized May 8 00:37:03.767065 kernel: x86/mm: Memory block size: 128MB May 8 00:37:03.767075 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) May 8 00:37:03.767085 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:37:03.767096 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 8 00:37:03.767109 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:37:03.767118 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 00:37:03.767127 kernel: audit: initializing netlink subsys (disabled) May 8 00:37:03.767137 kernel: audit: type=2000 audit(1746664621.068:1): state=initialized audit_enabled=0 res=1 May 8 00:37:03.767147 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:37:03.767158 kernel: thermal_sys: Registered thermal governor 'user_space' May 8 00:37:03.767170 kernel: cpuidle: using governor menu May 8 00:37:03.767181 kernel: Simple Boot Flag at 0x36 set to 0x80 May 8 00:37:03.767193 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 00:37:03.767207 kernel: dca service started, version 1.12.1 May 8 00:37:03.767219 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) May 8 00:37:03.767230 kernel: PCI: Using configuration type 1 for base access May 8 00:37:03.767240 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
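The entries above record the speculation mitigations chosen at boot (Spectre V1/V2, RETBleed, MMIO Stale Data, SRBDS/GDS deferred to the hypervisor) and the SMP bring-up: 128 CPUs allowed, 126 of them hotpluggable, but only 2 activated on this VM. A minimal sketch that cross-checks both against sysfs from inside the guest, assuming a Linux system with /sys mounted; the set of files under cpu/vulnerabilities varies by kernel version:

# Minimal sketch: compare the boot-time mitigation and SMP lines against
# the live sysfs state. Assumes a Linux guest with /sys mounted.
from pathlib import Path

CPU = Path("/sys/devices/system/cpu")

def cpu_range_count(text: str) -> int:
    # Expand CPU list syntax like "0-1" or "0-1,4" into a count.
    count = 0
    for part in text.strip().split(","):
        lo, _, hi = part.partition("-")
        count += int(hi) - int(lo) + 1 if hi else 1
    return count

if __name__ == "__main__":
    # Matches "smpboot: Allowing 128 CPUs" / "smp: Brought up 1 node, 2 CPUs".
    print("possible:", cpu_range_count((CPU / "possible").read_text()))
    print("online:", cpu_range_count((CPU / "online").read_text()))
    # One file per known vulnerability, e.g. spectre_v2, retbleed.
    for entry in sorted((CPU / "vulnerabilities").iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")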
May 8 00:37:03.767253 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:37:03.767264 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 8 00:37:03.767276 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:37:03.767288 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 8 00:37:03.767299 kernel: ACPI: Added _OSI(Module Device) May 8 00:37:03.767313 kernel: ACPI: Added _OSI(Processor Device) May 8 00:37:03.767324 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:37:03.767332 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:37:03.767341 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 00:37:03.767351 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored May 8 00:37:03.767364 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 8 00:37:03.767375 kernel: ACPI: Interpreter enabled May 8 00:37:03.767387 kernel: ACPI: PM: (supports S0 S1 S5) May 8 00:37:03.767396 kernel: ACPI: Using IOAPIC for interrupt routing May 8 00:37:03.767407 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 8 00:37:03.767419 kernel: PCI: Using E820 reservations for host bridge windows May 8 00:37:03.767429 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F May 8 00:37:03.767440 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) May 8 00:37:03.767571 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:37:03.767650 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] May 8 00:37:03.767707 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] May 8 00:37:03.767720 kernel: PCI host bridge to bus 0000:00 May 8 00:37:03.767792 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 8 00:37:03.767849 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] May 8 00:37:03.767913 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 8 00:37:03.767962 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 8 00:37:03.768010 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] May 8 00:37:03.768057 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] May 8 00:37:03.768124 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 May 8 00:37:03.768186 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 May 8 00:37:03.768243 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 May 8 00:37:03.768302 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a May 8 00:37:03.768367 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] May 8 00:37:03.768425 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 8 00:37:03.768482 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 8 00:37:03.768538 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 8 00:37:03.768591 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 8 00:37:03.770707 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 May 8 00:37:03.770769 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI May 8 00:37:03.770825 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB May 8 00:37:03.770883 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 May 8 00:37:03.770942 kernel: pci 0000:00:07.7: reg 0x10: [io 
0x1080-0x10bf] May 8 00:37:03.771028 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] May 8 00:37:03.771088 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 May 8 00:37:03.771142 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] May 8 00:37:03.771201 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] May 8 00:37:03.771254 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] May 8 00:37:03.771311 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] May 8 00:37:03.771380 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 8 00:37:03.771457 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 May 8 00:37:03.771517 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.771573 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold May 8 00:37:03.771643 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.771701 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold May 8 00:37:03.771763 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.771820 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold May 8 00:37:03.771878 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.771932 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold May 8 00:37:03.771997 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.772053 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold May 8 00:37:03.772150 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.772210 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold May 8 00:37:03.772283 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.772370 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold May 8 00:37:03.772440 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.772496 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold May 8 00:37:03.772559 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.772713 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold May 8 00:37:03.772777 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.772833 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold May 8 00:37:03.772893 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.772957 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold May 8 00:37:03.773017 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.773072 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold May 8 00:37:03.773129 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.773183 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold May 8 00:37:03.773241 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.773299 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold May 8 00:37:03.773355 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.773409 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold May 8 00:37:03.773466 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.773521 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold May 8 00:37:03.773578 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.773652 kernel: pci 0000:00:17.0: PME# supported from D0 
D3hot D3cold May 8 00:37:03.773723 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.773791 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold May 8 00:37:03.773883 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.773970 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold May 8 00:37:03.774053 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.774114 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold May 8 00:37:03.774177 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.774232 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold May 8 00:37:03.774291 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.774353 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold May 8 00:37:03.774416 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.774483 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold May 8 00:37:03.774549 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.774685 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold May 8 00:37:03.774747 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.774803 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold May 8 00:37:03.774881 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.774949 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold May 8 00:37:03.775024 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.775091 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold May 8 00:37:03.775153 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.775214 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold May 8 00:37:03.775277 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.775336 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold May 8 00:37:03.775403 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.775467 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold May 8 00:37:03.775526 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.775580 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold May 8 00:37:03.775736 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 May 8 00:37:03.775800 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold May 8 00:37:03.775859 kernel: pci_bus 0000:01: extended config space not accessible May 8 00:37:03.775926 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 8 00:37:03.775989 kernel: pci_bus 0000:02: extended config space not accessible May 8 00:37:03.776003 kernel: acpiphp: Slot [32] registered May 8 00:37:03.776012 kernel: acpiphp: Slot [33] registered May 8 00:37:03.776018 kernel: acpiphp: Slot [34] registered May 8 00:37:03.776024 kernel: acpiphp: Slot [35] registered May 8 00:37:03.776030 kernel: acpiphp: Slot [36] registered May 8 00:37:03.776036 kernel: acpiphp: Slot [37] registered May 8 00:37:03.776045 kernel: acpiphp: Slot [38] registered May 8 00:37:03.776052 kernel: acpiphp: Slot [39] registered May 8 00:37:03.776062 kernel: acpiphp: Slot [40] registered May 8 00:37:03.776069 kernel: acpiphp: Slot [41] registered May 8 00:37:03.776075 kernel: acpiphp: Slot [42] registered May 8 00:37:03.776081 kernel: acpiphp: Slot [43] registered May 8 00:37:03.776087 kernel: acpiphp: Slot [44] registered May 8 
00:37:03.776093 kernel: acpiphp: Slot [45] registered May 8 00:37:03.776100 kernel: acpiphp: Slot [46] registered May 8 00:37:03.776110 kernel: acpiphp: Slot [47] registered May 8 00:37:03.776118 kernel: acpiphp: Slot [48] registered May 8 00:37:03.776125 kernel: acpiphp: Slot [49] registered May 8 00:37:03.776130 kernel: acpiphp: Slot [50] registered May 8 00:37:03.776136 kernel: acpiphp: Slot [51] registered May 8 00:37:03.776142 kernel: acpiphp: Slot [52] registered May 8 00:37:03.776149 kernel: acpiphp: Slot [53] registered May 8 00:37:03.776160 kernel: acpiphp: Slot [54] registered May 8 00:37:03.776166 kernel: acpiphp: Slot [55] registered May 8 00:37:03.776172 kernel: acpiphp: Slot [56] registered May 8 00:37:03.776180 kernel: acpiphp: Slot [57] registered May 8 00:37:03.776186 kernel: acpiphp: Slot [58] registered May 8 00:37:03.776196 kernel: acpiphp: Slot [59] registered May 8 00:37:03.776205 kernel: acpiphp: Slot [60] registered May 8 00:37:03.776213 kernel: acpiphp: Slot [61] registered May 8 00:37:03.776219 kernel: acpiphp: Slot [62] registered May 8 00:37:03.776225 kernel: acpiphp: Slot [63] registered May 8 00:37:03.776301 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) May 8 00:37:03.776369 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 8 00:37:03.776439 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 8 00:37:03.776493 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 8 00:37:03.776545 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) May 8 00:37:03.776638 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) May 8 00:37:03.776704 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) May 8 00:37:03.776760 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) May 8 00:37:03.776818 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) May 8 00:37:03.776891 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 May 8 00:37:03.776953 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] May 8 00:37:03.777026 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] May 8 00:37:03.777088 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 8 00:37:03.777153 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 8 00:37:03.777208 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 8 00:37:03.777266 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 8 00:37:03.777320 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 8 00:37:03.777384 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 8 00:37:03.777440 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 8 00:37:03.777518 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 8 00:37:03.777594 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 8 00:37:03.777798 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 8 00:37:03.777855 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 8 00:37:03.777909 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 8 00:37:03.777966 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 8 00:37:03.778019 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 8 00:37:03.778075 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 8 00:37:03.778129 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 8 00:37:03.778182 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 8 00:37:03.778237 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 8 00:37:03.778290 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 8 00:37:03.778343 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 8 00:37:03.778402 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 8 00:37:03.778455 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 8 00:37:03.778510 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 8 00:37:03.778566 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 8 00:37:03.778664 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 8 00:37:03.778926 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 8 00:37:03.779037 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 8 00:37:03.779093 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 8 00:37:03.779154 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 8 00:37:03.779222 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 May 8 00:37:03.779278 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] May 8 00:37:03.779334 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] May 8 00:37:03.779408 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] May 8 00:37:03.779464 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] May 8 00:37:03.779518 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 8 00:37:03.779573 kernel: pci 0000:0b:00.0: supports D1 D2 May 8 00:37:03.781726 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 8 00:37:03.781831 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 8 00:37:03.781892 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 8 00:37:03.781949 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 8 00:37:03.782010 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 8 00:37:03.782067 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 8 00:37:03.782128 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 8 00:37:03.782183 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 8 00:37:03.782238 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 8 00:37:03.782293 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 8 00:37:03.782347 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 8 00:37:03.782404 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 8 00:37:03.782460 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 8 00:37:03.782521 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 8 00:37:03.782575 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 8 00:37:03.784078 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 8 00:37:03.784146 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 8 00:37:03.784203 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 8 00:37:03.784258 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 8 00:37:03.784320 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 8 00:37:03.784381 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 8 00:37:03.784435 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 8 00:37:03.784490 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 8 00:37:03.784542 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 8 00:37:03.784595 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 8 00:37:03.784793 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 8 00:37:03.784847 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 8 00:37:03.784903 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 8 00:37:03.784985 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 8 00:37:03.785079 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 8 00:37:03.785134 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 8 00:37:03.785188 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 8 00:37:03.785244 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 8 00:37:03.785296 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 8 00:37:03.785349 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 8 00:37:03.785407 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 8 00:37:03.785463 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 8 00:37:03.785517 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 8 00:37:03.785569 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 8 00:37:03.785630 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 8 00:37:03.785686 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 8 00:37:03.785739 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 8 00:37:03.785795 kernel: pci 0000:00:17.3: bridge window [mem 
0xe6e00000-0xe6efffff 64bit pref] May 8 00:37:03.785850 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 8 00:37:03.785903 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 8 00:37:03.785956 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 8 00:37:03.786012 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 8 00:37:03.786066 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 8 00:37:03.786119 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 8 00:37:03.786173 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 8 00:37:03.786229 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 8 00:37:03.786282 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 8 00:37:03.786338 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 8 00:37:03.786391 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 8 00:37:03.786443 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 8 00:37:03.786498 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 8 00:37:03.786551 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 8 00:37:03.786613 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 8 00:37:03.786673 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 8 00:37:03.786728 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 8 00:37:03.786782 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 8 00:37:03.786845 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 8 00:37:03.786898 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 8 00:37:03.786953 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 8 00:37:03.787005 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 8 00:37:03.787057 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 8 00:37:03.787115 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 8 00:37:03.787168 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 8 00:37:03.787221 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 8 00:37:03.787276 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 8 00:37:03.787329 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 8 00:37:03.787386 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 8 00:37:03.787441 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 8 00:37:03.787494 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 8 00:37:03.787549 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 8 00:37:03.787678 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 8 00:37:03.787754 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 8 00:37:03.787808 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 8 00:37:03.787863 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 8 00:37:03.787922 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 8 00:37:03.787995 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 8 00:37:03.788005 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 May 8 00:37:03.788015 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 May 8 00:37:03.788021 kernel: ACPI: PCI: Interrupt 
link LNKB disabled May 8 00:37:03.788027 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 8 00:37:03.788033 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 May 8 00:37:03.788040 kernel: iommu: Default domain type: Translated May 8 00:37:03.788046 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 8 00:37:03.788052 kernel: PCI: Using ACPI for IRQ routing May 8 00:37:03.788058 kernel: PCI: pci_cache_line_size set to 64 bytes May 8 00:37:03.788064 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] May 8 00:37:03.788072 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] May 8 00:37:03.788127 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device May 8 00:37:03.788183 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible May 8 00:37:03.788236 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 8 00:37:03.788246 kernel: vgaarb: loaded May 8 00:37:03.788252 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 May 8 00:37:03.788259 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter May 8 00:37:03.788265 kernel: clocksource: Switched to clocksource tsc-early May 8 00:37:03.788271 kernel: VFS: Disk quotas dquot_6.6.0 May 8 00:37:03.789628 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 00:37:03.789638 kernel: pnp: PnP ACPI init May 8 00:37:03.789712 kernel: system 00:00: [io 0x1000-0x103f] has been reserved May 8 00:37:03.789766 kernel: system 00:00: [io 0x1040-0x104f] has been reserved May 8 00:37:03.789815 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved May 8 00:37:03.789868 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved May 8 00:37:03.789919 kernel: pnp 00:06: [dma 2] May 8 00:37:03.789976 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved May 8 00:37:03.790026 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved May 8 00:37:03.790073 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved May 8 00:37:03.790082 kernel: pnp: PnP ACPI: found 8 devices May 8 00:37:03.790088 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 8 00:37:03.790095 kernel: NET: Registered PF_INET protocol family May 8 00:37:03.790101 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 00:37:03.790109 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 8 00:37:03.790116 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 00:37:03.790122 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 8 00:37:03.790128 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 8 00:37:03.790134 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 8 00:37:03.790140 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 8 00:37:03.790147 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 8 00:37:03.790153 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 00:37:03.790159 kernel: NET: Registered PF_XDP protocol family May 8 00:37:03.790219 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 8 00:37:03.790277 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 8 00:37:03.790333 kernel: pci 
0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 8 00:37:03.790389 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 8 00:37:03.790444 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 8 00:37:03.790499 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 May 8 00:37:03.790568 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 May 8 00:37:03.790668 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 May 8 00:37:03.790734 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 May 8 00:37:03.790790 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 May 8 00:37:03.790845 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 May 8 00:37:03.790901 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 May 8 00:37:03.790960 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 May 8 00:37:03.791017 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 May 8 00:37:03.791072 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 May 8 00:37:03.791127 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 May 8 00:37:03.791181 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 May 8 00:37:03.791236 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 May 8 00:37:03.791293 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 May 8 00:37:03.791348 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 May 8 00:37:03.791402 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 May 8 00:37:03.791456 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 May 8 00:37:03.791510 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 May 8 00:37:03.791565 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:37:03.791632 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:37:03.791687 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 8 00:37:03.791739 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.791793 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 8 00:37:03.791845 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.791898 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 8 00:37:03.791951 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.792006 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 8 00:37:03.792058 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.792112 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 8 00:37:03.792164 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.792216 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 8 00:37:03.792268 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.792321 kernel: pci 
0000:00:16.4: BAR 13: no space for [io size 0x1000] May 8 00:37:03.792385 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.792440 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 8 00:37:03.792496 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.792549 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 8 00:37:03.792617 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.792693 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 8 00:37:03.792748 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.792802 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 8 00:37:03.792856 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.792909 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 8 00:37:03.792965 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.793024 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 8 00:37:03.793084 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.793138 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 8 00:37:03.793191 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.793245 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 8 00:37:03.793299 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.793377 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 8 00:37:03.793439 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.793493 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 8 00:37:03.793545 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.793611 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 8 00:37:03.793671 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.793725 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 8 00:37:03.793777 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.793831 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 8 00:37:03.793886 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.793939 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 8 00:37:03.793991 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.794043 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 8 00:37:03.794096 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.794148 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 8 00:37:03.794199 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.794252 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 8 00:37:03.794304 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.794359 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 8 00:37:03.794412 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.794465 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 8 00:37:03.794517 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.794570 kernel: pci 0000:00:18.2: BAR 13: no space for 
[io size 0x1000] May 8 00:37:03.794632 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.794685 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 8 00:37:03.794738 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.794791 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 8 00:37:03.794845 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.794899 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 8 00:37:03.794951 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.795004 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 8 00:37:03.795057 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.795109 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 8 00:37:03.795162 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.795214 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 8 00:37:03.795266 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.795318 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 8 00:37:03.795377 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.795430 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 8 00:37:03.795482 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.795535 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 8 00:37:03.795587 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.795779 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 8 00:37:03.795833 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.795885 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 8 00:37:03.795937 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.795993 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 8 00:37:03.796044 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.796096 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 8 00:37:03.796147 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.796199 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 8 00:37:03.796251 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.796304 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 8 00:37:03.796356 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 8 00:37:03.796410 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 8 00:37:03.796465 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] May 8 00:37:03.796519 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 8 00:37:03.796571 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 8 00:37:03.796636 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 8 00:37:03.796695 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] May 8 00:37:03.796750 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 8 00:37:03.796803 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 8 00:37:03.796855 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 8 00:37:03.796908 kernel: pci 0000:00:15.0: bridge 
window [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:37:03.796965 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 8 00:37:03.797019 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 8 00:37:03.797071 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 8 00:37:03.797123 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 8 00:37:03.797176 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 8 00:37:03.797230 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 8 00:37:03.797282 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 8 00:37:03.797333 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 8 00:37:03.797386 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 8 00:37:03.797441 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 8 00:37:03.797493 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 8 00:37:03.797546 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 8 00:37:03.797631 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 8 00:37:03.797686 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 8 00:37:03.797742 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 8 00:37:03.797795 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 8 00:37:03.797847 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 8 00:37:03.797899 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 8 00:37:03.797951 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 8 00:37:03.798003 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 8 00:37:03.798055 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 8 00:37:03.798107 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 8 00:37:03.798160 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 8 00:37:03.798215 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] May 8 00:37:03.798272 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 8 00:37:03.798325 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 8 00:37:03.798384 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 8 00:37:03.798437 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:37:03.798491 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 8 00:37:03.798545 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 8 00:37:03.798604 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 8 00:37:03.798664 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 8 00:37:03.798723 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 8 00:37:03.798780 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 8 00:37:03.798832 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 8 00:37:03.798885 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 8 00:37:03.798938 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 8 00:37:03.798991 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 8 00:37:03.799042 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 8 00:37:03.799095 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 8 00:37:03.799147 kernel: pci 0000:00:16.4: 
bridge window [mem 0xfc400000-0xfc4fffff] May 8 00:37:03.799200 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 8 00:37:03.799253 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 8 00:37:03.799308 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 8 00:37:03.799375 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 8 00:37:03.799444 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 8 00:37:03.799498 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 8 00:37:03.799551 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 8 00:37:03.799658 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 8 00:37:03.799723 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 8 00:37:03.799792 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 8 00:37:03.799849 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 8 00:37:03.799905 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 8 00:37:03.799958 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 8 00:37:03.800021 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 8 00:37:03.800079 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 8 00:37:03.800132 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 8 00:37:03.800184 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 8 00:37:03.800236 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 8 00:37:03.800302 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 8 00:37:03.800356 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 8 00:37:03.800408 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 8 00:37:03.800464 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 8 00:37:03.800517 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 8 00:37:03.800570 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 8 00:37:03.800700 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 8 00:37:03.800757 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 8 00:37:03.800815 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 8 00:37:03.800877 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 8 00:37:03.800931 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 8 00:37:03.800984 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 8 00:37:03.801040 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 8 00:37:03.801093 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 8 00:37:03.801146 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 8 00:37:03.801198 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 8 00:37:03.801251 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 8 00:37:03.801303 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 8 00:37:03.801355 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 8 00:37:03.801411 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 8 00:37:03.801484 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 8 00:37:03.801559 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 8 00:37:03.803723 kernel: pci 
0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 8 00:37:03.803822 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 8 00:37:03.803907 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 8 00:37:03.803990 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 8 00:37:03.804069 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 8 00:37:03.804151 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 8 00:37:03.804232 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 8 00:37:03.804312 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 8 00:37:03.804397 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 8 00:37:03.804483 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 8 00:37:03.804562 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 8 00:37:03.804719 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 8 00:37:03.804805 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 8 00:37:03.804887 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 8 00:37:03.804970 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 8 00:37:03.805053 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 8 00:37:03.805133 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 8 00:37:03.805218 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 8 00:37:03.805299 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 8 00:37:03.805392 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 8 00:37:03.805474 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 8 00:37:03.805557 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 8 00:37:03.805869 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 8 00:37:03.805930 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] May 8 00:37:03.805979 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] May 8 00:37:03.806026 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] May 8 00:37:03.806443 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] May 8 00:37:03.806505 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] May 8 00:37:03.806567 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] May 8 00:37:03.806666 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] May 8 00:37:03.807058 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] May 8 00:37:03.807115 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] May 8 00:37:03.807164 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] May 8 00:37:03.807213 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] May 8 00:37:03.807285 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] May 8 00:37:03.807348 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] May 8 00:37:03.807404 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] May 8 00:37:03.807453 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] May 8 00:37:03.807501 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:37:03.807557 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] May 8 00:37:03.807626 kernel: pci_bus 0000:04: resource 1 [mem 
0xfd100000-0xfd1fffff] May 8 00:37:03.807683 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] May 8 00:37:03.807737 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] May 8 00:37:03.807805 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] May 8 00:37:03.807855 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] May 8 00:37:03.807909 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] May 8 00:37:03.807958 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] May 8 00:37:03.808011 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] May 8 00:37:03.808063 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] May 8 00:37:03.808116 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] May 8 00:37:03.808165 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] May 8 00:37:03.808218 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] May 8 00:37:03.808267 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] May 8 00:37:03.808323 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] May 8 00:37:03.808398 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] May 8 00:37:03.808456 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] May 8 00:37:03.808505 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] May 8 00:37:03.808553 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:37:03.809030 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] May 8 00:37:03.809090 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] May 8 00:37:03.809145 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] May 8 00:37:03.809203 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] May 8 00:37:03.810546 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] May 8 00:37:03.810627 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] May 8 00:37:03.810689 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] May 8 00:37:03.810740 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] May 8 00:37:03.810793 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] May 8 00:37:03.810847 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] May 8 00:37:03.810900 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] May 8 00:37:03.810950 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] May 8 00:37:03.811003 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] May 8 00:37:03.811052 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] May 8 00:37:03.811104 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] May 8 00:37:03.811156 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] May 8 00:37:03.811209 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] May 8 00:37:03.811258 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] May 8 00:37:03.811306 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] May 8 00:37:03.811360 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] May 8 00:37:03.811410 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] May 8 00:37:03.811458 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] May 8 00:37:03.811518 kernel: pci_bus 
0000:15: resource 0 [io 0xe000-0xefff] May 8 00:37:03.811568 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] May 8 00:37:03.811663 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] May 8 00:37:03.811716 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] May 8 00:37:03.811770 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] May 8 00:37:03.811824 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] May 8 00:37:03.811876 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] May 8 00:37:03.811930 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] May 8 00:37:03.811979 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] May 8 00:37:03.812050 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] May 8 00:37:03.812100 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] May 8 00:37:03.812154 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] May 8 00:37:03.812209 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] May 8 00:37:03.812266 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] May 8 00:37:03.812316 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] May 8 00:37:03.812365 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] May 8 00:37:03.812419 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] May 8 00:37:03.812468 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] May 8 00:37:03.812519 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] May 8 00:37:03.812575 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] May 8 00:37:03.812648 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] May 8 00:37:03.812704 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] May 8 00:37:03.812754 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] May 8 00:37:03.812809 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] May 8 00:37:03.812863 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] May 8 00:37:03.812917 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] May 8 00:37:03.812979 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] May 8 00:37:03.813034 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] May 8 00:37:03.813085 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] May 8 00:37:03.813140 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] May 8 00:37:03.813189 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] May 8 00:37:03.813253 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 8 00:37:03.813263 kernel: PCI: CLS 32 bytes, default 64 May 8 00:37:03.813270 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 8 00:37:03.813277 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 8 00:37:03.813284 kernel: clocksource: Switched to clocksource tsc May 8 00:37:03.813291 kernel: Initialise system trusted keyrings May 8 00:37:03.813299 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 8 00:37:03.813306 kernel: Key type asymmetric registered May 8 00:37:03.813314 kernel: Asymmetric key parser 'x509' registered May 8 00:37:03.813320 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 251) May 8 00:37:03.813327 kernel: io scheduler mq-deadline registered May 8 00:37:03.813334 kernel: io scheduler kyber registered May 8 00:37:03.813340 kernel: io scheduler bfq registered May 8 00:37:03.813402 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 May 8 00:37:03.813458 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.813514 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 May 8 00:37:03.813586 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.813695 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 May 8 00:37:03.813766 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.813822 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 May 8 00:37:03.813880 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.813936 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 May 8 00:37:03.813990 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.814048 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 May 8 00:37:03.814104 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.814161 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 May 8 00:37:03.814215 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.814270 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 May 8 00:37:03.814327 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.814382 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 May 8 00:37:03.814436 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.814491 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 May 8 00:37:03.814544 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.814925 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 May 8 00:37:03.814990 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.815049 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 May 8 00:37:03.815103 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.815158 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 May 8 00:37:03.815212 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.815265 kernel: pcieport 0000:00:16.5: PME: Signaling 
with IRQ 37 May 8 00:37:03.815321 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.815376 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 May 8 00:37:03.815883 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.815955 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 May 8 00:37:03.816013 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.816073 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 May 8 00:37:03.816128 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.816182 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 May 8 00:37:03.816235 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.816290 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 May 8 00:37:03.816344 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.816405 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 May 8 00:37:03.816460 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.816518 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 May 8 00:37:03.816573 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.816709 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 May 8 00:37:03.816765 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.818690 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 May 8 00:37:03.818753 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.818811 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 May 8 00:37:03.818866 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.818922 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 May 8 00:37:03.818975 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.819030 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 May 8 00:37:03.819095 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.819152 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 May 8 00:37:03.819206 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.819265 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 May 8 00:37:03.819319 
kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.819394 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 May 8 00:37:03.819449 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.819503 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 May 8 00:37:03.819557 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.819636 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 May 8 00:37:03.819693 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.819750 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 May 8 00:37:03.819805 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:37:03.819815 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 8 00:37:03.819822 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 00:37:03.819828 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 8 00:37:03.819835 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 May 8 00:37:03.819841 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 8 00:37:03.819850 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 8 00:37:03.819905 kernel: rtc_cmos 00:01: registered as rtc0 May 8 00:37:03.819955 kernel: rtc_cmos 00:01: setting system clock to 2025-05-08T00:37:03 UTC (1746664623) May 8 00:37:03.819965 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 8 00:37:03.820011 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram May 8 00:37:03.820020 kernel: intel_pstate: CPU model not supported May 8 00:37:03.820026 kernel: NET: Registered PF_INET6 protocol family May 8 00:37:03.820033 kernel: Segment Routing with IPv6 May 8 00:37:03.820041 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:37:03.820048 kernel: NET: Registered PF_PACKET protocol family May 8 00:37:03.820055 kernel: Key type dns_resolver registered May 8 00:37:03.820062 kernel: IPI shorthand broadcast: enabled May 8 00:37:03.820068 kernel: sched_clock: Marking stable (953004015, 227844060)->(1244553610, -63705535) May 8 00:37:03.820074 kernel: registered taskstats version 1 May 8 00:37:03.820081 kernel: Loading compiled-in X.509 certificates May 8 00:37:03.820087 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 75e4e434c57439d3f2eaf7797bbbcdd698dafd0e' May 8 00:37:03.820094 kernel: Key type .fscrypt registered May 8 00:37:03.820102 kernel: Key type fscrypt-provisioning registered May 8 00:37:03.820108 kernel: ima: No TPM chip found, activating TPM-bypass! 
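The rtc_cmos line above reports the boot-time wall clock both as an ISO timestamp and as a Unix epoch (1746664623). A quick sanity check that the two agree, assuming GNU coreutils' date is available:

    date -u -d @1746664623
    # Thu May  8 00:37:03 UTC 2025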
May 8 00:37:03.820115 kernel: ima: Allocated hash algorithm: sha1 May 8 00:37:03.820121 kernel: ima: No architecture policies found May 8 00:37:03.820128 kernel: clk: Disabling unused clocks May 8 00:37:03.820134 kernel: Freeing unused kernel image (initmem) memory: 42856K May 8 00:37:03.820141 kernel: Write protecting the kernel read-only data: 36864k May 8 00:37:03.820148 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 8 00:37:03.820155 kernel: Run /init as init process May 8 00:37:03.820162 kernel: with arguments: May 8 00:37:03.820168 kernel: /init May 8 00:37:03.820175 kernel: with environment: May 8 00:37:03.820181 kernel: HOME=/ May 8 00:37:03.820187 kernel: TERM=linux May 8 00:37:03.820194 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:37:03.820202 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:37:03.820211 systemd[1]: Detected virtualization vmware. May 8 00:37:03.820219 systemd[1]: Detected architecture x86-64. May 8 00:37:03.820226 systemd[1]: Running in initrd. May 8 00:37:03.820232 systemd[1]: No hostname configured, using default hostname. May 8 00:37:03.820239 systemd[1]: Hostname set to <localhost>. May 8 00:37:03.820246 systemd[1]: Initializing machine ID from random generator. May 8 00:37:03.820252 systemd[1]: Queued start job for default target initrd.target. May 8 00:37:03.820259 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:37:03.820266 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:37:03.820275 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 8 00:37:03.820282 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:37:03.820288 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 8 00:37:03.820295 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 8 00:37:03.820304 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 8 00:37:03.820311 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 8 00:37:03.820319 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:37:03.820326 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:37:03.820333 systemd[1]: Reached target paths.target - Path Units. May 8 00:37:03.820340 systemd[1]: Reached target slices.target - Slice Units. May 8 00:37:03.820346 systemd[1]: Reached target swap.target - Swaps. May 8 00:37:03.820353 systemd[1]: Reached target timers.target - Timer Units. May 8 00:37:03.820360 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:37:03.820366 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:37:03.820373 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 8 00:37:03.820381 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
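The \x2d sequences in the .device unit names above come from systemd's unit-name escaping: the leading slash is dropped, path separators become dashes, and literal dashes (and other special characters) are hex-escaped. The names can be reproduced with systemd-escape; a sketch, with the expected output shown as a comment:

    systemd-escape --path --suffix=device /dev/disk/by-label/EFI-SYSTEM
    # dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device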
May 8 00:37:03.820388 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:37:03.820395 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:37:03.820401 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:37:03.820408 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:37:03.820415 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 8 00:37:03.820422 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:37:03.820429 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 8 00:37:03.820437 systemd[1]: Starting systemd-fsck-usr.service... May 8 00:37:03.820444 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:37:03.820450 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:37:03.820457 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:37:03.820464 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 8 00:37:03.820471 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:37:03.820490 systemd-journald[215]: Collecting audit messages is disabled. May 8 00:37:03.820508 systemd[1]: Finished systemd-fsck-usr.service. May 8 00:37:03.820515 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:37:03.820523 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 00:37:03.820531 kernel: Bridge firewalling registered May 8 00:37:03.820537 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:37:03.820544 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:37:03.820551 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:37:03.820558 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:37:03.820565 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:37:03.820572 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:37:03.820581 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:37:03.820587 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 8 00:37:03.820595 systemd-journald[215]: Journal started May 8 00:37:03.823416 systemd-journald[215]: Runtime Journal (/run/log/journal/a2a57cb6683c4be9873d909dcb0fbeed) is 4.8M, max 38.6M, 33.8M free. May 8 00:37:03.755333 systemd-modules-load[216]: Inserted module 'overlay' May 8 00:37:03.782627 systemd-modules-load[216]: Inserted module 'br_netfilter' May 8 00:37:03.825625 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:37:03.825405 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:37:03.825617 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
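The bridge-filtering warning above also names the remedy: load br_netfilter explicitly if bridged traffic still needs to traverse arp/ip/ip6tables. On a live system that is a single modprobe, made persistent with a modules-load.d drop-in (the file name below is arbitrary):

    modprobe br_netfilter
    echo br_netfilter > /etc/modules-load.d/br_netfilter.conf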
May 8 00:37:03.828670 dracut-cmdline[236]: dracut-dracut-053 May 8 00:37:03.831340 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90 May 8 00:37:03.834038 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:37:03.839132 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:37:03.840424 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:37:03.864419 systemd-resolved[271]: Positive Trust Anchors: May 8 00:37:03.864432 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:37:03.864458 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:37:03.866454 systemd-resolved[271]: Defaulting to hostname 'linux'. May 8 00:37:03.867247 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:37:03.867437 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:37:03.888621 kernel: SCSI subsystem initialized May 8 00:37:03.894616 kernel: Loading iSCSI transport class v2.0-870. May 8 00:37:03.902621 kernel: iscsi: registered transport (tcp) May 8 00:37:03.916624 kernel: iscsi: registered transport (qla4xxx) May 8 00:37:03.916670 kernel: QLogic iSCSI HBA Driver May 8 00:37:03.937556 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 8 00:37:03.940785 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 8 00:37:03.957848 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 00:37:03.957901 kernel: device-mapper: uevent: version 1.0.3 May 8 00:37:03.958991 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 8 00:37:03.994619 kernel: raid6: avx2x4 gen() 42033 MB/s May 8 00:37:04.011617 kernel: raid6: avx2x2 gen() 37431 MB/s May 8 00:37:04.029202 kernel: raid6: avx2x1 gen() 25775 MB/s May 8 00:37:04.029258 kernel: raid6: using algorithm avx2x4 gen() 42033 MB/s May 8 00:37:04.046899 kernel: raid6: .... xor() 15842 MB/s, rmw enabled May 8 00:37:04.046956 kernel: raid6: using avx2x2 recovery algorithm May 8 00:37:04.063622 kernel: xor: automatically using best checksumming function avx May 8 00:37:04.174619 kernel: Btrfs loaded, zoned=no, fsverity=no May 8 00:37:04.182573 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 8 00:37:04.186769 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 8 00:37:04.199742 systemd-udevd[433]: Using default interface naming scheme 'v255'. May 8 00:37:04.202499 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:37:04.209755 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 8 00:37:04.216930 dracut-pre-trigger[438]: rd.md=0: removing MD RAID activation May 8 00:37:04.235295 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:37:04.238724 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:37:04.315709 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:37:04.320757 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 8 00:37:04.330636 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 00:37:04.331298 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:37:04.331722 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:37:04.332144 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:37:04.335757 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 00:37:04.350503 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 00:37:04.390613 kernel: VMware PVSCSI driver - version 1.0.7.0-k May 8 00:37:04.397404 kernel: vmw_pvscsi: using 64bit dma May 8 00:37:04.397446 kernel: vmw_pvscsi: max_id: 16 May 8 00:37:04.397455 kernel: vmw_pvscsi: setting ring_pages to 8 May 8 00:37:04.397463 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI May 8 00:37:04.406613 kernel: vmw_pvscsi: enabling reqCallThreshold May 8 00:37:04.406649 kernel: vmw_pvscsi: driver-based request coalescing enabled May 8 00:37:04.406658 kernel: vmw_pvscsi: using MSI-X May 8 00:37:04.411288 kernel: cryptd: max_cpu_qlen set to 1000 May 8 00:37:04.411324 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 May 8 00:37:04.411352 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 May 8 00:37:04.428200 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 May 8 00:37:04.428298 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 May 8 00:37:04.428375 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps May 8 00:37:04.428448 kernel: AVX2 version of gcm_enc/dec engaged. May 8 00:37:04.415182 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:37:04.419383 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:37:04.419616 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:37:04.419720 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:37:04.419750 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:37:04.423308 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:37:04.430924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:37:04.431779 kernel: AES CTR mode by8 optimization enabled May 8 00:37:04.434727 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 May 8 00:37:04.439648 kernel: libata version 3.00 loaded. 
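The eth0 -> ens192 rename above is predictable interface naming under scheme 'v255': 'en' for Ethernet plus 's192' for hot-plug slot 192, which matches the pciehp Slot #192 reported earlier on the 00:16.0 root port leading to the NIC at 0000:0b:00.0. udevadm can show the derivation on a running system; expect something like ID_NET_NAME_SLOT=ens192 among the reported properties:

    udevadm test-builtin net_id /sys/class/net/ens192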
May 8 00:37:04.442807 kernel: ata_piix 0000:00:07.1: version 2.13 May 8 00:37:04.444207 kernel: scsi host1: ata_piix May 8 00:37:04.444317 kernel: scsi host2: ata_piix May 8 00:37:04.444429 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 May 8 00:37:04.444445 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 May 8 00:37:04.452419 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:37:04.457768 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) May 8 00:37:04.459394 kernel: sd 0:0:0:0: [sda] Write Protect is off May 8 00:37:04.459470 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 May 8 00:37:04.459553 kernel: sd 0:0:0:0: [sda] Cache data unavailable May 8 00:37:04.459640 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through May 8 00:37:04.459707 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:37:04.459720 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 8 00:37:04.461744 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:37:04.475505 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:37:04.613677 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 May 8 00:37:04.619646 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 May 8 00:37:04.645621 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray May 8 00:37:04.658330 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 8 00:37:04.658343 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (485) May 8 00:37:04.658350 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 8 00:37:04.661912 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. May 8 00:37:04.664861 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 8 00:37:04.665609 kernel: BTRFS: device fsid 28014d97-e6d7-4db4-b1d9-76a980e09972 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (477) May 8 00:37:04.668553 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. May 8 00:37:04.670900 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. May 8 00:37:04.671155 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. May 8 00:37:04.675705 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 00:37:04.705631 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:37:04.710164 kernel: GPT:disk_guids don't match. May 8 00:37:04.710213 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 00:37:04.710222 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:37:04.715626 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:37:05.715135 disk-uuid[589]: The operation has completed successfully. May 8 00:37:05.715848 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:37:05.752835 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 00:37:05.752890 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 00:37:05.766695 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
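verity-setup.service assembles /dev/mapper/usr from the verity.usr= and verity.usrhash= kernel arguments seen on the command line. A rough hand-run equivalent is sketched below; it assumes the dm-verity hash tree is appended to the USR partition itself (as on CoreOS-derived images), and <hash-offset> is a placeholder, not a value recoverable from this log:

    veritysetup open \
        /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132 usr \
        /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132 \
        86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90 \
        --hash-offset=<hash-offset>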
May 8 00:37:05.768369 sh[610]: Success May 8 00:37:05.776610 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 8 00:37:05.826321 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 00:37:05.827242 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 00:37:05.827499 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 8 00:37:05.848015 kernel: BTRFS info (device dm-0): first mount of filesystem 28014d97-e6d7-4db4-b1d9-76a980e09972 May 8 00:37:05.848068 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 8 00:37:05.848091 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 00:37:05.848105 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 00:37:05.848856 kernel: BTRFS info (device dm-0): using free space tree May 8 00:37:05.857618 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 8 00:37:05.859712 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 00:37:05.874691 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... May 8 00:37:05.875905 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 00:37:05.890751 kernel: BTRFS info (device sda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:37:05.890783 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:37:05.890792 kernel: BTRFS info (device sda6): using free space tree May 8 00:37:05.895607 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:37:05.901536 systemd[1]: mnt-oem.mount: Deactivated successfully. May 8 00:37:05.903674 kernel: BTRFS info (device sda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:37:05.909133 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 8 00:37:05.912692 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 00:37:05.934133 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 8 00:37:05.940792 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 8 00:37:06.000357 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:37:06.003654 ignition[670]: Ignition 2.19.0 May 8 00:37:06.003767 ignition[670]: Stage: fetch-offline May 8 00:37:06.005742 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
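The BTRFS 'nologreplay' deprecation warning above states its own replacement; an equivalent read-only mount with the current option spelling would look like this (illustrative invocation, not taken from this boot):

    mount -t btrfs -o ro,rescue=nologreplay /dev/mapper/usr /sysusr/usr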
May 8 00:37:06.003790 ignition[670]: no configs at "/usr/lib/ignition/base.d" May 8 00:37:06.003796 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:37:06.003852 ignition[670]: parsed url from cmdline: "" May 8 00:37:06.003854 ignition[670]: no config URL provided May 8 00:37:06.003856 ignition[670]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:37:06.003861 ignition[670]: no config at "/usr/lib/ignition/user.ign" May 8 00:37:06.004284 ignition[670]: config successfully fetched May 8 00:37:06.004301 ignition[670]: parsing config with SHA512: 3ed07f39835fece6e97a36a5f86b387b5a89fb34ad2634218656ceee71d4fe703b5ee88936467004a66b66816b96097bfe65219db2dfd9ae5ee761102e89a5e1 May 8 00:37:06.008266 unknown[670]: fetched base config from "system" May 8 00:37:06.008394 unknown[670]: fetched user config from "vmware" May 8 00:37:06.008821 ignition[670]: fetch-offline: fetch-offline passed May 8 00:37:06.009000 ignition[670]: Ignition finished successfully May 8 00:37:06.009788 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:37:06.019169 systemd-networkd[802]: lo: Link UP May 8 00:37:06.019176 systemd-networkd[802]: lo: Gained carrier May 8 00:37:06.020205 systemd-networkd[802]: Enumeration completed May 8 00:37:06.020422 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:37:06.020581 systemd-networkd[802]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. May 8 00:37:06.020590 systemd[1]: Reached target network.target - Network. May 8 00:37:06.020700 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 8 00:37:06.024628 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 8 00:37:06.024827 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 8 00:37:06.022527 systemd-networkd[802]: ens192: Link UP May 8 00:37:06.022529 systemd-networkd[802]: ens192: Gained carrier May 8 00:37:06.026072 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 8 00:37:06.035370 ignition[806]: Ignition 2.19.0 May 8 00:37:06.035379 ignition[806]: Stage: kargs May 8 00:37:06.035491 ignition[806]: no configs at "/usr/lib/ignition/base.d" May 8 00:37:06.035498 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:37:06.036126 ignition[806]: kargs: kargs passed May 8 00:37:06.036162 ignition[806]: Ignition finished successfully May 8 00:37:06.037497 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 8 00:37:06.041792 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 8 00:37:06.050140 ignition[813]: Ignition 2.19.0 May 8 00:37:06.050150 ignition[813]: Stage: disks May 8 00:37:06.050259 ignition[813]: no configs at "/usr/lib/ignition/base.d" May 8 00:37:06.050266 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:37:06.050816 ignition[813]: disks: disks passed May 8 00:37:06.050849 ignition[813]: Ignition finished successfully May 8 00:37:06.051569 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 8 00:37:06.052083 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 00:37:06.052194 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:37:06.052301 systemd[1]: Reached target local-fs.target - Local File Systems. 
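systemd-networkd configures ens192 above from /etc/systemd/network/10-dracut-cmdline-99.network, a unit the initrd generated from the kernel command line. Its contents are not shown in the log; a unit of this kind typically amounts to little more than a match on the interface plus DHCP (a guess at the shape, not the actual file):

    [Match]
    Name=ens192

    [Network]
    DHCP=yes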
May 8 00:37:06.052393 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:37:06.052483 systemd[1]: Reached target basic.target - Basic System. May 8 00:37:06.057720 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 00:37:06.068596 systemd-fsck[821]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 8 00:37:06.069818 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 00:37:06.075679 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 00:37:06.132642 kernel: EXT4-fs (sda9): mounted filesystem 36960c89-ba45-4808-a41c-bf61ce9470a3 r/w with ordered data mode. Quota mode: none. May 8 00:37:06.132636 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 00:37:06.132992 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 8 00:37:06.144687 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:37:06.146457 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 00:37:06.146924 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 8 00:37:06.146966 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 00:37:06.146987 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:37:06.149961 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 00:37:06.151007 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 8 00:37:06.153631 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (829) May 8 00:37:06.155819 kernel: BTRFS info (device sda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:37:06.155842 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:37:06.155851 kernel: BTRFS info (device sda6): using free space tree May 8 00:37:06.160663 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:37:06.161362 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 00:37:06.178301 initrd-setup-root[853]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:37:06.180879 initrd-setup-root[860]: cut: /sysroot/etc/group: No such file or directory May 8 00:37:06.183260 initrd-setup-root[867]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:37:06.185341 initrd-setup-root[874]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:37:06.256758 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 00:37:06.261689 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 00:37:06.263104 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 00:37:06.267611 kernel: BTRFS info (device sda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:37:06.279831 ignition[941]: INFO : Ignition 2.19.0 May 8 00:37:06.279831 ignition[941]: INFO : Stage: mount May 8 00:37:06.279831 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:37:06.279831 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:37:06.280399 ignition[941]: INFO : mount: mount passed May 8 00:37:06.280399 ignition[941]: INFO : Ignition finished successfully May 8 00:37:06.280981 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
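The files stage that follows is driven by the user config Ignition fetched and hashed earlier. A minimal spec-3-style config producing entries like the ones below might look as follows; the version number and unit contents are assumptions, with the helm URL taken from the log:

    {
      "ignition": { "version": "3.4.0" },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz" }
          }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\n..." }
        ]
      }
    }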
May 8 00:37:06.285692 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 00:37:06.285884 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 00:37:06.844856 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 8 00:37:06.849741 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:37:06.943632 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (953) May 8 00:37:06.953667 kernel: BTRFS info (device sda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:37:06.953703 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:37:06.955904 kernel: BTRFS info (device sda6): using free space tree May 8 00:37:07.010618 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:37:07.019034 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 00:37:07.035247 ignition[970]: INFO : Ignition 2.19.0 May 8 00:37:07.035247 ignition[970]: INFO : Stage: files May 8 00:37:07.035247 ignition[970]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:37:07.035247 ignition[970]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:37:07.035829 ignition[970]: DEBUG : files: compiled without relabeling support, skipping May 8 00:37:07.040623 ignition[970]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:37:07.040623 ignition[970]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:37:07.083433 ignition[970]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:37:07.083802 ignition[970]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:37:07.084178 unknown[970]: wrote ssh authorized keys file for user: core May 8 00:37:07.084481 ignition[970]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:37:07.103655 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 8 00:37:07.103655 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 8 00:37:07.144169 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 8 00:37:07.310415 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 8 00:37:07.310415 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 8 00:37:07.311005 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:37:07.311005 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 00:37:07.311005 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 00:37:07.311005 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:37:07.311005 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:37:07.311005 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: 
op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:37:07.311005 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:37:07.311005 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:37:07.312941 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:37:07.312941 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:37:07.312941 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:37:07.312941 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:37:07.312941 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 8 00:37:07.826163 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 8 00:37:08.063803 systemd-networkd[802]: ens192: Gained IPv6LL May 8 00:37:08.125637 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:37:08.125637 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 8 00:37:08.126216 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 8 00:37:08.126216 ignition[970]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 8 00:37:08.126216 ignition[970]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:37:08.126216 ignition[970]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:37:08.126216 ignition[970]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 8 00:37:08.126216 ignition[970]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 8 00:37:08.126216 ignition[970]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:37:08.127192 ignition[970]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:37:08.127192 ignition[970]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 8 00:37:08.127192 ignition[970]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 8 00:37:08.202915 ignition[970]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:37:08.206348 ignition[970]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for 
"coreos-metadata.service" May 8 00:37:08.206348 ignition[970]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 8 00:37:08.206348 ignition[970]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 8 00:37:08.206348 ignition[970]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 8 00:37:08.206348 ignition[970]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:37:08.206348 ignition[970]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:37:08.206348 ignition[970]: INFO : files: files passed May 8 00:37:08.206348 ignition[970]: INFO : Ignition finished successfully May 8 00:37:08.207722 systemd[1]: Finished ignition-files.service - Ignition (files). May 8 00:37:08.216730 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 00:37:08.219048 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 00:37:08.219362 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:37:08.219446 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 8 00:37:08.226841 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:37:08.226841 initrd-setup-root-after-ignition[1000]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 00:37:08.227480 initrd-setup-root-after-ignition[1004]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:37:08.228437 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:37:08.228850 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 00:37:08.231756 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 00:37:08.251710 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:37:08.251769 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 00:37:08.252176 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 00:37:08.252307 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:37:08.252508 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:37:08.252978 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:37:08.262793 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:37:08.266685 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 00:37:08.272155 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 00:37:08.272455 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:37:08.272640 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:37:08.272775 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:37:08.272841 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:37:08.273060 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:37:08.273274 systemd[1]: Stopped target basic.target - Basic System. 
May 8 00:37:08.273451 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:37:08.273650 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:37:08.273865 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:37:08.274062 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:37:08.274394 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:37:08.274619 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:37:08.274818 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:37:08.275015 systemd[1]: Stopped target swap.target - Swaps. May 8 00:37:08.275181 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:37:08.275239 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:37:08.275519 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 00:37:08.275697 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:37:08.275878 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 00:37:08.275920 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:37:08.276069 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:37:08.276128 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:37:08.276368 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:37:08.276429 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:37:08.276680 systemd[1]: Stopped target paths.target - Path Units. May 8 00:37:08.276829 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:37:08.280619 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:37:08.280792 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:37:08.280993 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:37:08.281181 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:37:08.281244 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:37:08.281454 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:37:08.281499 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:37:08.281734 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:37:08.281793 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:37:08.282043 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:37:08.282098 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:37:08.290726 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:37:08.292086 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:37:08.292193 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:37:08.292280 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:37:08.292469 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:37:08.292547 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:37:08.295538 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
May 8 00:37:08.295587 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 00:37:08.301283 ignition[1024]: INFO : Ignition 2.19.0 May 8 00:37:08.301283 ignition[1024]: INFO : Stage: umount May 8 00:37:08.301283 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:37:08.301283 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:37:08.304024 ignition[1024]: INFO : umount: umount passed May 8 00:37:08.304024 ignition[1024]: INFO : Ignition finished successfully May 8 00:37:08.302532 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:37:08.302607 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:37:08.302849 systemd[1]: Stopped target network.target - Network. May 8 00:37:08.302937 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:37:08.302981 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 00:37:08.303104 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:37:08.303125 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:37:08.303229 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:37:08.303249 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:37:08.303351 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:37:08.303370 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:37:08.303541 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:37:08.303690 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:37:08.306512 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:37:08.307412 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:37:08.307466 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:37:08.308029 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:37:08.308061 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:37:08.311733 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:37:08.311818 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:37:08.311846 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:37:08.311965 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. May 8 00:37:08.312005 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 8 00:37:08.312168 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:37:08.312380 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:37:08.312432 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:37:08.315834 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:37:08.315867 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:37:08.316673 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:37:08.316698 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 00:37:08.316798 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 00:37:08.316819 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
May 8 00:37:08.320201 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:37:08.320251 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:37:08.323113 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:37:08.323220 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:37:08.323592 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:37:08.323639 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 00:37:08.323920 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:37:08.323942 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:37:08.324157 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:37:08.324186 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 00:37:08.324545 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:37:08.324574 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:37:08.324953 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:37:08.324981 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:37:08.333704 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:37:08.333840 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:37:08.333878 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:37:08.334035 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 8 00:37:08.334064 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:37:08.334212 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:37:08.334241 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:37:08.334390 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:37:08.334418 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:37:08.337531 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:37:08.337622 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:37:08.410447 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:37:08.410524 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:37:08.411027 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:37:08.411180 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:37:08.411230 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:37:08.416717 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:37:08.428172 systemd[1]: Switching root. May 8 00:37:08.459484 systemd-journald[215]: Journal stopped May 8 00:37:10.027554 systemd-journald[215]: Received SIGTERM from PID 1 (systemd). 
May 8 00:37:10.027575 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:37:10.027584 kernel: SELinux: policy capability open_perms=1 May 8 00:37:10.027589 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:37:10.027594 kernel: SELinux: policy capability always_check_network=0 May 8 00:37:10.027722 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:37:10.027736 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:37:10.027743 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:37:10.027748 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:37:10.027755 systemd[1]: Successfully loaded SELinux policy in 72.161ms. May 8 00:37:10.027761 kernel: audit: type=1403 audit(1746664629.412:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:37:10.027767 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.305ms. May 8 00:37:10.027774 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:37:10.027782 systemd[1]: Detected virtualization vmware. May 8 00:37:10.027789 systemd[1]: Detected architecture x86-64. May 8 00:37:10.027795 systemd[1]: Detected first boot. May 8 00:37:10.027802 systemd[1]: Initializing machine ID from random generator. May 8 00:37:10.027809 zram_generator::config[1067]: No configuration found. May 8 00:37:10.027817 systemd[1]: Populated /etc with preset unit settings. May 8 00:37:10.027824 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 8 00:37:10.027830 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" May 8 00:37:10.027837 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 00:37:10.027843 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 8 00:37:10.027850 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:37:10.027857 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 00:37:10.027864 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 00:37:10.027871 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 00:37:10.027877 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 00:37:10.027884 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 00:37:10.027890 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 00:37:10.027897 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 00:37:10.027905 systemd[1]: Created slice user.slice - User and Session Slice. May 8 00:37:10.027912 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:37:10.027918 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:37:10.027925 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
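The "Ignoring unknown escape sequences" complaint at coreos-metadata.service:11 above comes from systemd's unit-file parser: backslashes in a quoted ExecStart argument are interpreted as C-style escapes, and grep's \K and \d are not recognized escapes. Below is a hedged sketch of a rewrite that would avoid the warning, assuming the unit wraps the logged pipeline in a shell; note the doubled backslashes and the $$ needed to pass a literal $ through systemd:

```yaml
# Hypothetical rewrite of the unit that triggers the warning; the real unit
# body is not in the log. \\ survives unit-file parsing as \, and $$ as $.
variant: flatcar
version: 1.0.0
systemd:
  units:
    - name: coreos-metadata.service
      enabled: false
      contents: |
        [Service]
        Type=oneshot
        Environment=OUTPUT=/run/metadata/flatcar
        ExecStart=/usr/bin/bash -c 'echo "COREOS_CUSTOM_PRIVATE_IPV4=$$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \\K[\\d.]+")" > ${OUTPUT}'
```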
May 8 00:37:10.027931 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 00:37:10.027938 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 00:37:10.027944 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:37:10.027951 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 8 00:37:10.027959 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:37:10.027966 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 8 00:37:10.027974 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 00:37:10.027980 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 8 00:37:10.027987 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 00:37:10.027994 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:37:10.028001 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:37:10.028008 systemd[1]: Reached target slices.target - Slice Units. May 8 00:37:10.028016 systemd[1]: Reached target swap.target - Swaps. May 8 00:37:10.028022 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 00:37:10.028029 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 00:37:10.028035 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:37:10.028042 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:37:10.028051 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:37:10.028058 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 00:37:10.028065 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 00:37:10.028072 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 00:37:10.028079 systemd[1]: Mounting media.mount - External Media Directory... May 8 00:37:10.028086 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:37:10.028093 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 00:37:10.028100 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 00:37:10.028108 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 00:37:10.028115 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:37:10.028122 systemd[1]: Reached target machines.target - Containers. May 8 00:37:10.028129 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 00:37:10.028136 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... May 8 00:37:10.028143 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:37:10.028150 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 00:37:10.028156 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:37:10.028164 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
May 8 00:37:10.028172 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:37:10.028178 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 00:37:10.028185 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:37:10.028192 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:37:10.028199 kernel: fuse: init (API version 7.39) May 8 00:37:10.028205 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:37:10.028212 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 00:37:10.028219 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 00:37:10.028226 systemd[1]: Stopped systemd-fsck-usr.service. May 8 00:37:10.028233 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:37:10.028240 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:37:10.028247 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 00:37:10.028253 kernel: loop: module loaded May 8 00:37:10.028260 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 00:37:10.028267 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:37:10.028273 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:37:10.028280 systemd[1]: Stopped verity-setup.service. May 8 00:37:10.028288 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:37:10.028295 kernel: ACPI: bus type drm_connector registered May 8 00:37:10.028301 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 00:37:10.028308 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 00:37:10.028315 systemd[1]: Mounted media.mount - External Media Directory. May 8 00:37:10.028322 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 00:37:10.028329 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 00:37:10.028336 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 00:37:10.028344 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:37:10.028350 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:37:10.028357 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 00:37:10.028364 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:37:10.028371 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:37:10.028377 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:37:10.028384 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:37:10.028391 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:37:10.028398 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:37:10.028406 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:37:10.028413 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 00:37:10.028420 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 8 00:37:10.028427 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:37:10.028433 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:37:10.028440 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 00:37:10.028447 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 00:37:10.028466 systemd-journald[1157]: Collecting audit messages is disabled. May 8 00:37:10.028483 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 00:37:10.028492 systemd-journald[1157]: Journal started May 8 00:37:10.028508 systemd-journald[1157]: Runtime Journal (/run/log/journal/6e8899c4d9744d2581ed3b2dd42ecae6) is 4.8M, max 38.6M, 33.8M free. May 8 00:37:09.831769 systemd[1]: Queued start job for default target multi-user.target. May 8 00:37:09.849892 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 8 00:37:09.850090 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 00:37:10.030392 jq[1134]: true May 8 00:37:10.030945 jq[1176]: true May 8 00:37:10.034657 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 00:37:10.039626 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 00:37:10.039650 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:37:10.041029 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:37:10.042672 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 8 00:37:10.050793 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 00:37:10.056943 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 00:37:10.056982 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:37:10.068828 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 00:37:10.068865 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:37:10.082653 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 00:37:10.082694 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:37:10.091618 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:37:10.096629 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 00:37:10.106667 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:37:10.115735 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:37:10.110264 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 00:37:10.110699 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 00:37:10.110864 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 00:37:10.111310 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
May 8 00:37:10.133797 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 00:37:10.141120 kernel: loop0: detected capacity change from 0 to 218376 May 8 00:37:10.147787 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 00:37:10.148045 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 00:37:10.156014 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 8 00:37:10.156338 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:37:10.162655 systemd-journald[1157]: Time spent on flushing to /var/log/journal/6e8899c4d9744d2581ed3b2dd42ecae6 is 75.512ms for 1843 entries. May 8 00:37:10.162655 systemd-journald[1157]: System Journal (/var/log/journal/6e8899c4d9744d2581ed3b2dd42ecae6) is 8.0M, max 584.8M, 576.8M free. May 8 00:37:10.262288 systemd-journald[1157]: Received client request to flush runtime journal. May 8 00:37:10.262336 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:37:10.262352 kernel: loop1: detected capacity change from 0 to 2976 May 8 00:37:10.191927 ignition[1177]: Ignition 2.19.0 May 8 00:37:10.206421 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). May 8 00:37:10.192311 ignition[1177]: deleting config from guestinfo properties May 8 00:37:10.217143 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:37:10.204117 ignition[1177]: Successfully deleted config May 8 00:37:10.217654 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 8 00:37:10.219751 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. May 8 00:37:10.219762 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. May 8 00:37:10.228657 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:37:10.232780 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 00:37:10.233154 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:37:10.236841 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 00:37:10.248058 udevadm[1224]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 8 00:37:10.264320 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 00:37:10.284758 kernel: loop2: detected capacity change from 0 to 140768 May 8 00:37:10.285016 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 00:37:10.290734 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:37:10.313938 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. May 8 00:37:10.313952 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. May 8 00:37:10.319628 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
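The journal messages above distinguish the volatile runtime journal in /run/log/journal (4.8M used, 38.6M max) from the persistent journal under /var/log/journal (8.0M used, 584.8M max); systemd-journal-flush.service copies the former into the latter once /var is writable, which is the "Received client request to flush runtime journal" entry. Those limits are defaults computed from disk size; capping them explicitly would take a journald drop-in, e.g. this hypothetical one, provisioned the same way as the other files here:

```yaml
# Hypothetical journald drop-in; the limits in the log are computed defaults,
# not the product of such a file. Key names are standard journald.conf options.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /etc/systemd/journald.conf.d/10-size.conf
      contents:
        inline: |
          [Journal]
          Storage=persistent
          SystemMaxUse=512M
          RuntimeMaxUse=38M
```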
May 8 00:37:10.321678 kernel: loop3: detected capacity change from 0 to 142488 May 8 00:37:10.381589 kernel: loop4: detected capacity change from 0 to 218376 May 8 00:37:10.416614 kernel: loop5: detected capacity change from 0 to 2976 May 8 00:37:10.451617 kernel: loop6: detected capacity change from 0 to 140768 May 8 00:37:10.469739 kernel: loop7: detected capacity change from 0 to 142488 May 8 00:37:10.488961 (sd-merge)[1238]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. May 8 00:37:10.489685 (sd-merge)[1238]: Merged extensions into '/usr'. May 8 00:37:10.493271 systemd[1]: Reloading requested from client PID 1185 ('systemd-sysext') (unit systemd-sysext.service)... May 8 00:37:10.493281 systemd[1]: Reloading... May 8 00:37:10.552621 zram_generator::config[1262]: No configuration found. May 8 00:37:10.650730 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 8 00:37:10.665856 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:37:10.693165 systemd[1]: Reloading finished in 199 ms. May 8 00:37:10.709643 ldconfig[1181]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:37:10.710387 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 00:37:10.714673 systemd[1]: Starting ensure-sysext.service... May 8 00:37:10.716691 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:37:10.729526 systemd[1]: Reloading requested from client PID 1319 ('systemctl') (unit ensure-sysext.service)... May 8 00:37:10.729535 systemd[1]: Reloading... May 8 00:37:10.759471 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:37:10.759909 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 00:37:10.762230 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:37:10.762455 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. May 8 00:37:10.762494 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. May 8 00:37:10.767814 systemd-tmpfiles[1320]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:37:10.767820 systemd-tmpfiles[1320]: Skipping /boot May 8 00:37:10.773613 zram_generator::config[1348]: No configuration found. May 8 00:37:10.777270 systemd-tmpfiles[1320]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:37:10.778632 systemd-tmpfiles[1320]: Skipping /boot May 8 00:37:10.831920 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 8 00:37:10.846539 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:37:10.874085 systemd[1]: Reloading finished in 144 ms. 
May 8 00:37:10.889081 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 00:37:10.889430 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 00:37:10.892814 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:37:10.897884 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:37:10.900873 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 00:37:10.903168 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 00:37:10.905962 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:37:10.909760 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:37:10.911195 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 00:37:10.913531 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:37:10.918843 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:37:10.920308 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:37:10.922751 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:37:10.922927 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:37:10.923001 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:37:10.925993 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 00:37:10.928925 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:37:10.929023 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:37:10.929078 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:37:10.931635 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:37:10.933017 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:37:10.933242 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:37:10.933395 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:37:10.934073 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:37:10.934219 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:37:10.935967 systemd[1]: Finished ensure-sysext.service. May 8 00:37:10.936296 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:37:10.936374 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:37:10.938217 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:37:10.943804 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
May 8 00:37:10.944297 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 00:37:10.947770 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:37:10.948204 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:37:10.949154 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:37:10.954685 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:37:10.954805 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:37:10.958643 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 00:37:10.959699 systemd-udevd[1413]: Using default interface naming scheme 'v255'. May 8 00:37:10.962899 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 00:37:10.969738 augenrules[1442]: No rules May 8 00:37:10.970706 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:37:10.975455 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 00:37:10.982950 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 00:37:10.992669 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:37:10.996038 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:37:11.047691 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 00:37:11.048050 systemd[1]: Reached target time-set.target - System Time Set. May 8 00:37:11.058833 systemd-resolved[1412]: Positive Trust Anchors: May 8 00:37:11.058841 systemd-resolved[1412]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:37:11.058863 systemd-resolved[1412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:37:11.061508 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 8 00:37:11.061684 systemd-networkd[1454]: lo: Link UP May 8 00:37:11.061811 systemd-networkd[1454]: lo: Gained carrier May 8 00:37:11.062635 systemd-resolved[1412]: Defaulting to hostname 'linux'. May 8 00:37:11.062998 systemd-networkd[1454]: Enumeration completed May 8 00:37:11.063044 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:37:11.069704 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 00:37:11.069869 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:37:11.070066 systemd[1]: Reached target network.target - Network. May 8 00:37:11.070147 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:37:11.092429 systemd-networkd[1454]: ens192: Configuring with /etc/systemd/network/00-vmware.network. 
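ens192 is matched by /etc/systemd/network/00-vmware.network, the file Ignition wrote in op(b) earlier in this log. Its contents are not recorded; a typical DHCP configuration for this NIC might look like the following, where the [Match]/[Network] values are assumptions:

```yaml
# Hypothetical contents for the 00-vmware.network written in op(b);
# only the path is taken from the log.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /etc/systemd/network/00-vmware.network
      contents:
        inline: |
          [Match]
          Name=ens192

          [Network]
          DHCP=yes
```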
May 8 00:37:11.095904 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 8 00:37:11.096057 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 8 00:37:11.096939 systemd-networkd[1454]: ens192: Link UP May 8 00:37:11.097429 systemd-networkd[1454]: ens192: Gained carrier May 8 00:37:11.103209 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. May 8 00:37:11.127618 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1465) May 8 00:37:11.151659 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 8 00:37:11.153608 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! May 8 00:37:11.156678 kernel: ACPI: button: Power Button [PWRF] May 8 00:37:11.174993 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 00:37:11.175352 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:37:11.182618 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc May 8 00:37:11.185167 kernel: Guest personality initialized and is active May 8 00:37:11.186617 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 8 00:37:11.186659 kernel: Initialized host personality May 8 00:37:11.205702 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 May 8 00:37:11.205755 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:37:11.211571 (udev-worker)[1461]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. May 8 00:37:11.227622 kernel: mousedev: PS/2 mouse device common for all mice May 8 00:37:11.256962 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 8 00:37:11.264766 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 00:37:11.271348 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 00:37:11.271629 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 00:37:11.275733 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 00:37:11.284340 lvm[1498]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:37:11.298693 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:37:11.304060 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 00:37:11.304451 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:37:11.304581 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:37:11.304830 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 00:37:11.304969 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 00:37:11.305180 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 00:37:11.305338 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 00:37:11.305458 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
May 8 00:37:11.305577 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:37:11.305605 systemd[1]: Reached target paths.target - Path Units. May 8 00:37:11.305709 systemd[1]: Reached target timers.target - Timer Units. May 8 00:37:11.306482 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 00:37:11.307519 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 00:37:11.311641 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 00:37:11.312491 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 00:37:11.312908 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 00:37:11.313048 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:37:11.313141 systemd[1]: Reached target basic.target - Basic System. May 8 00:37:11.313254 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 00:37:11.313267 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 00:37:11.315686 systemd[1]: Starting containerd.service - containerd container runtime... May 8 00:37:11.317155 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 00:37:11.319618 lvm[1505]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:37:11.320549 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:37:11.323893 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 00:37:11.324730 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:37:11.327839 jq[1508]: false May 8 00:37:11.327686 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:37:11.337541 extend-filesystems[1509]: Found loop4 May 8 00:37:11.337541 extend-filesystems[1509]: Found loop5 May 8 00:37:11.337541 extend-filesystems[1509]: Found loop6 May 8 00:37:11.337541 extend-filesystems[1509]: Found loop7 May 8 00:37:11.337541 extend-filesystems[1509]: Found sda May 8 00:37:11.337541 extend-filesystems[1509]: Found sda1 May 8 00:37:11.337541 extend-filesystems[1509]: Found sda2 May 8 00:37:11.337541 extend-filesystems[1509]: Found sda3 May 8 00:37:11.337541 extend-filesystems[1509]: Found usr May 8 00:37:11.337541 extend-filesystems[1509]: Found sda4 May 8 00:37:11.337541 extend-filesystems[1509]: Found sda6 May 8 00:37:11.337541 extend-filesystems[1509]: Found sda7 May 8 00:37:11.337541 extend-filesystems[1509]: Found sda9 May 8 00:37:11.337541 extend-filesystems[1509]: Checking size of /dev/sda9 May 8 00:37:11.338696 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 00:37:11.342369 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 00:37:11.344180 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 00:37:11.346683 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 00:37:11.347178 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:37:11.347582 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. May 8 00:37:11.349851 extend-filesystems[1509]: Old size kept for /dev/sda9 May 8 00:37:11.350217 extend-filesystems[1509]: Found sr0 May 8 00:37:11.353209 dbus-daemon[1507]: [system] SELinux support is enabled May 8 00:37:11.356777 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:37:11.358803 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 00:37:11.360958 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... May 8 00:37:11.361956 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:37:11.364917 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:37:11.365862 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:37:11.365961 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 00:37:11.366122 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:37:11.366209 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 00:37:11.367958 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:37:11.368047 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:37:11.369781 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:37:11.369875 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 00:37:11.374030 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:37:11.374062 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:37:11.374732 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:37:11.374751 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 00:37:11.376489 jq[1527]: true May 8 00:37:11.391155 (ntainerd)[1544]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:37:11.392250 update_engine[1521]: I20250508 00:37:11.392203 1521 main.cc:92] Flatcar Update Engine starting May 8 00:37:11.394716 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. May 8 00:37:11.395646 jq[1541]: true May 8 00:37:11.398683 update_engine[1521]: I20250508 00:37:11.398092 1521 update_check_scheduler.cc:74] Next update check in 9m31s May 8 00:37:11.399303 systemd[1]: Started update-engine.service - Update Engine. May 8 00:37:11.401623 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1464) May 8 00:37:11.404274 tar[1531]: linux-amd64/LICENSE May 8 00:37:11.404274 tar[1531]: linux-amd64/helm May 8 00:37:11.409710 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... May 8 00:37:11.411689 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:37:11.439726 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. 
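update_engine (the Omaha update client logging "Next update check in 9m31s" above) and locksmithd (the reboot coordinator started just above) both consult /etc/flatcar/update.conf, the file written back in op(8). Its contents are not logged, but the strategy="reboot" that locksmithd reports just below suggests something like this hypothetical version:

```yaml
# Hypothetical /etc/flatcar/update.conf consistent with the logged
# locksmithd strategy "reboot"; GROUP is an assumption.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /etc/flatcar/update.conf
      overwrite: true
      contents:
        inline: |
          GROUP=stable
          REBOOT_STRATEGY=reboot
```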
May 8 00:37:11.461281 systemd-logind[1518]: Watching system buttons on /dev/input/event1 (Power Button) May 8 00:37:11.461294 systemd-logind[1518]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 8 00:37:11.462938 unknown[1547]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath May 8 00:37:11.464455 systemd-logind[1518]: New seat seat0. May 8 00:37:11.472377 unknown[1547]: Core dump limit set to -1 May 8 00:37:11.476874 systemd[1]: Started systemd-logind.service - User Login Management. May 8 00:37:11.485412 kernel: NET: Registered PF_VSOCK protocol family May 8 00:37:11.513109 bash[1571]: Updated "/home/core/.ssh/authorized_keys" May 8 00:37:11.518555 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:37:11.519820 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 8 00:37:11.574375 locksmithd[1553]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:37:11.618459 sshd_keygen[1548]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:37:11.651571 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:37:11.657759 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:37:11.666524 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:37:11.666691 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:37:11.672991 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 00:37:11.698940 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:37:11.706700 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:37:11.708292 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 8 00:37:11.708474 systemd[1]: Reached target getty.target - Login Prompts. May 8 00:37:11.720653 containerd[1544]: time="2025-05-08T00:37:11.720567728Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 8 00:37:11.742682 containerd[1544]: time="2025-05-08T00:37:11.742648081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:37:11.744455 containerd[1544]: time="2025-05-08T00:37:11.744435658Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:37:11.744513 containerd[1544]: time="2025-05-08T00:37:11.744505365Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:37:11.744613 containerd[1544]: time="2025-05-08T00:37:11.744548291Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:37:11.744836 containerd[1544]: time="2025-05-08T00:37:11.744739575Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 00:37:11.744836 containerd[1544]: time="2025-05-08T00:37:11.744751565Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 00:37:11.744836 containerd[1544]: time="2025-05-08T00:37:11.744786732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:37:11.744836 containerd[1544]: time="2025-05-08T00:37:11.744794868Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:37:11.745007 containerd[1544]: time="2025-05-08T00:37:11.744996690Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:37:11.745260 containerd[1544]: time="2025-05-08T00:37:11.745063530Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:37:11.745260 containerd[1544]: time="2025-05-08T00:37:11.745075891Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:37:11.745260 containerd[1544]: time="2025-05-08T00:37:11.745082375Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:37:11.745260 containerd[1544]: time="2025-05-08T00:37:11.745131652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:37:11.745260 containerd[1544]: time="2025-05-08T00:37:11.745243509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:37:11.745426 containerd[1544]: time="2025-05-08T00:37:11.745411286Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:37:11.745483 containerd[1544]: time="2025-05-08T00:37:11.745474300Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:37:11.745621 containerd[1544]: time="2025-05-08T00:37:11.745550624Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 00:37:11.745654 containerd[1544]: time="2025-05-08T00:37:11.745646530Z" level=info msg="metadata content store policy set" policy=shared May 8 00:37:11.747852 containerd[1544]: time="2025-05-08T00:37:11.747495565Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:37:11.747852 containerd[1544]: time="2025-05-08T00:37:11.747539817Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:37:11.747852 containerd[1544]: time="2025-05-08T00:37:11.747553148Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:37:11.747852 containerd[1544]: time="2025-05-08T00:37:11.747562624Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:37:11.747852 containerd[1544]: time="2025-05-08T00:37:11.747571703Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:37:11.747852 containerd[1544]: time="2025-05-08T00:37:11.747664923Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 May 8 00:37:11.747852 containerd[1544]: time="2025-05-08T00:37:11.747811567Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:37:11.748014 containerd[1544]: time="2025-05-08T00:37:11.747881835Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 00:37:11.748014 containerd[1544]: time="2025-05-08T00:37:11.747893241Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 00:37:11.748014 containerd[1544]: time="2025-05-08T00:37:11.747900548Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 00:37:11.748014 containerd[1544]: time="2025-05-08T00:37:11.747909150Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:37:11.748014 containerd[1544]: time="2025-05-08T00:37:11.747917710Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:37:11.748014 containerd[1544]: time="2025-05-08T00:37:11.747926338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:37:11.748014 containerd[1544]: time="2025-05-08T00:37:11.747936916Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:37:11.748014 containerd[1544]: time="2025-05-08T00:37:11.747945374Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:37:11.748014 containerd[1544]: time="2025-05-08T00:37:11.747953106Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:37:11.748014 containerd[1544]: time="2025-05-08T00:37:11.747960423Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:37:11.748014 containerd[1544]: time="2025-05-08T00:37:11.747967110Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:37:11.748014 containerd[1544]: time="2025-05-08T00:37:11.747980416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:37:11.748014 containerd[1544]: time="2025-05-08T00:37:11.747988542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:37:11.748014 containerd[1544]: time="2025-05-08T00:37:11.747995542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:37:11.748202 containerd[1544]: time="2025-05-08T00:37:11.748003068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:37:11.748202 containerd[1544]: time="2025-05-08T00:37:11.748010561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:37:11.748202 containerd[1544]: time="2025-05-08T00:37:11.748018332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:37:11.748202 containerd[1544]: time="2025-05-08T00:37:11.748025183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 May 8 00:37:11.748202 containerd[1544]: time="2025-05-08T00:37:11.748032709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:37:11.748202 containerd[1544]: time="2025-05-08T00:37:11.748039908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:37:11.748202 containerd[1544]: time="2025-05-08T00:37:11.748049001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 00:37:11.748202 containerd[1544]: time="2025-05-08T00:37:11.748055773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:37:11.748202 containerd[1544]: time="2025-05-08T00:37:11.748065577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 00:37:11.748202 containerd[1544]: time="2025-05-08T00:37:11.748075419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:37:11.748202 containerd[1544]: time="2025-05-08T00:37:11.748084372Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:37:11.748202 containerd[1544]: time="2025-05-08T00:37:11.748096544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:37:11.748202 containerd[1544]: time="2025-05-08T00:37:11.748103236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:37:11.748202 containerd[1544]: time="2025-05-08T00:37:11.748109427Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:37:11.748394 containerd[1544]: time="2025-05-08T00:37:11.748135217Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:37:11.748394 containerd[1544]: time="2025-05-08T00:37:11.748145679Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:37:11.748394 containerd[1544]: time="2025-05-08T00:37:11.748152107Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:37:11.748394 containerd[1544]: time="2025-05-08T00:37:11.748158734Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:37:11.748394 containerd[1544]: time="2025-05-08T00:37:11.748164967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:37:11.748394 containerd[1544]: time="2025-05-08T00:37:11.748171652Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:37:11.748394 containerd[1544]: time="2025-05-08T00:37:11.748176952Z" level=info msg="NRI interface is disabled by configuration." May 8 00:37:11.748394 containerd[1544]: time="2025-05-08T00:37:11.748184232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 8 00:37:11.748502 containerd[1544]: time="2025-05-08T00:37:11.748381422Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:37:11.748502 containerd[1544]: time="2025-05-08T00:37:11.748418060Z" level=info msg="Connect containerd service" May 8 00:37:11.748502 containerd[1544]: time="2025-05-08T00:37:11.748441945Z" level=info msg="using legacy CRI server" May 8 00:37:11.748502 containerd[1544]: time="2025-05-08T00:37:11.748446736Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:37:11.748502 containerd[1544]: time="2025-05-08T00:37:11.748502422Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:37:11.748963 containerd[1544]: time="2025-05-08T00:37:11.748901521Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:37:11.749651 
containerd[1544]: time="2025-05-08T00:37:11.749002885Z" level=info msg="Start subscribing containerd event" May 8 00:37:11.749651 containerd[1544]: time="2025-05-08T00:37:11.749039770Z" level=info msg="Start recovering state" May 8 00:37:11.749651 containerd[1544]: time="2025-05-08T00:37:11.749090429Z" level=info msg="Start event monitor" May 8 00:37:11.749651 containerd[1544]: time="2025-05-08T00:37:11.749095398Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:37:11.749651 containerd[1544]: time="2025-05-08T00:37:11.749102939Z" level=info msg="Start snapshots syncer" May 8 00:37:11.749651 containerd[1544]: time="2025-05-08T00:37:11.749108743Z" level=info msg="Start cni network conf syncer for default" May 8 00:37:11.749651 containerd[1544]: time="2025-05-08T00:37:11.749113688Z" level=info msg="Start streaming server" May 8 00:37:11.749651 containerd[1544]: time="2025-05-08T00:37:11.749119914Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:37:11.749201 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:37:11.749860 containerd[1544]: time="2025-05-08T00:37:11.749701574Z" level=info msg="containerd successfully booted in 0.029600s" May 8 00:37:11.879450 tar[1531]: linux-amd64/README.md May 8 00:37:11.893020 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 00:37:12.671753 systemd-networkd[1454]: ens192: Gained IPv6LL May 8 00:37:12.672177 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. May 8 00:37:12.672914 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:37:12.673678 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:37:12.680783 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... May 8 00:37:12.696743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:37:12.698792 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 00:37:12.743634 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:37:12.749063 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 00:37:12.749185 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. May 8 00:37:12.749540 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 00:37:14.741694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:37:14.742082 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 00:37:14.742385 systemd[1]: Startup finished in 1.040s (kernel) + 5.739s (initrd) + 5.400s (userspace) = 12.180s. May 8 00:37:14.748330 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:37:14.778266 login[1628]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying May 8 00:37:14.778414 login[1623]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 8 00:37:14.785044 systemd-logind[1518]: New session 1 of user core. May 8 00:37:14.785880 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:37:14.794837 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:37:14.809630 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
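The containerd error above — "no network config found in /etc/cni/net.d" — is expected at this point in boot: the CRI config dump sets NetworkPluginConfDir:/etc/cni/net.d and NetworkPluginMaxConfNum:1, and no network add-on has installed a config yet, so the cni conf syncer just started will keep watching until one appears. As a hedged illustration only (the file name, network name, and subnet below are assumptions, not values from this host), a minimal bridge-based CNI config would look like:

    # Illustrative sketch only -- nothing on this host wrote this file.
    # containerd loads at most one config from this directory (NetworkPluginMaxConfNum:1).
    cat <<'EOF' > /etc/cni/net.d/10-example.conflist
    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/24",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        }
      ]
    }
    EOF

In a kubeadm-style bootstrap like this one, that file normally arrives later from the pod-network add-on rather than being written by hand.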
May 8 00:37:14.814790 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 00:37:14.820513 (systemd)[1693]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:37:14.935024 systemd[1693]: Queued start job for default target default.target. May 8 00:37:14.941402 systemd[1693]: Created slice app.slice - User Application Slice. May 8 00:37:14.941425 systemd[1693]: Reached target paths.target - Paths. May 8 00:37:14.941434 systemd[1693]: Reached target timers.target - Timers. May 8 00:37:14.942209 systemd[1693]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 00:37:14.950149 systemd[1693]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:37:14.950185 systemd[1693]: Reached target sockets.target - Sockets. May 8 00:37:14.950194 systemd[1693]: Reached target basic.target - Basic System. May 8 00:37:14.950216 systemd[1693]: Reached target default.target - Main User Target. May 8 00:37:14.950232 systemd[1693]: Startup finished in 126ms. May 8 00:37:14.950418 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:37:14.954681 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:37:15.780010 login[1628]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 8 00:37:15.782718 systemd-logind[1518]: New session 2 of user core. May 8 00:37:15.792755 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 00:37:15.925806 kubelet[1686]: E0508 00:37:15.925744 1686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:37:15.927279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:37:15.927366 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:37:26.177587 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:37:26.186780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:37:26.522733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:37:26.532880 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:37:26.627259 kubelet[1735]: E0508 00:37:26.627132 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:37:26.630118 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:37:26.630223 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:37:36.880522 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 00:37:36.885754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:37:37.105228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 8 00:37:37.108062 (kubelet)[1750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:37:37.137072 kubelet[1750]: E0508 00:37:37.136981 1750 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:37:37.138568 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:37:37.138666 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:37:41.571149 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:37:41.573735 systemd[1]: Started sshd@0-139.178.70.100:22-139.178.68.195:53658.service - OpenSSH per-connection server daemon (139.178.68.195:53658). May 8 00:37:41.601572 sshd[1758]: Accepted publickey for core from 139.178.68.195 port 53658 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:37:41.602283 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:37:41.605461 systemd-logind[1518]: New session 3 of user core. May 8 00:37:41.610705 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 00:37:41.664779 systemd[1]: Started sshd@1-139.178.70.100:22-139.178.68.195:53668.service - OpenSSH per-connection server daemon (139.178.68.195:53668). May 8 00:37:41.687894 sshd[1763]: Accepted publickey for core from 139.178.68.195 port 53668 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:37:41.688905 sshd[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:37:41.692508 systemd-logind[1518]: New session 4 of user core. May 8 00:37:41.696710 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:37:41.748134 sshd[1763]: pam_unix(sshd:session): session closed for user core May 8 00:37:41.752988 systemd[1]: sshd@1-139.178.70.100:22-139.178.68.195:53668.service: Deactivated successfully. May 8 00:37:41.753756 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:37:41.754480 systemd-logind[1518]: Session 4 logged out. Waiting for processes to exit. May 8 00:37:41.757805 systemd[1]: Started sshd@2-139.178.70.100:22-139.178.68.195:53684.service - OpenSSH per-connection server daemon (139.178.68.195:53684). May 8 00:37:41.758769 systemd-logind[1518]: Removed session 4. May 8 00:37:41.779793 sshd[1770]: Accepted publickey for core from 139.178.68.195 port 53684 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:37:41.780609 sshd[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:37:41.783715 systemd-logind[1518]: New session 5 of user core. May 8 00:37:41.793688 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 00:37:41.840027 sshd[1770]: pam_unix(sshd:session): session closed for user core May 8 00:37:41.849277 systemd[1]: sshd@2-139.178.70.100:22-139.178.68.195:53684.service: Deactivated successfully. May 8 00:37:41.850056 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:37:41.850781 systemd-logind[1518]: Session 5 logged out. Waiting for processes to exit. May 8 00:37:41.851461 systemd[1]: Started sshd@3-139.178.70.100:22-139.178.68.195:53686.service - OpenSSH per-connection server daemon (139.178.68.195:53686). 
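The kubelet failures above all have the same cause, stated in the error itself: /var/lib/kubelet/config.yaml does not exist yet. kubeadm writes that file during `kubeadm init` or `kubeadm join`, so until the node is initialized the unit simply crash-loops, restarting roughly every ten seconds as the restart counters show. For orientation only, a minimal sketch of the kind of KubeletConfiguration kubeadm generates — the specific values here are assumptions, not this host's eventual file:

    # /var/lib/kubelet/config.yaml -- hypothetical sketch; kubeadm writes the real one.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # must match the container runtime's cgroup driver
    staticPodPath: /etc/kubernetes/manifests   # where control-plane pod manifests land
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt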
May 8 00:37:41.853111 systemd-logind[1518]: Removed session 5. May 8 00:37:41.875723 sshd[1777]: Accepted publickey for core from 139.178.68.195 port 53686 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:37:41.876469 sshd[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:37:41.879035 systemd-logind[1518]: New session 6 of user core. May 8 00:37:41.887722 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 00:37:41.936196 sshd[1777]: pam_unix(sshd:session): session closed for user core May 8 00:37:41.946552 systemd[1]: sshd@3-139.178.70.100:22-139.178.68.195:53686.service: Deactivated successfully. May 8 00:37:41.947544 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:37:41.948376 systemd-logind[1518]: Session 6 logged out. Waiting for processes to exit. May 8 00:37:41.949242 systemd[1]: Started sshd@4-139.178.70.100:22-139.178.68.195:53702.service - OpenSSH per-connection server daemon (139.178.68.195:53702). May 8 00:37:41.950908 systemd-logind[1518]: Removed session 6. May 8 00:37:41.973812 sshd[1784]: Accepted publickey for core from 139.178.68.195 port 53702 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:37:41.974496 sshd[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:37:41.976667 systemd-logind[1518]: New session 7 of user core. May 8 00:37:41.983692 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 00:37:42.038573 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:37:42.038744 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:37:42.047884 sudo[1787]: pam_unix(sudo:session): session closed for user root May 8 00:37:42.048775 sshd[1784]: pam_unix(sshd:session): session closed for user core May 8 00:37:42.063499 systemd[1]: sshd@4-139.178.70.100:22-139.178.68.195:53702.service: Deactivated successfully. May 8 00:37:42.064243 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:37:42.065006 systemd-logind[1518]: Session 7 logged out. Waiting for processes to exit. May 8 00:37:42.065735 systemd[1]: Started sshd@5-139.178.70.100:22-139.178.68.195:53718.service - OpenSSH per-connection server daemon (139.178.68.195:53718). May 8 00:37:42.066771 systemd-logind[1518]: Removed session 7. May 8 00:37:42.096022 sshd[1792]: Accepted publickey for core from 139.178.68.195 port 53718 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:37:42.097126 sshd[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:37:42.100343 systemd-logind[1518]: New session 8 of user core. May 8 00:37:42.105719 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:37:42.154005 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:37:42.154169 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:37:42.155974 sudo[1796]: pam_unix(sudo:session): session closed for user root May 8 00:37:42.158719 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 8 00:37:42.158983 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:37:42.168780 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
May 8 00:37:42.169404 auditctl[1799]: No rules May 8 00:37:42.169653 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:37:42.169769 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 8 00:37:42.171300 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:37:42.185786 augenrules[1817]: No rules May 8 00:37:42.186486 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:37:42.187237 sudo[1795]: pam_unix(sudo:session): session closed for user root May 8 00:37:42.188840 sshd[1792]: pam_unix(sshd:session): session closed for user core May 8 00:37:42.192018 systemd[1]: sshd@5-139.178.70.100:22-139.178.68.195:53718.service: Deactivated successfully. May 8 00:37:42.193281 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:37:42.194157 systemd-logind[1518]: Session 8 logged out. Waiting for processes to exit. May 8 00:37:42.197896 systemd[1]: Started sshd@6-139.178.70.100:22-139.178.68.195:53730.service - OpenSSH per-connection server daemon (139.178.68.195:53730). May 8 00:37:42.199631 systemd-logind[1518]: Removed session 8. May 8 00:37:42.220557 sshd[1825]: Accepted publickey for core from 139.178.68.195 port 53730 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:37:42.221238 sshd[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:37:42.223782 systemd-logind[1518]: New session 9 of user core. May 8 00:37:42.230677 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 00:37:42.277749 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:37:42.277904 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:37:42.547784 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 00:37:42.547881 (dockerd)[1844]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 00:37:42.880251 dockerd[1844]: time="2025-05-08T00:37:42.879938588Z" level=info msg="Starting up" May 8 00:37:42.983484 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3657236699-merged.mount: Deactivated successfully. May 8 00:37:43.002852 dockerd[1844]: time="2025-05-08T00:37:43.002827584Z" level=info msg="Loading containers: start." May 8 00:37:43.100624 kernel: Initializing XFRM netlink socket May 8 00:39:03.215001 systemd-timesyncd[1429]: Contacted time server 85.209.17.10:123 (2.flatcar.pool.ntp.org). May 8 00:39:03.215031 systemd-timesyncd[1429]: Initial clock synchronization to Thu 2025-05-08 00:39:03.214665 UTC. May 8 00:39:03.215148 systemd-resolved[1412]: Clock change detected. Flushing caches. May 8 00:39:03.259455 systemd-networkd[1454]: docker0: Link UP May 8 00:39:03.270902 dockerd[1844]: time="2025-05-08T00:39:03.270868718Z" level=info msg="Loading containers: done." 
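The timestamps jump from 00:37 to 00:39 between the two dockerd lines above, and the log itself explains why: systemd-timesyncd reached an NTP server (2.flatcar.pool.ntp.org) at that moment and stepped the clock forward, and systemd-resolved flushed its caches in response; nothing actually took eighty seconds. On a live host the same state can be inspected with standard systemd tooling (assuming systemd-timesyncd is the NTP client, as it is here):

    # Current sync state, selected server, and offset.
    timedatectl timesync-status
    # The synchronization events as they were logged.
    journalctl -u systemd-timesyncd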
May 8 00:39:03.282974 dockerd[1844]: time="2025-05-08T00:39:03.282856886Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:39:03.282974 dockerd[1844]: time="2025-05-08T00:39:03.282931727Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 8 00:39:03.283082 dockerd[1844]: time="2025-05-08T00:39:03.283051391Z" level=info msg="Daemon has completed initialization" May 8 00:39:03.327081 dockerd[1844]: time="2025-05-08T00:39:03.326974150Z" level=info msg="API listen on /run/docker.sock" May 8 00:39:03.327225 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 00:39:04.066156 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2329044623-merged.mount: Deactivated successfully. May 8 00:39:04.302583 containerd[1544]: time="2025-05-08T00:39:04.302472350Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 8 00:39:04.973060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4013183110.mount: Deactivated successfully. May 8 00:39:06.096501 containerd[1544]: time="2025-05-08T00:39:06.095930350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:06.097161 containerd[1544]: time="2025-05-08T00:39:06.097100710Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 8 00:39:06.097991 containerd[1544]: time="2025-05-08T00:39:06.097530686Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:06.099072 containerd[1544]: time="2025-05-08T00:39:06.099054798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:06.099842 containerd[1544]: time="2025-05-08T00:39:06.099823983Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 1.797323401s" May 8 00:39:06.099878 containerd[1544]: time="2025-05-08T00:39:06.099843261Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 8 00:39:06.100249 containerd[1544]: time="2025-05-08T00:39:06.100224474Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 8 00:39:07.279389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 8 00:39:07.284137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 8 00:39:07.810766 containerd[1544]: time="2025-05-08T00:39:07.810683376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:07.812064 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:39:07.816199 (kubelet)[2049]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:39:07.850831 kubelet[2049]: E0508 00:39:07.850748 2049 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:39:07.852307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:39:07.852400 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:39:08.224135 containerd[1544]: time="2025-05-08T00:39:08.224054838Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 8 00:39:08.235804 containerd[1544]: time="2025-05-08T00:39:08.235753392Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:08.248431 containerd[1544]: time="2025-05-08T00:39:08.248387388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:08.249134 containerd[1544]: time="2025-05-08T00:39:08.248819483Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.148574776s" May 8 00:39:08.249134 containerd[1544]: time="2025-05-08T00:39:08.248846750Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 8 00:39:08.249220 containerd[1544]: time="2025-05-08T00:39:08.249192593Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 8 00:39:10.041283 containerd[1544]: time="2025-05-08T00:39:10.041223013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:10.041989 containerd[1544]: time="2025-05-08T00:39:10.041961697Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 8 00:39:10.042711 containerd[1544]: time="2025-05-08T00:39:10.042293808Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:10.043961 containerd[1544]: time="2025-05-08T00:39:10.043916700Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:10.044673 containerd[1544]: time="2025-05-08T00:39:10.044587493Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.79537652s" May 8 00:39:10.044673 containerd[1544]: time="2025-05-08T00:39:10.044608047Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 8 00:39:10.045622 containerd[1544]: time="2025-05-08T00:39:10.045404582Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 8 00:39:10.949158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount207735962.mount: Deactivated successfully. May 8 00:39:11.639617 containerd[1544]: time="2025-05-08T00:39:11.639578906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:11.649119 containerd[1544]: time="2025-05-08T00:39:11.649078501Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 8 00:39:11.657785 containerd[1544]: time="2025-05-08T00:39:11.657747066Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:11.664861 containerd[1544]: time="2025-05-08T00:39:11.664828513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:11.665466 containerd[1544]: time="2025-05-08T00:39:11.665217904Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.619793939s" May 8 00:39:11.665466 containerd[1544]: time="2025-05-08T00:39:11.665240506Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 8 00:39:11.665604 containerd[1544]: time="2025-05-08T00:39:11.665582026Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 8 00:39:12.309395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4196399859.mount: Deactivated successfully. 
May 8 00:39:12.979800 containerd[1544]: time="2025-05-08T00:39:12.979764210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:12.980899 containerd[1544]: time="2025-05-08T00:39:12.980878869Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 8 00:39:12.981972 containerd[1544]: time="2025-05-08T00:39:12.981355055Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:12.982828 containerd[1544]: time="2025-05-08T00:39:12.982803081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:12.983557 containerd[1544]: time="2025-05-08T00:39:12.983485157Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.31787963s" May 8 00:39:12.983557 containerd[1544]: time="2025-05-08T00:39:12.983502797Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 8 00:39:12.983933 containerd[1544]: time="2025-05-08T00:39:12.983822568Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 8 00:39:13.642398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3827295871.mount: Deactivated successfully. 
May 8 00:39:13.644914 containerd[1544]: time="2025-05-08T00:39:13.644709599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:13.645136 containerd[1544]: time="2025-05-08T00:39:13.645111815Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 8 00:39:13.645725 containerd[1544]: time="2025-05-08T00:39:13.645212218Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:13.646486 containerd[1544]: time="2025-05-08T00:39:13.646464524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:13.647020 containerd[1544]: time="2025-05-08T00:39:13.646904745Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 663.066885ms" May 8 00:39:13.647020 containerd[1544]: time="2025-05-08T00:39:13.646923524Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 8 00:39:13.647410 containerd[1544]: time="2025-05-08T00:39:13.647329029Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 8 00:39:14.182493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3022549297.mount: Deactivated successfully. May 8 00:39:16.653607 update_engine[1521]: I20250508 00:39:16.653551 1521 update_attempter.cc:509] Updating boot flags... 
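The PullImage/Pulled lines above show the Kubernetes control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, with etcd still in flight) being fetched through containerd's CRI plugin, each pull bracketed by a temporary containerd mount. A hedged way to list the result on such a host — the socket path below is the ContainerdEndpoint from the config dump earlier, and both commands are standard tooling:

    # Through the CRI API:
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
    # Or directly from containerd's k8s.io namespace:
    ctr --namespace k8s.io images ls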
May 8 00:39:16.728956 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2181) May 8 00:39:16.847079 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2180) May 8 00:39:17.598365 containerd[1544]: time="2025-05-08T00:39:17.598336301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:17.601961 containerd[1544]: time="2025-05-08T00:39:17.601242886Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 8 00:39:17.603503 containerd[1544]: time="2025-05-08T00:39:17.603488427Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:17.606571 containerd[1544]: time="2025-05-08T00:39:17.606554082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:17.607133 containerd[1544]: time="2025-05-08T00:39:17.607113765Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.959768398s" May 8 00:39:17.607169 containerd[1544]: time="2025-05-08T00:39:17.607138154Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 8 00:39:18.028884 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 8 00:39:18.037084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:39:18.758094 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:39:18.763042 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:39:18.807554 kubelet[2223]: E0508 00:39:18.806064 2223 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:39:18.807839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:39:18.807918 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:39:19.652869 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:39:19.664314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:39:19.683485 systemd[1]: Reloading requested from client PID 2238 ('systemctl') (unit session-9.scope)... May 8 00:39:19.683497 systemd[1]: Reloading... May 8 00:39:19.728965 zram_generator::config[2275]: No configuration found. May 8 00:39:19.791260 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." 
| grep -Po "inet \K[\d.]+") May 8 00:39:19.806364 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:39:19.851458 systemd[1]: Reloading finished in 167 ms. May 8 00:39:19.958026 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 8 00:39:19.958093 systemd[1]: kubelet.service: Failed with result 'signal'. May 8 00:39:19.958322 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:39:19.962187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:39:20.287365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:39:20.292159 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:39:20.347027 kubelet[2343]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:39:20.368207 kubelet[2343]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 00:39:20.368287 kubelet[2343]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:39:20.368420 kubelet[2343]: I0508 00:39:20.368401 2343 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:39:20.709655 kubelet[2343]: I0508 00:39:20.709492 2343 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 00:39:20.709655 kubelet[2343]: I0508 00:39:20.709534 2343 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:39:20.709837 kubelet[2343]: I0508 00:39:20.709755 2343 server.go:954] "Client rotation is on, will bootstrap in background" May 8 00:39:21.043491 kubelet[2343]: E0508 00:39:21.043437 2343 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" May 8 00:39:21.043715 kubelet[2343]: I0508 00:39:21.043626 2343 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:39:21.071795 kubelet[2343]: E0508 00:39:21.071751 2343 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:39:21.071795 kubelet[2343]: I0508 00:39:21.071794 2343 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:39:21.077628 kubelet[2343]: I0508 00:39:21.077605 2343 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:39:21.081203 kubelet[2343]: I0508 00:39:21.081144 2343 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:39:21.081338 kubelet[2343]: I0508 00:39:21.081202 2343 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:39:21.082855 kubelet[2343]: I0508 00:39:21.082837 2343 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:39:21.082855 kubelet[2343]: I0508 00:39:21.082854 2343 container_manager_linux.go:304] "Creating device plugin manager" May 8 00:39:21.083001 kubelet[2343]: I0508 00:39:21.082987 2343 state_mem.go:36] "Initialized new in-memory state store" May 8 00:39:21.086006 kubelet[2343]: I0508 00:39:21.085991 2343 kubelet.go:446] "Attempting to sync node with API server" May 8 00:39:21.086055 kubelet[2343]: I0508 00:39:21.086013 2343 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:39:21.086055 kubelet[2343]: I0508 00:39:21.086028 2343 kubelet.go:352] "Adding apiserver pod source" May 8 00:39:21.086055 kubelet[2343]: I0508 00:39:21.086035 2343 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:39:21.142870 kubelet[2343]: W0508 00:39:21.142571 2343 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused May 8 00:39:21.142870 kubelet[2343]: E0508 00:39:21.142622 2343 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" May 8 00:39:21.142870 kubelet[2343]: W0508 00:39:21.142830 2343 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused May 8 00:39:21.142870 kubelet[2343]: E0508 00:39:21.142854 2343 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" May 8 00:39:21.143320 kubelet[2343]: I0508 00:39:21.143200 2343 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:39:21.166838 kubelet[2343]: I0508 00:39:21.166776 2343 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:39:21.173124 kubelet[2343]: W0508 00:39:21.173095 2343 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:39:21.173661 kubelet[2343]: I0508 00:39:21.173643 2343 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 00:39:21.173714 kubelet[2343]: I0508 00:39:21.173673 2343 server.go:1287] "Started kubelet" May 8 00:39:21.191448 kubelet[2343]: I0508 00:39:21.191095 2343 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:39:21.214660 kubelet[2343]: I0508 00:39:21.214640 2343 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:39:21.223197 kubelet[2343]: I0508 00:39:21.222809 2343 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:39:21.223197 kubelet[2343]: I0508 00:39:21.223081 2343 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:39:21.223986 kubelet[2343]: I0508 00:39:21.223539 2343 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:39:21.224340 kubelet[2343]: I0508 00:39:21.224330 2343 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 00:39:21.225030 kubelet[2343]: E0508 00:39:21.224533 2343 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:39:21.228153 kubelet[2343]: I0508 00:39:21.228140 2343 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:39:21.228266 kubelet[2343]: I0508 00:39:21.228257 2343 reconciler.go:26] "Reconciler: start to sync state" May 8 00:39:21.230582 kubelet[2343]: W0508 00:39:21.230555 2343 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused May 8 00:39:21.230679 kubelet[2343]: E0508 00:39:21.230665 2343 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" May 8 00:39:21.230787 kubelet[2343]: E0508 00:39:21.230771 2343 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="200ms" May 8 00:39:21.239300 kubelet[2343]: I0508 00:39:21.239282 2343 server.go:490] "Adding debug handlers to kubelet server" May 8 00:39:21.241813 kubelet[2343]: I0508 00:39:21.241790 2343 factory.go:221] Registration of the systemd container factory successfully May 8 00:39:21.241900 kubelet[2343]: I0508 00:39:21.241881 2343 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:39:21.263119 kubelet[2343]: I0508 00:39:21.262478 2343 factory.go:221] Registration of the containerd container factory successfully May 8 00:39:21.265843 kubelet[2343]: E0508 00:39:21.249146 2343 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.100:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d66589772c369 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:39:21.173656425 +0000 UTC m=+0.878808917,LastTimestamp:2025-05-08 00:39:21.173656425 +0000 UTC m=+0.878808917,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:39:21.270204 kubelet[2343]: E0508 00:39:21.269579 2343 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:39:21.273678 kubelet[2343]: I0508 00:39:21.273642 2343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:39:21.274521 kubelet[2343]: I0508 00:39:21.274500 2343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:39:21.274521 kubelet[2343]: I0508 00:39:21.274517 2343 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:39:21.274587 kubelet[2343]: I0508 00:39:21.274531 2343 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
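The node config dump above reports "CgroupDriver":"systemd", which agrees with the SystemdCgroup:true option under the runc runtime in containerd's CRI configuration logged at startup; the kubelet and the runtime must use the same cgroup driver or pods fail in opaque ways. For reference, in containerd 1.7 (the version this kubelet just detected) that option lives in config.toml — a fragment sketch, not this host's actual file:

    # /etc/containerd/config.toml -- relevant fragment only.
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true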
May 8 00:39:21.274587 kubelet[2343]: I0508 00:39:21.274536 2343 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:39:21.274587 kubelet[2343]: E0508 00:39:21.274572 2343 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:39:21.279694 kubelet[2343]: W0508 00:39:21.279641 2343 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused May 8 00:39:21.279694 kubelet[2343]: E0508 00:39:21.279668 2343 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" May 8 00:39:21.291132 kubelet[2343]: I0508 00:39:21.291112 2343 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 00:39:21.291132 kubelet[2343]: I0508 00:39:21.291126 2343 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:39:21.291132 kubelet[2343]: I0508 00:39:21.291138 2343 state_mem.go:36] "Initialized new in-memory state store" May 8 00:39:21.304642 kubelet[2343]: I0508 00:39:21.303846 2343 policy_none.go:49] "None policy: Start" May 8 00:39:21.304642 kubelet[2343]: I0508 00:39:21.303861 2343 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:39:21.304642 kubelet[2343]: I0508 00:39:21.303870 2343 state_mem.go:35] "Initializing new in-memory state store" May 8 00:39:21.323355 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:39:21.324935 kubelet[2343]: E0508 00:39:21.324913 2343 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:39:21.336384 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 8 00:39:21.347049 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 00:39:21.348288 kubelet[2343]: I0508 00:39:21.348099 2343 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:39:21.348288 kubelet[2343]: I0508 00:39:21.348222 2343 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:39:21.348288 kubelet[2343]: I0508 00:39:21.348229 2343 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:39:21.348921 kubelet[2343]: I0508 00:39:21.348531 2343 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:39:21.349478 kubelet[2343]: E0508 00:39:21.349398 2343 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 8 00:39:21.349520 kubelet[2343]: E0508 00:39:21.349487 2343 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:39:21.380510 systemd[1]: Created slice kubepods-burstable-pod869b8e471e64cd87cf7890edec21cfa0.slice - libcontainer container kubepods-burstable-pod869b8e471e64cd87cf7890edec21cfa0.slice. 
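Every "dial tcp 139.178.70.100:6443: connect: connection refused" above is the kubelet failing to reach an API server that does not exist yet — and it is this kubelet that will create it, by launching the control-plane static pods whose kubepods slices systemd just set up. A few hedged commands for watching that bootstrap converge (the address is the one in the log; the manifests are presumably written by kubeadm under the staticPodPath):

    # The static pod manifests the kubelet is watching:
    ls /etc/kubernetes/manifests
    # Pod sandboxes as containerd creates them:
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
    # Once the kube-apiserver static pod is up, this starts answering:
    curl -k https://139.178.70.100:6443/healthz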
May 8 00:39:21.388506 kubelet[2343]: E0508 00:39:21.388398 2343 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:39:21.390577 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 8 00:39:21.391897 kubelet[2343]: E0508 00:39:21.391804 2343 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:39:21.393289 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. May 8 00:39:21.394248 kubelet[2343]: E0508 00:39:21.394233 2343 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:39:21.428992 kubelet[2343]: I0508 00:39:21.428875 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/869b8e471e64cd87cf7890edec21cfa0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"869b8e471e64cd87cf7890edec21cfa0\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:21.428992 kubelet[2343]: I0508 00:39:21.428914 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:21.428992 kubelet[2343]: I0508 00:39:21.428930 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:21.428992 kubelet[2343]: I0508 00:39:21.428955 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:21.428992 kubelet[2343]: I0508 00:39:21.428968 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:21.429189 kubelet[2343]: I0508 00:39:21.428979 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/869b8e471e64cd87cf7890edec21cfa0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"869b8e471e64cd87cf7890edec21cfa0\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:21.429189 kubelet[2343]: I0508 00:39:21.428991 2343 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/869b8e471e64cd87cf7890edec21cfa0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"869b8e471e64cd87cf7890edec21cfa0\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:21.429189 kubelet[2343]: I0508 00:39:21.429003 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:21.429189 kubelet[2343]: I0508 00:39:21.429019 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 8 00:39:21.431190 kubelet[2343]: E0508 00:39:21.431158 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="400ms" May 8 00:39:21.449406 kubelet[2343]: I0508 00:39:21.449214 2343 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:39:21.449476 kubelet[2343]: E0508 00:39:21.449439 2343 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" May 8 00:39:21.651027 kubelet[2343]: I0508 00:39:21.650686 2343 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:39:21.651027 kubelet[2343]: E0508 00:39:21.650917 2343 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" May 8 00:39:21.691581 containerd[1544]: time="2025-05-08T00:39:21.691512060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:869b8e471e64cd87cf7890edec21cfa0,Namespace:kube-system,Attempt:0,}" May 8 00:39:21.703003 containerd[1544]: time="2025-05-08T00:39:21.702836240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 8 00:39:21.703003 containerd[1544]: time="2025-05-08T00:39:21.702835837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 8 00:39:21.832250 kubelet[2343]: E0508 00:39:21.832217 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="800ms" May 8 00:39:22.052517 kubelet[2343]: I0508 00:39:22.052057 2343 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:39:22.052517 kubelet[2343]: E0508 00:39:22.052301 2343 kubelet_node_status.go:108] "Unable to register node with API server" 
err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" May 8 00:39:22.337370 kubelet[2343]: W0508 00:39:22.337268 2343 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused May 8 00:39:22.337626 kubelet[2343]: E0508 00:39:22.337602 2343 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" May 8 00:39:22.431011 kubelet[2343]: W0508 00:39:22.430927 2343 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused May 8 00:39:22.431011 kubelet[2343]: E0508 00:39:22.430994 2343 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" May 8 00:39:22.520775 kubelet[2343]: W0508 00:39:22.520705 2343 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused May 8 00:39:22.520775 kubelet[2343]: E0508 00:39:22.520753 2343 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" May 8 00:39:22.545031 kubelet[2343]: W0508 00:39:22.544940 2343 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused May 8 00:39:22.545031 kubelet[2343]: E0508 00:39:22.545010 2343 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" May 8 00:39:22.633082 kubelet[2343]: E0508 00:39:22.632998 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="1.6s" May 8 00:39:22.853226 kubelet[2343]: I0508 00:39:22.853188 2343 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:39:22.853562 kubelet[2343]: E0508 00:39:22.853546 2343 
kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" May 8 00:39:22.888274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3749875223.mount: Deactivated successfully. May 8 00:39:22.890974 containerd[1544]: time="2025-05-08T00:39:22.890453653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:22.891471 containerd[1544]: time="2025-05-08T00:39:22.891421416Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 8 00:39:22.891873 containerd[1544]: time="2025-05-08T00:39:22.891858946Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:22.893399 containerd[1544]: time="2025-05-08T00:39:22.893364220Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:22.893863 containerd[1544]: time="2025-05-08T00:39:22.893830249Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:39:22.894090 containerd[1544]: time="2025-05-08T00:39:22.894045080Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:22.894385 containerd[1544]: time="2025-05-08T00:39:22.894259331Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:39:22.894547 containerd[1544]: time="2025-05-08T00:39:22.894530615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:22.896114 containerd[1544]: time="2025-05-08T00:39:22.896100756Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.204516221s" May 8 00:39:22.897358 containerd[1544]: time="2025-05-08T00:39:22.897301475Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.194389099s" May 8 00:39:22.898701 containerd[1544]: time="2025-05-08T00:39:22.898556846Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.195644731s" May 8 00:39:23.001973 
containerd[1544]: time="2025-05-08T00:39:23.001883557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:23.001973 containerd[1544]: time="2025-05-08T00:39:23.001927074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:23.001973 containerd[1544]: time="2025-05-08T00:39:23.001936728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:23.002774 containerd[1544]: time="2025-05-08T00:39:23.002735412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:23.002977 containerd[1544]: time="2025-05-08T00:39:23.002893397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:23.003045 containerd[1544]: time="2025-05-08T00:39:23.003027300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:23.003113 containerd[1544]: time="2025-05-08T00:39:23.003099409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:23.003324 containerd[1544]: time="2025-05-08T00:39:23.003252674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:23.011305 containerd[1544]: time="2025-05-08T00:39:23.011195886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:23.011305 containerd[1544]: time="2025-05-08T00:39:23.011228031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:23.011305 containerd[1544]: time="2025-05-08T00:39:23.011234878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:23.011305 containerd[1544]: time="2025-05-08T00:39:23.011279314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:23.020209 systemd[1]: Started cri-containerd-a1fad2ba690371dbf6f756f2c80edeede8374ece799d98c5ff980449cae3f7e1.scope - libcontainer container a1fad2ba690371dbf6f756f2c80edeede8374ece799d98c5ff980449cae3f7e1. May 8 00:39:23.022672 systemd[1]: Started cri-containerd-018b3f9da24ae898d9fdd8a532349a5097cb781fa5f7b59fb6cdc4259092d92e.scope - libcontainer container 018b3f9da24ae898d9fdd8a532349a5097cb781fa5f7b59fb6cdc4259092d92e. May 8 00:39:23.038051 systemd[1]: Started cri-containerd-1178a18e4a6cbd3090608518cdee45ef5de595f398a97c3db07d8d9ef9e83df1.scope - libcontainer container 1178a18e4a6cbd3090608518cdee45ef5de595f398a97c3db07d8d9ef9e83df1. 
May 8 00:39:23.071098 containerd[1544]: time="2025-05-08T00:39:23.071059326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"1178a18e4a6cbd3090608518cdee45ef5de595f398a97c3db07d8d9ef9e83df1\"" May 8 00:39:23.078285 containerd[1544]: time="2025-05-08T00:39:23.078212342Z" level=info msg="CreateContainer within sandbox \"1178a18e4a6cbd3090608518cdee45ef5de595f398a97c3db07d8d9ef9e83df1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:39:23.078608 containerd[1544]: time="2025-05-08T00:39:23.078560676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:869b8e471e64cd87cf7890edec21cfa0,Namespace:kube-system,Attempt:0,} returns sandbox id \"018b3f9da24ae898d9fdd8a532349a5097cb781fa5f7b59fb6cdc4259092d92e\"" May 8 00:39:23.083627 containerd[1544]: time="2025-05-08T00:39:23.083605999Z" level=info msg="CreateContainer within sandbox \"018b3f9da24ae898d9fdd8a532349a5097cb781fa5f7b59fb6cdc4259092d92e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:39:23.084317 containerd[1544]: time="2025-05-08T00:39:23.083908157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1fad2ba690371dbf6f756f2c80edeede8374ece799d98c5ff980449cae3f7e1\"" May 8 00:39:23.086822 containerd[1544]: time="2025-05-08T00:39:23.086783765Z" level=info msg="CreateContainer within sandbox \"a1fad2ba690371dbf6f756f2c80edeede8374ece799d98c5ff980449cae3f7e1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:39:23.092864 containerd[1544]: time="2025-05-08T00:39:23.092830728Z" level=info msg="CreateContainer within sandbox \"018b3f9da24ae898d9fdd8a532349a5097cb781fa5f7b59fb6cdc4259092d92e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8d5b48092de926ec6099ccdc0a3cea9effa362f4a5fc41c22171596119d96330\"" May 8 00:39:23.093276 containerd[1544]: time="2025-05-08T00:39:23.093260104Z" level=info msg="StartContainer for \"8d5b48092de926ec6099ccdc0a3cea9effa362f4a5fc41c22171596119d96330\"" May 8 00:39:23.094557 containerd[1544]: time="2025-05-08T00:39:23.094506860Z" level=info msg="CreateContainer within sandbox \"a1fad2ba690371dbf6f756f2c80edeede8374ece799d98c5ff980449cae3f7e1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1c63ca376091e23673681c95162225dd5192c7e641fa9e701419f3b44b7109bf\"" May 8 00:39:23.094801 containerd[1544]: time="2025-05-08T00:39:23.094773957Z" level=info msg="StartContainer for \"1c63ca376091e23673681c95162225dd5192c7e641fa9e701419f3b44b7109bf\"" May 8 00:39:23.096951 containerd[1544]: time="2025-05-08T00:39:23.096638036Z" level=info msg="CreateContainer within sandbox \"1178a18e4a6cbd3090608518cdee45ef5de595f398a97c3db07d8d9ef9e83df1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"562b76ab6026f2208903004a31dd75420f97b8876a221721615d98cd311c71e9\"" May 8 00:39:23.096951 containerd[1544]: time="2025-05-08T00:39:23.096907148Z" level=info msg="StartContainer for \"562b76ab6026f2208903004a31dd75420f97b8876a221721615d98cd311c71e9\"" May 8 00:39:23.118080 systemd[1]: Started cri-containerd-562b76ab6026f2208903004a31dd75420f97b8876a221721615d98cd311c71e9.scope - libcontainer container 
562b76ab6026f2208903004a31dd75420f97b8876a221721615d98cd311c71e9. May 8 00:39:23.124110 systemd[1]: Started cri-containerd-1c63ca376091e23673681c95162225dd5192c7e641fa9e701419f3b44b7109bf.scope - libcontainer container 1c63ca376091e23673681c95162225dd5192c7e641fa9e701419f3b44b7109bf. May 8 00:39:23.126013 systemd[1]: Started cri-containerd-8d5b48092de926ec6099ccdc0a3cea9effa362f4a5fc41c22171596119d96330.scope - libcontainer container 8d5b48092de926ec6099ccdc0a3cea9effa362f4a5fc41c22171596119d96330. May 8 00:39:23.167711 containerd[1544]: time="2025-05-08T00:39:23.167553146Z" level=info msg="StartContainer for \"562b76ab6026f2208903004a31dd75420f97b8876a221721615d98cd311c71e9\" returns successfully" May 8 00:39:23.172127 containerd[1544]: time="2025-05-08T00:39:23.172079321Z" level=info msg="StartContainer for \"1c63ca376091e23673681c95162225dd5192c7e641fa9e701419f3b44b7109bf\" returns successfully" May 8 00:39:23.173509 containerd[1544]: time="2025-05-08T00:39:23.173174972Z" level=info msg="StartContainer for \"8d5b48092de926ec6099ccdc0a3cea9effa362f4a5fc41c22171596119d96330\" returns successfully" May 8 00:39:23.173558 kubelet[2343]: E0508 00:39:23.173330 2343 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" May 8 00:39:23.293331 kubelet[2343]: E0508 00:39:23.293315 2343 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:39:23.293631 kubelet[2343]: E0508 00:39:23.293611 2343 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:39:23.295078 kubelet[2343]: E0508 00:39:23.295064 2343 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:39:24.297321 kubelet[2343]: E0508 00:39:24.297301 2343 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:39:24.297578 kubelet[2343]: E0508 00:39:24.297472 2343 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:39:24.455247 kubelet[2343]: I0508 00:39:24.455225 2343 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:39:24.771148 kubelet[2343]: E0508 00:39:24.771068 2343 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:39:24.957718 kubelet[2343]: I0508 00:39:24.957622 2343 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 8 00:39:24.957718 kubelet[2343]: E0508 00:39:24.957649 2343 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 8 00:39:24.967427 kubelet[2343]: E0508 00:39:24.967409 2343 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:39:25.025058 kubelet[2343]: I0508 00:39:25.024910 2343 
kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:39:25.043289 kubelet[2343]: E0508 00:39:25.043260 2343 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 8 00:39:25.043289 kubelet[2343]: I0508 00:39:25.043281 2343 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 00:39:25.044393 kubelet[2343]: E0508 00:39:25.044362 2343 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 8 00:39:25.044393 kubelet[2343]: I0508 00:39:25.044382 2343 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:39:25.045321 kubelet[2343]: E0508 00:39:25.045303 2343 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 8 00:39:25.091744 kubelet[2343]: I0508 00:39:25.091715 2343 apiserver.go:52] "Watching apiserver" May 8 00:39:25.128823 kubelet[2343]: I0508 00:39:25.128753 2343 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:39:25.297471 kubelet[2343]: I0508 00:39:25.297454 2343 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:39:25.298956 kubelet[2343]: E0508 00:39:25.298860 2343 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 8 00:39:26.298552 kubelet[2343]: I0508 00:39:26.298360 2343 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:39:27.129463 systemd[1]: Reloading requested from client PID 2620 ('systemctl') (unit session-9.scope)... May 8 00:39:27.129498 systemd[1]: Reloading... May 8 00:39:27.193977 zram_generator::config[2664]: No configuration found. May 8 00:39:27.254492 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 8 00:39:27.271293 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:39:27.323254 systemd[1]: Reloading finished in 193 ms. May 8 00:39:27.348806 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:39:27.361718 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:39:27.361872 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:39:27.368556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:39:27.636252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
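The restart is easy to follow in the records: everything before it is logged by `kubelet[2343]`, everything after by `kubelet[2724]`, and the monotonic offsets (`m=+...`) reset with the new process. When slicing a capture like this one, grouping by that bracketed PID cleanly separates the two kubelet runs; a small sketch assuming the `kubelet[PID]:` syslog prefix used throughout:

```python
# Group journal lines by the kubelet PID in the syslog prefix, e.g.
# "kubelet[2343]:" before the restart vs "kubelet[2724]:" after it.
import re
from collections import defaultdict

PID_RE = re.compile(r"kubelet\[(\d+)\]")

def runs_by_pid(lines):
    runs = defaultdict(list)
    for line in lines:
        m = PID_RE.search(line)
        if m:
            runs[int(m.group(1))].append(line)
    return runs
```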
May 8 00:39:27.643259 (kubelet)[2724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:39:27.717593 kubelet[2724]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:39:27.717593 kubelet[2724]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 00:39:27.717593 kubelet[2724]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:39:27.717593 kubelet[2724]: I0508 00:39:27.715263 2724 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:39:27.724188 kubelet[2724]: I0508 00:39:27.723936 2724 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 00:39:27.724304 kubelet[2724]: I0508 00:39:27.724295 2724 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:39:27.724522 kubelet[2724]: I0508 00:39:27.724514 2724 server.go:954] "Client rotation is on, will bootstrap in background" May 8 00:39:27.725364 kubelet[2724]: I0508 00:39:27.725354 2724 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:39:27.729679 kubelet[2724]: I0508 00:39:27.729655 2724 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:39:27.738909 kubelet[2724]: E0508 00:39:27.738878 2724 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:39:27.738909 kubelet[2724]: I0508 00:39:27.738905 2724 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:39:27.740979 kubelet[2724]: I0508 00:39:27.740957 2724 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:39:27.741707 kubelet[2724]: I0508 00:39:27.741678 2724 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:39:27.741827 kubelet[2724]: I0508 00:39:27.741707 2724 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:39:27.741894 kubelet[2724]: I0508 00:39:27.741828 2724 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:39:27.741894 kubelet[2724]: I0508 00:39:27.741835 2724 container_manager_linux.go:304] "Creating device plugin manager" May 8 00:39:27.743868 kubelet[2724]: I0508 00:39:27.743853 2724 state_mem.go:36] "Initialized new in-memory state store" May 8 00:39:27.744574 kubelet[2724]: I0508 00:39:27.744007 2724 kubelet.go:446] "Attempting to sync node with API server" May 8 00:39:27.744574 kubelet[2724]: I0508 00:39:27.744020 2724 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:39:27.744574 kubelet[2724]: I0508 00:39:27.744033 2724 kubelet.go:352] "Adding apiserver pod source" May 8 00:39:27.744574 kubelet[2724]: I0508 00:39:27.744040 2724 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:39:27.744915 kubelet[2724]: I0508 00:39:27.744896 2724 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:39:27.745227 kubelet[2724]: I0508 00:39:27.745199 2724 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:39:27.754117 kubelet[2724]: I0508 00:39:27.754097 2724 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 00:39:27.754244 kubelet[2724]: I0508 00:39:27.754238 2724 server.go:1287] "Started kubelet" May 8 00:39:27.755067 kubelet[2724]: I0508 00:39:27.755049 2724 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:39:27.756532 kubelet[2724]: I0508 00:39:27.756436 2724 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" May 8 00:39:27.757153 kubelet[2724]: I0508 00:39:27.757143 2724 server.go:490] "Adding debug handlers to kubelet server" May 8 00:39:27.760971 kubelet[2724]: I0508 00:39:27.759601 2724 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:39:27.760971 kubelet[2724]: I0508 00:39:27.759769 2724 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:39:27.760971 kubelet[2724]: I0508 00:39:27.760184 2724 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:39:27.762697 kubelet[2724]: I0508 00:39:27.762685 2724 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 00:39:27.764202 kubelet[2724]: I0508 00:39:27.764188 2724 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:39:27.764358 kubelet[2724]: I0508 00:39:27.764351 2724 reconciler.go:26] "Reconciler: start to sync state" May 8 00:39:27.766010 kubelet[2724]: I0508 00:39:27.765934 2724 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:39:27.771976 kubelet[2724]: I0508 00:39:27.770072 2724 factory.go:221] Registration of the containerd container factory successfully May 8 00:39:27.771976 kubelet[2724]: I0508 00:39:27.770085 2724 factory.go:221] Registration of the systemd container factory successfully May 8 00:39:27.773878 kubelet[2724]: I0508 00:39:27.773852 2724 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:39:27.774506 kubelet[2724]: I0508 00:39:27.774494 2724 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:39:27.774536 kubelet[2724]: I0508 00:39:27.774512 2724 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:39:27.774536 kubelet[2724]: I0508 00:39:27.774526 2724 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 8 00:39:27.774536 kubelet[2724]: I0508 00:39:27.774530 2724 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:39:27.774593 kubelet[2724]: E0508 00:39:27.774556 2724 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:39:27.780751 kubelet[2724]: E0508 00:39:27.780707 2724 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:39:27.808177 kubelet[2724]: I0508 00:39:27.808102 2724 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 00:39:27.808177 kubelet[2724]: I0508 00:39:27.808115 2724 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:39:27.808177 kubelet[2724]: I0508 00:39:27.808129 2724 state_mem.go:36] "Initialized new in-memory state store" May 8 00:39:27.808571 kubelet[2724]: I0508 00:39:27.808463 2724 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:39:27.808571 kubelet[2724]: I0508 00:39:27.808473 2724 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:39:27.808571 kubelet[2724]: I0508 00:39:27.808497 2724 policy_none.go:49] "None policy: Start" May 8 00:39:27.808571 kubelet[2724]: I0508 00:39:27.808503 2724 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:39:27.808571 kubelet[2724]: I0508 00:39:27.808511 2724 state_mem.go:35] "Initializing new in-memory state store" May 8 00:39:27.808829 kubelet[2724]: I0508 00:39:27.808728 2724 state_mem.go:75] "Updated machine memory state" May 8 00:39:27.812886 kubelet[2724]: I0508 00:39:27.812868 2724 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:39:27.812886 kubelet[2724]: I0508 00:39:27.812996 2724 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:39:27.812886 kubelet[2724]: I0508 00:39:27.813003 2724 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:39:27.813289 kubelet[2724]: I0508 00:39:27.813226 2724 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:39:27.815447 kubelet[2724]: E0508 00:39:27.815297 2724 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 8 00:39:27.875772 kubelet[2724]: I0508 00:39:27.875699 2724 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:39:27.876980 kubelet[2724]: I0508 00:39:27.876438 2724 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:39:27.876980 kubelet[2724]: I0508 00:39:27.876654 2724 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 00:39:27.881079 kubelet[2724]: E0508 00:39:27.881053 2724 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:39:27.917333 kubelet[2724]: I0508 00:39:27.917234 2724 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:39:27.921359 kubelet[2724]: I0508 00:39:27.921333 2724 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 8 00:39:27.921498 kubelet[2724]: I0508 00:39:27.921403 2724 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 8 00:39:27.966059 kubelet[2724]: I0508 00:39:27.966010 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:27.966059 kubelet[2724]: I0508 00:39:27.966043 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:27.966059 kubelet[2724]: I0508 00:39:27.966055 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:27.966059 kubelet[2724]: I0508 00:39:27.966066 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:27.966267 kubelet[2724]: I0508 00:39:27.966076 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:27.966267 kubelet[2724]: I0508 00:39:27.966090 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " 
pod="kube-system/kube-scheduler-localhost" May 8 00:39:27.966267 kubelet[2724]: I0508 00:39:27.966098 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/869b8e471e64cd87cf7890edec21cfa0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"869b8e471e64cd87cf7890edec21cfa0\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:27.966267 kubelet[2724]: I0508 00:39:27.966107 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/869b8e471e64cd87cf7890edec21cfa0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"869b8e471e64cd87cf7890edec21cfa0\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:27.966267 kubelet[2724]: I0508 00:39:27.966119 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/869b8e471e64cd87cf7890edec21cfa0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"869b8e471e64cd87cf7890edec21cfa0\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:28.746480 kubelet[2724]: I0508 00:39:28.745965 2724 apiserver.go:52] "Watching apiserver" May 8 00:39:28.765338 kubelet[2724]: I0508 00:39:28.765294 2724 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:39:28.798597 kubelet[2724]: I0508 00:39:28.798560 2724 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:39:28.798721 kubelet[2724]: I0508 00:39:28.798713 2724 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:39:28.798961 kubelet[2724]: I0508 00:39:28.798826 2724 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 00:39:28.816533 kubelet[2724]: E0508 00:39:28.816369 2724 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:39:28.837934 kubelet[2724]: E0508 00:39:28.837911 2724 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 8 00:39:28.838111 kubelet[2724]: E0508 00:39:28.838102 2724 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 8 00:39:28.862096 kubelet[2724]: I0508 00:39:28.862047 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.86203321 podStartE2EDuration="2.86203321s" podCreationTimestamp="2025-05-08 00:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:28.85101827 +0000 UTC m=+1.200644877" watchObservedRunningTime="2025-05-08 00:39:28.86203321 +0000 UTC m=+1.211659813" May 8 00:39:28.870345 kubelet[2724]: I0508 00:39:28.869866 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.869854025 podStartE2EDuration="1.869854025s" podCreationTimestamp="2025-05-08 00:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:28.862526799 +0000 UTC m=+1.212153400" watchObservedRunningTime="2025-05-08 00:39:28.869854025 +0000 UTC m=+1.219480629" May 8 00:39:28.878948 kubelet[2724]: I0508 00:39:28.878515 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.878503774 podStartE2EDuration="1.878503774s" podCreationTimestamp="2025-05-08 00:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:28.870004541 +0000 UTC m=+1.219631144" watchObservedRunningTime="2025-05-08 00:39:28.878503774 +0000 UTC m=+1.228130377" May 8 00:39:32.093543 sudo[1828]: pam_unix(sudo:session): session closed for user root May 8 00:39:32.095011 sshd[1825]: pam_unix(sshd:session): session closed for user core May 8 00:39:32.096527 systemd[1]: sshd@6-139.178.70.100:22-139.178.68.195:53730.service: Deactivated successfully. May 8 00:39:32.097730 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:39:32.097878 systemd[1]: session-9.scope: Consumed 2.659s CPU time, 141.6M memory peak, 0B memory swap peak. May 8 00:39:32.098596 systemd-logind[1518]: Session 9 logged out. Waiting for processes to exit. May 8 00:39:32.099407 systemd-logind[1518]: Removed session 9. May 8 00:39:32.314291 kubelet[2724]: I0508 00:39:32.314263 2724 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:39:32.314575 containerd[1544]: time="2025-05-08T00:39:32.314517693Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 00:39:32.314830 kubelet[2724]: I0508 00:39:32.314640 2724 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:39:32.972378 systemd[1]: Created slice kubepods-besteffort-poda6b4d98f_ff93_4265_a9e0_5f8e7913eabe.slice - libcontainer container kubepods-besteffort-poda6b4d98f_ff93_4265_a9e0_5f8e7913eabe.slice. 
May 8 00:39:32.996521 kubelet[2724]: I0508 00:39:32.996394 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6b4d98f-ff93-4265-a9e0-5f8e7913eabe-xtables-lock\") pod \"kube-proxy-bdkzc\" (UID: \"a6b4d98f-ff93-4265-a9e0-5f8e7913eabe\") " pod="kube-system/kube-proxy-bdkzc" May 8 00:39:32.996521 kubelet[2724]: I0508 00:39:32.996431 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a6b4d98f-ff93-4265-a9e0-5f8e7913eabe-kube-proxy\") pod \"kube-proxy-bdkzc\" (UID: \"a6b4d98f-ff93-4265-a9e0-5f8e7913eabe\") " pod="kube-system/kube-proxy-bdkzc" May 8 00:39:32.996521 kubelet[2724]: I0508 00:39:32.996446 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6b4d98f-ff93-4265-a9e0-5f8e7913eabe-lib-modules\") pod \"kube-proxy-bdkzc\" (UID: \"a6b4d98f-ff93-4265-a9e0-5f8e7913eabe\") " pod="kube-system/kube-proxy-bdkzc" May 8 00:39:32.996521 kubelet[2724]: I0508 00:39:32.996457 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh6l9\" (UniqueName: \"kubernetes.io/projected/a6b4d98f-ff93-4265-a9e0-5f8e7913eabe-kube-api-access-dh6l9\") pod \"kube-proxy-bdkzc\" (UID: \"a6b4d98f-ff93-4265-a9e0-5f8e7913eabe\") " pod="kube-system/kube-proxy-bdkzc" May 8 00:39:33.281988 containerd[1544]: time="2025-05-08T00:39:33.281961992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bdkzc,Uid:a6b4d98f-ff93-4265-a9e0-5f8e7913eabe,Namespace:kube-system,Attempt:0,}" May 8 00:39:33.332653 containerd[1544]: time="2025-05-08T00:39:33.332204154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:33.332653 containerd[1544]: time="2025-05-08T00:39:33.332602632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:33.333005 containerd[1544]: time="2025-05-08T00:39:33.332643756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:33.333005 containerd[1544]: time="2025-05-08T00:39:33.332770666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:33.348090 systemd[1]: Started cri-containerd-95f66ad625fee8813a56728b0589d7ce29097d7f49c7a523257ed1793982e649.scope - libcontainer container 95f66ad625fee8813a56728b0589d7ce29097d7f49c7a523257ed1793982e649. 
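The four volumes attached to kube-proxy-bdkzc span three plugin families, which the `UniqueName` prefix encodes: `kubernetes.io/host-path/...` for xtables-lock and lib-modules, `kubernetes.io/configmap/...` for the kube-proxy config, and `kubernetes.io/projected/...` for the service-account token. A loose extraction sketch; the escaping matches the `\"` quoting in these journal lines, not any stable format:

```python
# Pull (plugin, volume) pairs out of reconciler_common records, e.g.
# ("host-path", "a6b4d98f-...-xtables-lock") from the lines above.
import re

UNIQ_RE = re.compile(r'UniqueName: \\?"kubernetes\.io/([a-z-]+)/([^"\\]+)\\?"')

def volume_plugins(line):
    return UNIQ_RE.findall(line)
```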
May 8 00:39:33.366435 containerd[1544]: time="2025-05-08T00:39:33.366332196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bdkzc,Uid:a6b4d98f-ff93-4265-a9e0-5f8e7913eabe,Namespace:kube-system,Attempt:0,} returns sandbox id \"95f66ad625fee8813a56728b0589d7ce29097d7f49c7a523257ed1793982e649\"" May 8 00:39:33.369869 containerd[1544]: time="2025-05-08T00:39:33.369658998Z" level=info msg="CreateContainer within sandbox \"95f66ad625fee8813a56728b0589d7ce29097d7f49c7a523257ed1793982e649\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:39:33.391130 systemd[1]: Created slice kubepods-besteffort-pod9d577479_de7d_4ccf_940b_e02820ca23fb.slice - libcontainer container kubepods-besteffort-pod9d577479_de7d_4ccf_940b_e02820ca23fb.slice. May 8 00:39:33.398834 kubelet[2724]: I0508 00:39:33.398764 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgr9w\" (UniqueName: \"kubernetes.io/projected/9d577479-de7d-4ccf-940b-e02820ca23fb-kube-api-access-rgr9w\") pod \"tigera-operator-789496d6f5-f4xdz\" (UID: \"9d577479-de7d-4ccf-940b-e02820ca23fb\") " pod="tigera-operator/tigera-operator-789496d6f5-f4xdz" May 8 00:39:33.398834 kubelet[2724]: I0508 00:39:33.398796 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9d577479-de7d-4ccf-940b-e02820ca23fb-var-lib-calico\") pod \"tigera-operator-789496d6f5-f4xdz\" (UID: \"9d577479-de7d-4ccf-940b-e02820ca23fb\") " pod="tigera-operator/tigera-operator-789496d6f5-f4xdz" May 8 00:39:33.405530 containerd[1544]: time="2025-05-08T00:39:33.405509279Z" level=info msg="CreateContainer within sandbox \"95f66ad625fee8813a56728b0589d7ce29097d7f49c7a523257ed1793982e649\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7685437f5e1184037f170a31ea221592d28c07a04432ed7f36d1ab1cfa6f7cfb\"" May 8 00:39:33.406983 containerd[1544]: time="2025-05-08T00:39:33.406114138Z" level=info msg="StartContainer for \"7685437f5e1184037f170a31ea221592d28c07a04432ed7f36d1ab1cfa6f7cfb\"" May 8 00:39:33.429115 systemd[1]: Started cri-containerd-7685437f5e1184037f170a31ea221592d28c07a04432ed7f36d1ab1cfa6f7cfb.scope - libcontainer container 7685437f5e1184037f170a31ea221592d28c07a04432ed7f36d1ab1cfa6f7cfb. May 8 00:39:33.446701 containerd[1544]: time="2025-05-08T00:39:33.446638214Z" level=info msg="StartContainer for \"7685437f5e1184037f170a31ea221592d28c07a04432ed7f36d1ab1cfa6f7cfb\" returns successfully" May 8 00:39:33.694558 containerd[1544]: time="2025-05-08T00:39:33.694436761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-f4xdz,Uid:9d577479-de7d-4ccf-940b-e02820ca23fb,Namespace:tigera-operator,Attempt:0,}" May 8 00:39:33.710602 containerd[1544]: time="2025-05-08T00:39:33.710466527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:33.710602 containerd[1544]: time="2025-05-08T00:39:33.710506178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:33.710602 containerd[1544]: time="2025-05-08T00:39:33.710527816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:39:33.711244 containerd[1544]: time="2025-05-08T00:39:33.711201346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:39:33.728072 systemd[1]: Started cri-containerd-5329c7f646b877b38794a2ea6f8d249a984e3b2e6c57c016cf804be91e1d1bf8.scope - libcontainer container 5329c7f646b877b38794a2ea6f8d249a984e3b2e6c57c016cf804be91e1d1bf8.
May 8 00:39:33.758801 containerd[1544]: time="2025-05-08T00:39:33.758763930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-f4xdz,Uid:9d577479-de7d-4ccf-940b-e02820ca23fb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5329c7f646b877b38794a2ea6f8d249a984e3b2e6c57c016cf804be91e1d1bf8\""
May 8 00:39:33.760637 containerd[1544]: time="2025-05-08T00:39:33.760617994Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 8 00:39:33.817577 kubelet[2724]: I0508 00:39:33.817442 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bdkzc" podStartSLOduration=1.817426211 podStartE2EDuration="1.817426211s" podCreationTimestamp="2025-05-08 00:39:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:33.817404427 +0000 UTC m=+6.167031059" watchObservedRunningTime="2025-05-08 00:39:33.817426211 +0000 UTC m=+6.167052816"
May 8 00:39:34.110149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3781606485.mount: Deactivated successfully.
May 8 00:39:35.001903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1249346210.mount: Deactivated successfully.
May 8 00:39:35.725617 containerd[1544]: time="2025-05-08T00:39:35.725432626Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:35.726037 containerd[1544]: time="2025-05-08T00:39:35.725989999Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:35.726037 containerd[1544]: time="2025-05-08T00:39:35.726016179Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
May 8 00:39:35.728764 containerd[1544]: time="2025-05-08T00:39:35.728741065Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:35.729222 containerd[1544]: time="2025-05-08T00:39:35.729146377Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 1.968481736s"
May 8 00:39:35.729222 containerd[1544]: time="2025-05-08T00:39:35.729165135Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
May 8 00:39:35.730574 containerd[1544]: time="2025-05-08T00:39:35.730557307Z" level=info msg="CreateContainer within sandbox \"5329c7f646b877b38794a2ea6f8d249a984e3b2e6c57c016cf804be91e1d1bf8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 8 00:39:35.744088 containerd[1544]: time="2025-05-08T00:39:35.744010628Z" level=info msg="CreateContainer within sandbox \"5329c7f646b877b38794a2ea6f8d249a984e3b2e6c57c016cf804be91e1d1bf8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8cc6158f8f76f796c28434dc3756384e28addad50f7a9fa7edeaea58bb93571d\""
May 8 00:39:35.744427 containerd[1544]: time="2025-05-08T00:39:35.744329162Z" level=info msg="StartContainer for \"8cc6158f8f76f796c28434dc3756384e28addad50f7a9fa7edeaea58bb93571d\""
May 8 00:39:35.770097 systemd[1]: Started cri-containerd-8cc6158f8f76f796c28434dc3756384e28addad50f7a9fa7edeaea58bb93571d.scope - libcontainer container 8cc6158f8f76f796c28434dc3756384e28addad50f7a9fa7edeaea58bb93571d.
May 8 00:39:35.788686 containerd[1544]: time="2025-05-08T00:39:35.788565905Z" level=info msg="StartContainer for \"8cc6158f8f76f796c28434dc3756384e28addad50f7a9fa7edeaea58bb93571d\" returns successfully"
May 8 00:39:38.336023 kubelet[2724]: I0508 00:39:38.335828 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-f4xdz" podStartSLOduration=3.365574935 podStartE2EDuration="5.335815959s" podCreationTimestamp="2025-05-08 00:39:33 +0000 UTC" firstStartedPulling="2025-05-08 00:39:33.759399386 +0000 UTC m=+6.109025985" lastFinishedPulling="2025-05-08 00:39:35.729640408 +0000 UTC m=+8.079267009" observedRunningTime="2025-05-08 00:39:35.8336818 +0000 UTC m=+8.183308419" watchObservedRunningTime="2025-05-08 00:39:38.335815959 +0000 UTC m=+10.685442558"
May 8 00:39:38.917751 systemd[1]: Created slice kubepods-besteffort-poddb640756_fea4_4ecc_b315_6f7ce3318342.slice - libcontainer container kubepods-besteffort-poddb640756_fea4_4ecc_b315_6f7ce3318342.slice.
May 8 00:39:38.918704 kubelet[2724]: I0508 00:39:38.918670 2724 status_manager.go:890] "Failed to get status for pod" podUID="db640756-fea4-4ecc-b315-6f7ce3318342" pod="calico-system/calico-typha-844b4d9c87-rbvhl" err="pods \"calico-typha-844b4d9c87-rbvhl\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object"
May 8 00:39:38.923277 kubelet[2724]: W0508 00:39:38.921741 2724 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object
May 8 00:39:38.923277 kubelet[2724]: E0508 00:39:38.921793 2724 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
May 8 00:39:38.923277 kubelet[2724]: W0508 00:39:38.921743 2724 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object
May 8 00:39:38.923277 kubelet[2724]: W0508 00:39:38.921826 2724 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object
May 8 00:39:38.923277 kubelet[2724]: E0508 00:39:38.921852 2724 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
May 8 00:39:38.923470 kubelet[2724]: E0508 00:39:38.921820 2724 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
May 8 00:39:39.030989 kubelet[2724]: I0508 00:39:39.030932 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db640756-fea4-4ecc-b315-6f7ce3318342-tigera-ca-bundle\") pod \"calico-typha-844b4d9c87-rbvhl\" (UID: \"db640756-fea4-4ecc-b315-6f7ce3318342\") " pod="calico-system/calico-typha-844b4d9c87-rbvhl"
May 8 00:39:39.030989 kubelet[2724]: I0508 00:39:39.030987 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhfhm\" (UniqueName: \"kubernetes.io/projected/db640756-fea4-4ecc-b315-6f7ce3318342-kube-api-access-xhfhm\") pod \"calico-typha-844b4d9c87-rbvhl\" (UID: \"db640756-fea4-4ecc-b315-6f7ce3318342\") " pod="calico-system/calico-typha-844b4d9c87-rbvhl"
May 8 00:39:39.031122 kubelet[2724]: I0508 00:39:39.031005 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/db640756-fea4-4ecc-b315-6f7ce3318342-typha-certs\") pod \"calico-typha-844b4d9c87-rbvhl\" (UID: \"db640756-fea4-4ecc-b315-6f7ce3318342\") " pod="calico-system/calico-typha-844b4d9c87-rbvhl"
May 8 00:39:39.139785 systemd[1]: Created slice kubepods-besteffort-podf34e9eef_ef24_4654_80e2_7e5383be17b9.slice - libcontainer container kubepods-besteffort-podf34e9eef_ef24_4654_80e2_7e5383be17b9.slice.
May 8 00:39:39.232612 kubelet[2724]: I0508 00:39:39.232089 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f34e9eef-ef24-4654-80e2-7e5383be17b9-tigera-ca-bundle\") pod \"calico-node-9fk5s\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " pod="calico-system/calico-node-9fk5s"
May 8 00:39:39.232690 kubelet[2724]: I0508 00:39:39.232628 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-xtables-lock\") pod \"calico-node-9fk5s\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " pod="calico-system/calico-node-9fk5s"
May 8 00:39:39.232690 kubelet[2724]: I0508 00:39:39.232646 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-var-lib-calico\") pod \"calico-node-9fk5s\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " pod="calico-system/calico-node-9fk5s"
May 8 00:39:39.232690 kubelet[2724]: I0508 00:39:39.232661 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-cni-net-dir\") pod \"calico-node-9fk5s\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " pod="calico-system/calico-node-9fk5s"
May 8 00:39:39.232690 kubelet[2724]: I0508 00:39:39.232675 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-flexvol-driver-host\") pod \"calico-node-9fk5s\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " pod="calico-system/calico-node-9fk5s"
May 8 00:39:39.232763 kubelet[2724]: I0508 00:39:39.232699 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-policysync\") pod \"calico-node-9fk5s\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " pod="calico-system/calico-node-9fk5s"
May 8 00:39:39.232763 kubelet[2724]: I0508 00:39:39.232716 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-cni-log-dir\") pod \"calico-node-9fk5s\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " pod="calico-system/calico-node-9fk5s"
May 8 00:39:39.232763 kubelet[2724]: I0508 00:39:39.232734 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-lib-modules\") pod \"calico-node-9fk5s\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " pod="calico-system/calico-node-9fk5s"
May 8 00:39:39.232763 kubelet[2724]: I0508 00:39:39.232751 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-var-run-calico\") pod \"calico-node-9fk5s\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " pod="calico-system/calico-node-9fk5s"
May 8 00:39:39.232830 kubelet[2724]: I0508 00:39:39.232767 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn25c\" (UniqueName: \"kubernetes.io/projected/f34e9eef-ef24-4654-80e2-7e5383be17b9-kube-api-access-xn25c\") pod \"calico-node-9fk5s\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " pod="calico-system/calico-node-9fk5s"
May 8 00:39:39.232830 kubelet[2724]: I0508 00:39:39.232784 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f34e9eef-ef24-4654-80e2-7e5383be17b9-node-certs\") pod \"calico-node-9fk5s\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " pod="calico-system/calico-node-9fk5s"
May 8 00:39:39.232830 kubelet[2724]: I0508 00:39:39.232799 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-cni-bin-dir\") pod \"calico-node-9fk5s\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " pod="calico-system/calico-node-9fk5s"
May 8 00:39:39.232830 kubelet[2724]: E0508 00:39:39.232179 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xqv5x" podUID="91493713-d7eb-4156-9aba-7e866dca9c56"
May 8 00:39:39.333670 kubelet[2724]: I0508 00:39:39.333424 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/91493713-d7eb-4156-9aba-7e866dca9c56-varrun\") pod \"csi-node-driver-xqv5x\" (UID: \"91493713-d7eb-4156-9aba-7e866dca9c56\") " pod="calico-system/csi-node-driver-xqv5x"
May 8 00:39:39.333670 kubelet[2724]: I0508 00:39:39.333464 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/91493713-d7eb-4156-9aba-7e866dca9c56-socket-dir\") pod \"csi-node-driver-xqv5x\" (UID: \"91493713-d7eb-4156-9aba-7e866dca9c56\") " pod="calico-system/csi-node-driver-xqv5x"
May 8 00:39:39.333670 kubelet[2724]: I0508 00:39:39.333475 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/91493713-d7eb-4156-9aba-7e866dca9c56-registration-dir\") pod \"csi-node-driver-xqv5x\" (UID: \"91493713-d7eb-4156-9aba-7e866dca9c56\") " pod="calico-system/csi-node-driver-xqv5x"
May 8 00:39:39.334459 kubelet[2724]: I0508 00:39:39.333904 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd66j\" (UniqueName: \"kubernetes.io/projected/91493713-d7eb-4156-9aba-7e866dca9c56-kube-api-access-fd66j\") pod \"csi-node-driver-xqv5x\" (UID: \"91493713-d7eb-4156-9aba-7e866dca9c56\") " pod="calico-system/csi-node-driver-xqv5x"
May 8 00:39:39.334459 kubelet[2724]: I0508 00:39:39.333923 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91493713-d7eb-4156-9aba-7e866dca9c56-kubelet-dir\") pod \"csi-node-driver-xqv5x\" (UID: \"91493713-d7eb-4156-9aba-7e866dca9c56\") " pod="calico-system/csi-node-driver-xqv5x"
May 8 00:39:39.344888 kubelet[2724]: E0508 00:39:39.344872 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:39:39.345191 kubelet[2724]: W0508 00:39:39.345173 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:39:39.348720 kubelet[2724]: E0508 00:39:39.345279 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:39:39.348720 kubelet[2724]: E0508 00:39:39.345388 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:39:39.348720 kubelet[2724]: W0508 00:39:39.345394 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:39:39.348720 kubelet[2724]: E0508 00:39:39.345399 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:39:39.348720 kubelet[2724]: E0508 00:39:39.345486 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:39:39.348720 kubelet[2724]: W0508 00:39:39.345491 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:39:39.348720 kubelet[2724]: E0508 00:39:39.345496 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:39:39.348720 kubelet[2724]: E0508 00:39:39.345606 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:39:39.348720 kubelet[2724]: W0508 00:39:39.345611 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:39:39.348720 kubelet[2724]: E0508 00:39:39.345617 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:39:39.533869 kubelet[2724]: E0508 00:39:39.533864 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:39:39.533869 kubelet[2724]: W0508 00:39:39.533869 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:39:39.534153 kubelet[2724]: E0508 00:39:39.533874 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:39:39.534153 kubelet[2724]: E0508 00:39:39.534014 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:39:39.534153 kubelet[2724]: W0508 00:39:39.534020 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:39:39.534153 kubelet[2724]: E0508 00:39:39.534028 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:39:39.534153 kubelet[2724]: E0508 00:39:39.534143 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:39:39.534153 kubelet[2724]: W0508 00:39:39.534147 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:39:39.534153 kubelet[2724]: E0508 00:39:39.534152 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:39:40.136183 kubelet[2724]: E0508 00:39:40.135574 2724 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition
May 8 00:39:40.136183 kubelet[2724]: E0508 00:39:40.135670 2724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db640756-fea4-4ecc-b315-6f7ce3318342-typha-certs podName:db640756-fea4-4ecc-b315-6f7ce3318342 nodeName:}" failed. No retries permitted until 2025-05-08 00:39:40.635649742 +0000 UTC m=+12.985276352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/db640756-fea4-4ecc-b315-6f7ce3318342-typha-certs") pod "calico-typha-844b4d9c87-rbvhl" (UID: "db640756-fea4-4ecc-b315-6f7ce3318342") : failed to sync secret cache: timed out waiting for the condition
May 8 00:39:40.137176 kubelet[2724]: E0508 00:39:40.137155 2724 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
May 8 00:39:40.137529 kubelet[2724]: E0508 00:39:40.137413 2724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db640756-fea4-4ecc-b315-6f7ce3318342-tigera-ca-bundle podName:db640756-fea4-4ecc-b315-6f7ce3318342 nodeName:}" failed. No retries permitted until 2025-05-08 00:39:40.637394249 +0000 UTC m=+12.987020852 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/db640756-fea4-4ecc-b315-6f7ce3318342-tigera-ca-bundle") pod "calico-typha-844b4d9c87-rbvhl" (UID: "db640756-fea4-4ecc-b315-6f7ce3318342") : failed to sync configmap cache: timed out waiting for the condition
May 8 00:39:40.142164 kubelet[2724]: E0508 00:39:40.142037 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:39:40.142164 kubelet[2724]: W0508 00:39:40.142055 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:39:40.142164 kubelet[2724]: E0508 00:39:40.142070 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:39:40.142361 kubelet[2724]: E0508 00:39:40.142350 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:39:40.142428 kubelet[2724]: W0508 00:39:40.142420 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:39:40.142477 kubelet[2724]: E0508 00:39:40.142470 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:39:40.156185 kubelet[2724]: E0508 00:39:40.156111 2724 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
May 8 00:39:40.156185 kubelet[2724]: E0508 00:39:40.156139 2724 projected.go:194] Error preparing data for projected volume kube-api-access-xhfhm for pod calico-system/calico-typha-844b4d9c87-rbvhl: failed to sync configmap cache: timed out waiting for the condition
May 8 00:39:40.156185 kubelet[2724]: E0508 00:39:40.156182 2724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db640756-fea4-4ecc-b315-6f7ce3318342-kube-api-access-xhfhm podName:db640756-fea4-4ecc-b315-6f7ce3318342 nodeName:}" failed. No retries permitted until 2025-05-08 00:39:40.656172224 +0000 UTC m=+13.005798823 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xhfhm" (UniqueName: "kubernetes.io/projected/db640756-fea4-4ecc-b315-6f7ce3318342-kube-api-access-xhfhm") pod "calico-typha-844b4d9c87-rbvhl" (UID: "db640756-fea4-4ecc-b315-6f7ce3318342") : failed to sync configmap cache: timed out waiting for the condition
May 8 00:39:40.243209 kubelet[2724]: E0508 00:39:40.243123 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:39:40.243209 kubelet[2724]: W0508 00:39:40.243139 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:39:40.243209 kubelet[2724]: E0508 00:39:40.243153 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:39:40.243364 kubelet[2724]: E0508 00:39:40.243275 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:39:40.243364 kubelet[2724]: W0508 00:39:40.243280 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:39:40.243364 kubelet[2724]: E0508 00:39:40.243286 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:39:40.243434 kubelet[2724]: E0508 00:39:40.243382 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:39:40.243434 kubelet[2724]: W0508 00:39:40.243387 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:39:40.243434 kubelet[2724]: E0508 00:39:40.243392 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:39:40.309110 kubelet[2724]: E0508 00:39:40.308045 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:39:40.309110 kubelet[2724]: W0508 00:39:40.308058 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:39:40.309110 kubelet[2724]: E0508 00:39:40.308072 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:39:40.309110 kubelet[2724]: E0508 00:39:40.309083 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:39:40.309110 kubelet[2724]: W0508 00:39:40.309089 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:39:40.309110 kubelet[2724]: E0508 00:39:40.309095 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:39:40.334543 kubelet[2724]: E0508 00:39:40.334516 2724 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
May 8 00:39:40.335039 kubelet[2724]: E0508 00:39:40.334578 2724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f34e9eef-ef24-4654-80e2-7e5383be17b9-tigera-ca-bundle podName:f34e9eef-ef24-4654-80e2-7e5383be17b9 nodeName:}" failed. No retries permitted until 2025-05-08 00:39:40.83456307 +0000 UTC m=+13.184189675 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/f34e9eef-ef24-4654-80e2-7e5383be17b9-tigera-ca-bundle") pod "calico-node-9fk5s" (UID: "f34e9eef-ef24-4654-80e2-7e5383be17b9") : failed to sync configmap cache: timed out waiting for the condition
May 8 00:39:40.344133 kubelet[2724]: E0508 00:39:40.344091 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:39:40.344133 kubelet[2724]: W0508 00:39:40.344108 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:39:40.344133 kubelet[2724]: E0508 00:39:40.344129 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:39:40.344456 kubelet[2724]: E0508 00:39:40.344298 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:39:40.344456 kubelet[2724]: W0508 00:39:40.344306 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:39:40.344456 kubelet[2724]: E0508 00:39:40.344313 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:39:40.344456 kubelet[2724]: E0508 00:39:40.344439 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:39:40.344456 kubelet[2724]: W0508 00:39:40.344445 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:39:40.344456 kubelet[2724]: E0508 00:39:40.344452 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:39:40.344682 kubelet[2724]: E0508 00:39:40.344585 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:39:40.344682 kubelet[2724]: W0508 00:39:40.344592 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:39:40.344682 kubelet[2724]: E0508 00:39:40.344598 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" May 8 00:39:40.652547 kubelet[2724]: E0508 00:39:40.652531 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:40.652641 kubelet[2724]: W0508 00:39:40.652629 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:40.652696 kubelet[2724]: E0508 00:39:40.652688 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:40.654226 kubelet[2724]: E0508 00:39:40.654178 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:40.654226 kubelet[2724]: W0508 00:39:40.654189 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:40.654226 kubelet[2724]: E0508 00:39:40.654202 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:40.749047 kubelet[2724]: E0508 00:39:40.748982 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:40.749140 kubelet[2724]: W0508 00:39:40.749131 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:40.749184 kubelet[2724]: E0508 00:39:40.749178 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:40.749359 kubelet[2724]: E0508 00:39:40.749353 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:40.749402 kubelet[2724]: W0508 00:39:40.749396 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:40.749434 kubelet[2724]: E0508 00:39:40.749429 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:40.749569 kubelet[2724]: E0508 00:39:40.749563 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:40.749606 kubelet[2724]: W0508 00:39:40.749601 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:40.749640 kubelet[2724]: E0508 00:39:40.749634 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:39:40.749754 kubelet[2724]: E0508 00:39:40.749749 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:40.749794 kubelet[2724]: W0508 00:39:40.749788 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:40.749830 kubelet[2724]: E0508 00:39:40.749824 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:40.749937 kubelet[2724]: E0508 00:39:40.749932 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:40.750000 kubelet[2724]: W0508 00:39:40.749993 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:40.750034 kubelet[2724]: E0508 00:39:40.750028 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:40.750163 kubelet[2724]: E0508 00:39:40.750157 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:40.750199 kubelet[2724]: W0508 00:39:40.750194 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:40.750234 kubelet[2724]: E0508 00:39:40.750228 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:40.753290 kubelet[2724]: E0508 00:39:40.753270 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:40.753290 kubelet[2724]: W0508 00:39:40.753281 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:40.753290 kubelet[2724]: E0508 00:39:40.753291 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:39:40.775511 kubelet[2724]: E0508 00:39:40.775475 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xqv5x" podUID="91493713-d7eb-4156-9aba-7e866dca9c56" May 8 00:39:40.850554 kubelet[2724]: E0508 00:39:40.850492 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:40.850554 kubelet[2724]: W0508 00:39:40.850507 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:40.850554 kubelet[2724]: E0508 00:39:40.850520 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:40.850879 kubelet[2724]: E0508 00:39:40.850691 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:40.850879 kubelet[2724]: W0508 00:39:40.850696 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:40.850879 kubelet[2724]: E0508 00:39:40.850705 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:40.850879 kubelet[2724]: E0508 00:39:40.850807 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:40.850879 kubelet[2724]: W0508 00:39:40.850812 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:40.850879 kubelet[2724]: E0508 00:39:40.850818 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:40.851498 kubelet[2724]: E0508 00:39:40.850903 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:40.851498 kubelet[2724]: W0508 00:39:40.850908 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:40.851498 kubelet[2724]: E0508 00:39:40.850913 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:39:40.851498 kubelet[2724]: E0508 00:39:40.851029 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:40.851498 kubelet[2724]: W0508 00:39:40.851034 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:40.851498 kubelet[2724]: E0508 00:39:40.851039 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:40.851776 kubelet[2724]: E0508 00:39:40.851765 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:40.851776 kubelet[2724]: W0508 00:39:40.851773 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:40.851823 kubelet[2724]: E0508 00:39:40.851779 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:40.942546 containerd[1544]: time="2025-05-08T00:39:40.942510085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9fk5s,Uid:f34e9eef-ef24-4654-80e2-7e5383be17b9,Namespace:calico-system,Attempt:0,}" May 8 00:39:40.999014 containerd[1544]: time="2025-05-08T00:39:40.998916075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:40.999014 containerd[1544]: time="2025-05-08T00:39:40.998987296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:40.999014 containerd[1544]: time="2025-05-08T00:39:40.999009964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:40.999405 containerd[1544]: time="2025-05-08T00:39:40.999091596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:41.023036 systemd[1]: Started cri-containerd-00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928.scope - libcontainer container 00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928. May 8 00:39:41.025930 containerd[1544]: time="2025-05-08T00:39:41.025873222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-844b4d9c87-rbvhl,Uid:db640756-fea4-4ecc-b315-6f7ce3318342,Namespace:calico-system,Attempt:0,}" May 8 00:39:41.040817 containerd[1544]: time="2025-05-08T00:39:41.040767803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9fk5s,Uid:f34e9eef-ef24-4654-80e2-7e5383be17b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\"" May 8 00:39:41.041919 containerd[1544]: time="2025-05-08T00:39:41.041776136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 8 00:39:41.068103 containerd[1544]: time="2025-05-08T00:39:41.067890912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
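The kubelet burst above is one failure repeating on every plugin-directory rescan: the FlexVolume prober execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and decodes stdout as JSON, but no such executable exists on this host, so stdout is empty and the decode fails with "unexpected end of JSON input". A minimal Go sketch of the driver side of that call convention (illustrative only: driverStatus and its fields are assumptions based on the FlexVolume contract, not kubelet source):

package main

import (
	"encoding/json"
	"os"
)

// driverStatus mirrors the JSON shape a FlexVolume driver must print on
// stdout; this field set is an illustrative subset, not the full spec.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	reply := driverStatus{Status: "Not supported"}
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// init must succeed and may advertise capabilities; attach=false
		// tells the kubelet to skip attach/detach calls for this driver.
		reply = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
	}
	json.NewEncoder(os.Stdout).Encode(reply) // never exit with empty stdout
}

Any real driver dropped into that directory has to answer every invocation with a JSON status on stdout; the missing uds binary can never do that, so the probe fails again on each rescan.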
May 8 00:39:41.068103 containerd[1544]: time="2025-05-08T00:39:41.067890912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:41.068103 containerd[1544]: time="2025-05-08T00:39:41.067926829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:41.068103 containerd[1544]: time="2025-05-08T00:39:41.067964975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:41.068103 containerd[1544]: time="2025-05-08T00:39:41.068032353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:41.085119 systemd[1]: Started cri-containerd-887c615cbabd0b97fd5067d3974d3940e89a3cda0efb3468394358a8268e3733.scope - libcontainer container 887c615cbabd0b97fd5067d3974d3940e89a3cda0efb3468394358a8268e3733. May 8 00:39:41.122242 containerd[1544]: time="2025-05-08T00:39:41.122132504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-844b4d9c87-rbvhl,Uid:db640756-fea4-4ecc-b315-6f7ce3318342,Namespace:calico-system,Attempt:0,} returns sandbox id \"887c615cbabd0b97fd5067d3974d3940e89a3cda0efb3468394358a8268e3733\"" May 8 00:39:42.254154 kubelet[2724]: E0508 00:39:42.254136 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:42.254488 kubelet[2724]: W0508 00:39:42.254188 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:42.254488 kubelet[2724]: E0508 00:39:42.254203 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:42.728065 containerd[1544]: time="2025-05-08T00:39:42.727978727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:42.728564 containerd[1544]: time="2025-05-08T00:39:42.728477140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 8 00:39:42.728997 containerd[1544]: time="2025-05-08T00:39:42.728857053Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:42.730078 containerd[1544]: time="2025-05-08T00:39:42.730059116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:42.730485 containerd[1544]: time="2025-05-08T00:39:42.730464287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.688646319s" May 8 00:39:42.730532 containerd[1544]: time="2025-05-08T00:39:42.730484842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 8 00:39:42.731443 containerd[1544]: time="2025-05-08T00:39:42.731421058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 8 00:39:42.733818 containerd[1544]: time="2025-05-08T00:39:42.733749139Z" level=info msg="CreateContainer within sandbox \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:39:42.760682
containerd[1544]: time="2025-05-08T00:39:42.760640905Z" level=info msg="CreateContainer within sandbox \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1161a28ee59dfb660fc18e7813f16c527521a08415b92cc00a88b7ec95385a44\"" May 8 00:39:42.761122 containerd[1544]: time="2025-05-08T00:39:42.761024797Z" level=info msg="StartContainer for \"1161a28ee59dfb660fc18e7813f16c527521a08415b92cc00a88b7ec95385a44\"" May 8 00:39:42.775493 kubelet[2724]: E0508 00:39:42.775291 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xqv5x" podUID="91493713-d7eb-4156-9aba-7e866dca9c56" May 8 00:39:42.815121 systemd[1]: Started cri-containerd-1161a28ee59dfb660fc18e7813f16c527521a08415b92cc00a88b7ec95385a44.scope - libcontainer container 1161a28ee59dfb660fc18e7813f16c527521a08415b92cc00a88b7ec95385a44. May 8 00:39:42.836059 containerd[1544]: time="2025-05-08T00:39:42.835669139Z" level=info msg="StartContainer for \"1161a28ee59dfb660fc18e7813f16c527521a08415b92cc00a88b7ec95385a44\" returns successfully" May 8 00:39:42.856069 systemd[1]: cri-containerd-1161a28ee59dfb660fc18e7813f16c527521a08415b92cc00a88b7ec95385a44.scope: Deactivated successfully. May 8 00:39:42.870374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1161a28ee59dfb660fc18e7813f16c527521a08415b92cc00a88b7ec95385a44-rootfs.mount: Deactivated successfully. May 8 00:39:42.942709 containerd[1544]: time="2025-05-08T00:39:42.927472862Z" level=info msg="shim disconnected" id=1161a28ee59dfb660fc18e7813f16c527521a08415b92cc00a88b7ec95385a44 namespace=k8s.io May 8 00:39:42.942920 containerd[1544]: time="2025-05-08T00:39:42.942850838Z" level=warning msg="cleaning up after shim disconnected" id=1161a28ee59dfb660fc18e7813f16c527521a08415b92cc00a88b7ec95385a44 namespace=k8s.io May 8 00:39:42.942920 containerd[1544]: time="2025-05-08T00:39:42.942865527Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:39:44.528636 containerd[1544]: time="2025-05-08T00:39:44.528591829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:44.529089 containerd[1544]: time="2025-05-08T00:39:44.529049421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 8 00:39:44.530024 containerd[1544]: time="2025-05-08T00:39:44.529330798Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:44.530317 containerd[1544]: time="2025-05-08T00:39:44.530302507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:44.530793 containerd[1544]: time="2025-05-08T00:39:44.530779832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size 
\"31919484\" in 1.799183394s" May 8 00:39:44.530840 containerd[1544]: time="2025-05-08T00:39:44.530832071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 8 00:39:44.531537 containerd[1544]: time="2025-05-08T00:39:44.531528434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 8 00:39:44.540616 containerd[1544]: time="2025-05-08T00:39:44.540598076Z" level=info msg="CreateContainer within sandbox \"887c615cbabd0b97fd5067d3974d3940e89a3cda0efb3468394358a8268e3733\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 8 00:39:44.546345 containerd[1544]: time="2025-05-08T00:39:44.546316059Z" level=info msg="CreateContainer within sandbox \"887c615cbabd0b97fd5067d3974d3940e89a3cda0efb3468394358a8268e3733\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"50b161342f94d3b9b9d56b8931295047bacc4f0b7d228f50b29b6cb9fbf0bc24\"" May 8 00:39:44.548068 containerd[1544]: time="2025-05-08T00:39:44.547129000Z" level=info msg="StartContainer for \"50b161342f94d3b9b9d56b8931295047bacc4f0b7d228f50b29b6cb9fbf0bc24\"" May 8 00:39:44.569018 systemd[1]: Started sshd@7-139.178.70.100:22-85.208.84.5:20354.service - OpenSSH per-connection server daemon (85.208.84.5:20354). May 8 00:39:44.571877 systemd[1]: Started cri-containerd-50b161342f94d3b9b9d56b8931295047bacc4f0b7d228f50b29b6cb9fbf0bc24.scope - libcontainer container 50b161342f94d3b9b9d56b8931295047bacc4f0b7d228f50b29b6cb9fbf0bc24. May 8 00:39:44.614521 containerd[1544]: time="2025-05-08T00:39:44.614496249Z" level=info msg="StartContainer for \"50b161342f94d3b9b9d56b8931295047bacc4f0b7d228f50b29b6cb9fbf0bc24\" returns successfully" May 8 00:39:44.774746 kubelet[2724]: E0508 00:39:44.774707 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xqv5x" podUID="91493713-d7eb-4156-9aba-7e866dca9c56" May 8 00:39:44.833578 kubelet[2724]: I0508 00:39:44.832818 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-844b4d9c87-rbvhl" podStartSLOduration=3.42432424 podStartE2EDuration="6.832805661s" podCreationTimestamp="2025-05-08 00:39:38 +0000 UTC" firstStartedPulling="2025-05-08 00:39:41.122838066 +0000 UTC m=+13.472464665" lastFinishedPulling="2025-05-08 00:39:44.531319487 +0000 UTC m=+16.880946086" observedRunningTime="2025-05-08 00:39:44.832650719 +0000 UTC m=+17.182277341" watchObservedRunningTime="2025-05-08 00:39:44.832805661 +0000 UTC m=+17.182432269" May 8 00:39:45.543607 sshd[3395]: Invalid user user from 85.208.84.5 port 20354 May 8 00:39:45.748990 sshd[3395]: Connection closed by invalid user user 85.208.84.5 port 20354 [preauth] May 8 00:39:45.750723 systemd[1]: sshd@7-139.178.70.100:22-85.208.84.5:20354.service: Deactivated successfully. 
May 8 00:39:45.827823 kubelet[2724]: I0508 00:39:45.827398 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:39:46.775983 kubelet[2724]: E0508 00:39:46.774852 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xqv5x" podUID="91493713-d7eb-4156-9aba-7e866dca9c56" May 8 00:39:47.747884 containerd[1544]: time="2025-05-08T00:39:47.747851859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:47.752438 containerd[1544]: time="2025-05-08T00:39:47.752287030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 8 00:39:47.757777 containerd[1544]: time="2025-05-08T00:39:47.757543742Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:47.762717 containerd[1544]: time="2025-05-08T00:39:47.762683676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:47.763595 containerd[1544]: time="2025-05-08T00:39:47.763535855Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 3.231876511s" May 8 00:39:47.763595 containerd[1544]: time="2025-05-08T00:39:47.763552801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 8 00:39:47.765232 containerd[1544]: time="2025-05-08T00:39:47.765080426Z" level=info msg="CreateContainer within sandbox \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:39:47.850200 containerd[1544]: time="2025-05-08T00:39:47.850164032Z" level=info msg="CreateContainer within sandbox \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c536c3d7a8e7f5a424785239dd3f132067fef753cb45a5ad263616df93e419a7\"" May 8 00:39:47.850862 containerd[1544]: time="2025-05-08T00:39:47.850724832Z" level=info msg="StartContainer for \"c536c3d7a8e7f5a424785239dd3f132067fef753cb45a5ad263616df93e419a7\"" May 8 00:39:47.899093 systemd[1]: Started cri-containerd-c536c3d7a8e7f5a424785239dd3f132067fef753cb45a5ad263616df93e419a7.scope - libcontainer container c536c3d7a8e7f5a424785239dd3f132067fef753cb45a5ad263616df93e419a7. 
May 8 00:39:47.915800 containerd[1544]: time="2025-05-08T00:39:47.915771290Z" level=info msg="StartContainer for \"c536c3d7a8e7f5a424785239dd3f132067fef753cb45a5ad263616df93e419a7\" returns successfully" May 8 00:39:48.801693 kubelet[2724]: E0508 00:39:48.801653 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xqv5x" podUID="91493713-d7eb-4156-9aba-7e866dca9c56" May 8 00:39:49.528765 systemd[1]: cri-containerd-c536c3d7a8e7f5a424785239dd3f132067fef753cb45a5ad263616df93e419a7.scope: Deactivated successfully. May 8 00:39:49.548939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c536c3d7a8e7f5a424785239dd3f132067fef753cb45a5ad263616df93e419a7-rootfs.mount: Deactivated successfully. May 8 00:39:49.611889 kubelet[2724]: I0508 00:39:49.611345 2724 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 8 00:39:49.741667 containerd[1544]: time="2025-05-08T00:39:49.737043437Z" level=info msg="shim disconnected" id=c536c3d7a8e7f5a424785239dd3f132067fef753cb45a5ad263616df93e419a7 namespace=k8s.io May 8 00:39:49.741667 containerd[1544]: time="2025-05-08T00:39:49.741327312Z" level=warning msg="cleaning up after shim disconnected" id=c536c3d7a8e7f5a424785239dd3f132067fef753cb45a5ad263616df93e419a7 namespace=k8s.io May 8 00:39:49.741667 containerd[1544]: time="2025-05-08T00:39:49.741342176Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:39:49.791452 systemd[1]: Created slice kubepods-besteffort-pod5c48551f_d382_48fc_8b2c_4049dd697e7b.slice - libcontainer container kubepods-besteffort-pod5c48551f_d382_48fc_8b2c_4049dd697e7b.slice. May 8 00:39:49.797804 systemd[1]: Created slice kubepods-besteffort-podb0cb28c6_493c_4ec2_95eb_08870a5239cb.slice - libcontainer container kubepods-besteffort-podb0cb28c6_493c_4ec2_95eb_08870a5239cb.slice. May 8 00:39:49.801404 systemd[1]: Created slice kubepods-burstable-pod46106b43_6a82_4ed1_a0c6_7d6292ac8f6f.slice - libcontainer container kubepods-burstable-pod46106b43_6a82_4ed1_a0c6_7d6292ac8f6f.slice. May 8 00:39:49.806759 systemd[1]: Created slice kubepods-besteffort-pod99de4f5c_61dc_4b3d_a2ad_08772fa94419.slice - libcontainer container kubepods-besteffort-pod99de4f5c_61dc_4b3d_a2ad_08772fa94419.slice. May 8 00:39:49.809917 systemd[1]: Created slice kubepods-burstable-pod7cd62a62_c3b8_4d08_90a0_b53c0511c1f5.slice - libcontainer container kubepods-burstable-pod7cd62a62_c3b8_4d08_90a0_b53c0511c1f5.slice. 
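The Created slice lines above show the kubelet's systemd cgroup driver creating one transient slice per pod, named from the pod's QoS class (besteffort, burstable) plus its UID with dashes mapped to underscores, since systemd reserves "-" as the slice hierarchy separator. A toy reconstruction of that naming, inferred from the names in the log rather than from kubelet source:

package main

import (
	"fmt"
	"strings"
)

// podSlice rebuilds a pod slice name the way the log entries suggest:
// kubepods-<qos>-pod<uid with "-" replaced by "_">.slice
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("besteffort", "5c48551f-d382-48fc-8b2c-4049dd697e7b"))
	// kubepods-besteffort-pod5c48551f_d382_48fc_8b2c_4049dd697e7b.slice
}

The output matches the slice created for the calico-apiserver pod whose UID, 5c48551f-d382-48fc-8b2c-4049dd697e7b, recurs in the volume-attach records that follow.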
May 8 00:39:49.832336 kubelet[2724]: I0508 00:39:49.832307 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdcdx\" (UniqueName: \"kubernetes.io/projected/b0cb28c6-493c-4ec2-95eb-08870a5239cb-kube-api-access-rdcdx\") pod \"calico-kube-controllers-bcbf44d57-tjjwh\" (UID: \"b0cb28c6-493c-4ec2-95eb-08870a5239cb\") " pod="calico-system/calico-kube-controllers-bcbf44d57-tjjwh" May 8 00:39:49.832336 kubelet[2724]: I0508 00:39:49.832333 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46106b43-6a82-4ed1-a0c6-7d6292ac8f6f-config-volume\") pod \"coredns-668d6bf9bc-dbhz2\" (UID: \"46106b43-6a82-4ed1-a0c6-7d6292ac8f6f\") " pod="kube-system/coredns-668d6bf9bc-dbhz2" May 8 00:39:49.832336 kubelet[2724]: I0508 00:39:49.832344 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0cb28c6-493c-4ec2-95eb-08870a5239cb-tigera-ca-bundle\") pod \"calico-kube-controllers-bcbf44d57-tjjwh\" (UID: \"b0cb28c6-493c-4ec2-95eb-08870a5239cb\") " pod="calico-system/calico-kube-controllers-bcbf44d57-tjjwh" May 8 00:39:49.835939 kubelet[2724]: I0508 00:39:49.832356 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5c48551f-d382-48fc-8b2c-4049dd697e7b-calico-apiserver-certs\") pod \"calico-apiserver-6c849db6d6-wzfw7\" (UID: \"5c48551f-d382-48fc-8b2c-4049dd697e7b\") " pod="calico-apiserver/calico-apiserver-6c849db6d6-wzfw7" May 8 00:39:49.835939 kubelet[2724]: I0508 00:39:49.832366 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvnkb\" (UniqueName: \"kubernetes.io/projected/7cd62a62-c3b8-4d08-90a0-b53c0511c1f5-kube-api-access-qvnkb\") pod \"coredns-668d6bf9bc-jq978\" (UID: \"7cd62a62-c3b8-4d08-90a0-b53c0511c1f5\") " pod="kube-system/coredns-668d6bf9bc-jq978" May 8 00:39:49.835939 kubelet[2724]: I0508 00:39:49.832375 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cd62a62-c3b8-4d08-90a0-b53c0511c1f5-config-volume\") pod \"coredns-668d6bf9bc-jq978\" (UID: \"7cd62a62-c3b8-4d08-90a0-b53c0511c1f5\") " pod="kube-system/coredns-668d6bf9bc-jq978" May 8 00:39:49.835939 kubelet[2724]: I0508 00:39:49.832385 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmkdw\" (UniqueName: \"kubernetes.io/projected/99de4f5c-61dc-4b3d-a2ad-08772fa94419-kube-api-access-hmkdw\") pod \"calico-apiserver-6c849db6d6-n9bs2\" (UID: \"99de4f5c-61dc-4b3d-a2ad-08772fa94419\") " pod="calico-apiserver/calico-apiserver-6c849db6d6-n9bs2" May 8 00:39:49.835939 kubelet[2724]: I0508 00:39:49.832402 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z57kv\" (UniqueName: \"kubernetes.io/projected/46106b43-6a82-4ed1-a0c6-7d6292ac8f6f-kube-api-access-z57kv\") pod \"coredns-668d6bf9bc-dbhz2\" (UID: \"46106b43-6a82-4ed1-a0c6-7d6292ac8f6f\") " pod="kube-system/coredns-668d6bf9bc-dbhz2" May 8 00:39:49.840633 kubelet[2724]: I0508 00:39:49.832411 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" 
(UniqueName: \"kubernetes.io/secret/99de4f5c-61dc-4b3d-a2ad-08772fa94419-calico-apiserver-certs\") pod \"calico-apiserver-6c849db6d6-n9bs2\" (UID: \"99de4f5c-61dc-4b3d-a2ad-08772fa94419\") " pod="calico-apiserver/calico-apiserver-6c849db6d6-n9bs2" May 8 00:39:49.840633 kubelet[2724]: I0508 00:39:49.832420 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78vrz\" (UniqueName: \"kubernetes.io/projected/5c48551f-d382-48fc-8b2c-4049dd697e7b-kube-api-access-78vrz\") pod \"calico-apiserver-6c849db6d6-wzfw7\" (UID: \"5c48551f-d382-48fc-8b2c-4049dd697e7b\") " pod="calico-apiserver/calico-apiserver-6c849db6d6-wzfw7" May 8 00:39:49.889001 containerd[1544]: time="2025-05-08T00:39:49.888965217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 8 00:39:50.100096 containerd[1544]: time="2025-05-08T00:39:50.099807746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c849db6d6-wzfw7,Uid:5c48551f-d382-48fc-8b2c-4049dd697e7b,Namespace:calico-apiserver,Attempt:0,}" May 8 00:39:50.100096 containerd[1544]: time="2025-05-08T00:39:50.099852681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bcbf44d57-tjjwh,Uid:b0cb28c6-493c-4ec2-95eb-08870a5239cb,Namespace:calico-system,Attempt:0,}" May 8 00:39:50.105625 containerd[1544]: time="2025-05-08T00:39:50.105504496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dbhz2,Uid:46106b43-6a82-4ed1-a0c6-7d6292ac8f6f,Namespace:kube-system,Attempt:0,}" May 8 00:39:50.109229 containerd[1544]: time="2025-05-08T00:39:50.109084410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c849db6d6-n9bs2,Uid:99de4f5c-61dc-4b3d-a2ad-08772fa94419,Namespace:calico-apiserver,Attempt:0,}" May 8 00:39:50.111905 containerd[1544]: time="2025-05-08T00:39:50.111880504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jq978,Uid:7cd62a62-c3b8-4d08-90a0-b53c0511c1f5,Namespace:kube-system,Attempt:0,}" May 8 00:39:50.590444 containerd[1544]: time="2025-05-08T00:39:50.590390448Z" level=error msg="Failed to destroy network for sandbox \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.591779 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647-shm.mount: Deactivated successfully. 
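Every sandbox failure below bottoms out in the same stat: Calico's CNI plugin refuses to wire up a pod until /var/lib/calico/nodename exists, and that file only appears once the calico/node container (whose image is still being pulled above) is running with /var/lib/calico/ mounted. A hedged Go sketch of that readiness gate (the function is illustrative, not Calico source):

package main

import (
	"fmt"
	"os"
)

// nodenameReady mimics the gate implied by the errors below: the CNI
// plugin reads /var/lib/calico/nodename, a file calico/node writes once
// it has started and mounted /var/lib/calico/.
func nodenameReady() (string, error) {
	b, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		return "", fmt.Errorf("stat /var/lib/calico/nodename: %w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return string(b), nil
}

func main() {
	name, err := nodenameReady()
	if err != nil {
		fmt.Fprintln(os.Stderr, err) // same failure mode as the sandbox errors below
		os.Exit(1)
	}
	fmt.Println("CNI ready on node", name)
}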
May 8 00:39:50.598842 containerd[1544]: time="2025-05-08T00:39:50.592254133Z" level=error msg="encountered an error cleaning up failed sandbox \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.598842 containerd[1544]: time="2025-05-08T00:39:50.592292484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c849db6d6-n9bs2,Uid:99de4f5c-61dc-4b3d-a2ad-08772fa94419,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.601050 containerd[1544]: time="2025-05-08T00:39:50.601011258Z" level=error msg="Failed to destroy network for sandbox \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.603389 containerd[1544]: time="2025-05-08T00:39:50.601273443Z" level=error msg="encountered an error cleaning up failed sandbox \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.603389 containerd[1544]: time="2025-05-08T00:39:50.601310052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dbhz2,Uid:46106b43-6a82-4ed1-a0c6-7d6292ac8f6f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.603389 containerd[1544]: time="2025-05-08T00:39:50.601394352Z" level=error msg="Failed to destroy network for sandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.603389 containerd[1544]: time="2025-05-08T00:39:50.602984572Z" level=error msg="encountered an error cleaning up failed sandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.603389 containerd[1544]: time="2025-05-08T00:39:50.603008929Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bcbf44d57-tjjwh,Uid:b0cb28c6-493c-4ec2-95eb-08870a5239cb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.603389 containerd[1544]: time="2025-05-08T00:39:50.603058807Z" level=error msg="Failed to destroy network for sandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.603389 containerd[1544]: time="2025-05-08T00:39:50.603205974Z" level=error msg="encountered an error cleaning up failed sandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.603389 containerd[1544]: time="2025-05-08T00:39:50.603224641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jq978,Uid:7cd62a62-c3b8-4d08-90a0-b53c0511c1f5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.603389 containerd[1544]: time="2025-05-08T00:39:50.603269268Z" level=error msg="Failed to destroy network for sandbox \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.602562 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03-shm.mount: Deactivated successfully. May 8 00:39:50.603621 containerd[1544]: time="2025-05-08T00:39:50.603405895Z" level=error msg="encountered an error cleaning up failed sandbox \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.603621 containerd[1544]: time="2025-05-08T00:39:50.603425174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c849db6d6-wzfw7,Uid:5c48551f-d382-48fc-8b2c-4049dd697e7b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.602616 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2-shm.mount: Deactivated successfully. 
May 8 00:39:50.605779 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f-shm.mount: Deactivated successfully. May 8 00:39:50.605849 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09-shm.mount: Deactivated successfully. May 8 00:39:50.627730 kubelet[2724]: E0508 00:39:50.627624 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.628438 kubelet[2724]: E0508 00:39:50.628278 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.630283 kubelet[2724]: E0508 00:39:50.630194 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c849db6d6-wzfw7" May 8 00:39:50.630283 kubelet[2724]: E0508 00:39:50.630222 2724 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c849db6d6-wzfw7" May 8 00:39:50.630283 kubelet[2724]: E0508 00:39:50.630230 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.630283 kubelet[2724]: E0508 00:39:50.630255 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dbhz2" May 8 00:39:50.630440 kubelet[2724]: E0508 00:39:50.630267 2724 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dbhz2" May 8 00:39:50.633189 kubelet[2724]: E0508 00:39:50.632934 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c849db6d6-wzfw7_calico-apiserver(5c48551f-d382-48fc-8b2c-4049dd697e7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c849db6d6-wzfw7_calico-apiserver(5c48551f-d382-48fc-8b2c-4049dd697e7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c849db6d6-wzfw7" podUID="5c48551f-d382-48fc-8b2c-4049dd697e7b" May 8 00:39:50.633189 kubelet[2724]: E0508 00:39:50.630194 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c849db6d6-n9bs2" May 8 00:39:50.633189 kubelet[2724]: E0508 00:39:50.632990 2724 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c849db6d6-n9bs2" May 8 00:39:50.633363 kubelet[2724]: E0508 00:39:50.633011 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c849db6d6-n9bs2_calico-apiserver(99de4f5c-61dc-4b3d-a2ad-08772fa94419)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c849db6d6-n9bs2_calico-apiserver(99de4f5c-61dc-4b3d-a2ad-08772fa94419)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c849db6d6-n9bs2" podUID="99de4f5c-61dc-4b3d-a2ad-08772fa94419" May 8 00:39:50.633363 kubelet[2724]: E0508 00:39:50.633037 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.633363 kubelet[2724]: E0508 00:39:50.633055 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bcbf44d57-tjjwh" May 8 00:39:50.633738 kubelet[2724]: E0508 00:39:50.633063 2724 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bcbf44d57-tjjwh" May 8 00:39:50.633738 kubelet[2724]: E0508 00:39:50.633079 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bcbf44d57-tjjwh_calico-system(b0cb28c6-493c-4ec2-95eb-08870a5239cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bcbf44d57-tjjwh_calico-system(b0cb28c6-493c-4ec2-95eb-08870a5239cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bcbf44d57-tjjwh" podUID="b0cb28c6-493c-4ec2-95eb-08870a5239cb" May 8 00:39:50.633738 kubelet[2724]: E0508 00:39:50.633098 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:50.633834 kubelet[2724]: E0508 00:39:50.633115 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jq978" May 8 00:39:50.633834 kubelet[2724]: E0508 00:39:50.633128 2724 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jq978" May 8 00:39:50.633834 kubelet[2724]: E0508 00:39:50.633152 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jq978_kube-system(7cd62a62-c3b8-4d08-90a0-b53c0511c1f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jq978_kube-system(7cd62a62-c3b8-4d08-90a0-b53c0511c1f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jq978" podUID="7cd62a62-c3b8-4d08-90a0-b53c0511c1f5" May 8 00:39:50.633961 kubelet[2724]: E0508 00:39:50.633623 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dbhz2_kube-system(46106b43-6a82-4ed1-a0c6-7d6292ac8f6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dbhz2_kube-system(46106b43-6a82-4ed1-a0c6-7d6292ac8f6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dbhz2" podUID="46106b43-6a82-4ed1-a0c6-7d6292ac8f6f" May 8 00:39:50.922497 systemd[1]: Created slice kubepods-besteffort-pod91493713_d7eb_4156_9aba_7e866dca9c56.slice - libcontainer container kubepods-besteffort-pod91493713_d7eb_4156_9aba_7e866dca9c56.slice. May 8 00:39:50.934767 kubelet[2724]: I0508 00:39:50.934692 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" May 8 00:39:50.935852 containerd[1544]: time="2025-05-08T00:39:50.935775237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xqv5x,Uid:91493713-d7eb-4156-9aba-7e866dca9c56,Namespace:calico-system,Attempt:0,}" May 8 00:39:50.936663 kubelet[2724]: I0508 00:39:50.936645 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" May 8 00:39:50.945094 containerd[1544]: time="2025-05-08T00:39:50.944509475Z" level=info msg="StopPodSandbox for \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\"" May 8 00:39:50.945094 containerd[1544]: time="2025-05-08T00:39:50.944984175Z" level=info msg="StopPodSandbox for \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\"" May 8 00:39:50.947136 containerd[1544]: time="2025-05-08T00:39:50.947121989Z" level=info msg="Ensure that sandbox 53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2 in task-service has been cleanup successfully" May 8 00:39:50.951756 containerd[1544]: time="2025-05-08T00:39:50.951664452Z" level=info msg="Ensure that sandbox 3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03 in task-service has been cleanup successfully" May 8 00:39:50.952873 kubelet[2724]: I0508 00:39:50.952808 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" May 8 00:39:50.953245 containerd[1544]: time="2025-05-08T00:39:50.953151626Z" level=info msg="StopPodSandbox for \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\"" May 8 00:39:50.953467 containerd[1544]: time="2025-05-08T00:39:50.953368768Z" level=info msg="Ensure that sandbox 9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f in task-service has been cleanup successfully" May 8 00:39:50.955022 kubelet[2724]: I0508 00:39:50.955011 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" 
May 8 00:39:50.955630 containerd[1544]: time="2025-05-08T00:39:50.955547764Z" level=info msg="StopPodSandbox for \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\"" May 8 00:39:50.956045 containerd[1544]: time="2025-05-08T00:39:50.955713784Z" level=info msg="Ensure that sandbox 454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09 in task-service has been cleanup successfully" May 8 00:39:50.957549 kubelet[2724]: I0508 00:39:50.957435 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" May 8 00:39:50.958552 containerd[1544]: time="2025-05-08T00:39:50.958530527Z" level=info msg="StopPodSandbox for \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\"" May 8 00:39:50.958708 containerd[1544]: time="2025-05-08T00:39:50.958637266Z" level=info msg="Ensure that sandbox d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647 in task-service has been cleanup successfully" May 8 00:39:51.008358 containerd[1544]: time="2025-05-08T00:39:51.008326494Z" level=error msg="StopPodSandbox for \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\" failed" error="failed to destroy network for sandbox \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:51.008994 kubelet[2724]: E0508 00:39:51.008740 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" May 8 00:39:51.008994 kubelet[2724]: E0508 00:39:51.008775 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647"} May 8 00:39:51.008994 kubelet[2724]: E0508 00:39:51.008818 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"99de4f5c-61dc-4b3d-a2ad-08772fa94419\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:39:51.008994 kubelet[2724]: E0508 00:39:51.008831 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"99de4f5c-61dc-4b3d-a2ad-08772fa94419\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c849db6d6-n9bs2" podUID="99de4f5c-61dc-4b3d-a2ad-08772fa94419" May 8 00:39:51.009533 containerd[1544]: 
time="2025-05-08T00:39:51.009511458Z" level=error msg="StopPodSandbox for \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\" failed" error="failed to destroy network for sandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:51.009628 kubelet[2724]: E0508 00:39:51.009609 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" May 8 00:39:51.009628 kubelet[2724]: E0508 00:39:51.009631 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2"} May 8 00:39:51.009685 kubelet[2724]: E0508 00:39:51.009647 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0cb28c6-493c-4ec2-95eb-08870a5239cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:39:51.009685 kubelet[2724]: E0508 00:39:51.009657 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0cb28c6-493c-4ec2-95eb-08870a5239cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bcbf44d57-tjjwh" podUID="b0cb28c6-493c-4ec2-95eb-08870a5239cb" May 8 00:39:51.012216 containerd[1544]: time="2025-05-08T00:39:51.012195258Z" level=error msg="StopPodSandbox for \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\" failed" error="failed to destroy network for sandbox \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:51.018769 kubelet[2724]: E0508 00:39:51.012292 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" May 8 00:39:51.018769 kubelet[2724]: E0508 00:39:51.012313 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09"} May 8 00:39:51.018769 kubelet[2724]: E0508 00:39:51.012330 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5c48551f-d382-48fc-8b2c-4049dd697e7b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:39:51.018769 kubelet[2724]: E0508 00:39:51.012346 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5c48551f-d382-48fc-8b2c-4049dd697e7b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c849db6d6-wzfw7" podUID="5c48551f-d382-48fc-8b2c-4049dd697e7b" May 8 00:39:51.019428 containerd[1544]: time="2025-05-08T00:39:51.019289561Z" level=error msg="StopPodSandbox for \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\" failed" error="failed to destroy network for sandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:51.019523 kubelet[2724]: E0508 00:39:51.019374 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" May 8 00:39:51.019523 kubelet[2724]: E0508 00:39:51.019442 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f"} May 8 00:39:51.019523 kubelet[2724]: E0508 00:39:51.019457 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7cd62a62-c3b8-4d08-90a0-b53c0511c1f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:39:51.019523 kubelet[2724]: E0508 00:39:51.019468 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7cd62a62-c3b8-4d08-90a0-b53c0511c1f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jq978" podUID="7cd62a62-c3b8-4d08-90a0-b53c0511c1f5" May 8 00:39:51.019763 kubelet[2724]: E0508 00:39:51.019589 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" May 8 00:39:51.019763 kubelet[2724]: E0508 00:39:51.019605 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03"} May 8 00:39:51.019763 kubelet[2724]: E0508 00:39:51.019618 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"46106b43-6a82-4ed1-a0c6-7d6292ac8f6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:39:51.019763 kubelet[2724]: E0508 00:39:51.019630 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"46106b43-6a82-4ed1-a0c6-7d6292ac8f6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dbhz2" podUID="46106b43-6a82-4ed1-a0c6-7d6292ac8f6f" May 8 00:39:51.028019 containerd[1544]: time="2025-05-08T00:39:51.019522380Z" level=error msg="StopPodSandbox for \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\" failed" error="failed to destroy network for sandbox \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:51.120869 containerd[1544]: time="2025-05-08T00:39:51.120826761Z" level=error msg="Failed to destroy network for sandbox \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:51.121137 containerd[1544]: time="2025-05-08T00:39:51.121114606Z" level=error msg="encountered an error cleaning up failed sandbox \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:51.121208 containerd[1544]: 
time="2025-05-08T00:39:51.121158158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xqv5x,Uid:91493713-d7eb-4156-9aba-7e866dca9c56,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:51.121355 kubelet[2724]: E0508 00:39:51.121323 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:51.121388 kubelet[2724]: E0508 00:39:51.121374 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xqv5x" May 8 00:39:51.121409 kubelet[2724]: E0508 00:39:51.121387 2724 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xqv5x" May 8 00:39:51.121447 kubelet[2724]: E0508 00:39:51.121420 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xqv5x_calico-system(91493713-d7eb-4156-9aba-7e866dca9c56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xqv5x_calico-system(91493713-d7eb-4156-9aba-7e866dca9c56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xqv5x" podUID="91493713-d7eb-4156-9aba-7e866dca9c56" May 8 00:39:51.549994 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81-shm.mount: Deactivated successfully. 
May 8 00:39:52.089675 kubelet[2724]: I0508 00:39:52.089265 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" May 8 00:39:52.092931 containerd[1544]: time="2025-05-08T00:39:52.092905333Z" level=info msg="StopPodSandbox for \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\"" May 8 00:39:52.093106 containerd[1544]: time="2025-05-08T00:39:52.093024717Z" level=info msg="Ensure that sandbox 4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81 in task-service has been cleanup successfully" May 8 00:39:52.128084 containerd[1544]: time="2025-05-08T00:39:52.127801185Z" level=error msg="StopPodSandbox for \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\" failed" error="failed to destroy network for sandbox \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:52.128574 kubelet[2724]: E0508 00:39:52.127935 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" May 8 00:39:52.128574 kubelet[2724]: E0508 00:39:52.128001 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81"} May 8 00:39:52.128574 kubelet[2724]: E0508 00:39:52.128023 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91493713-d7eb-4156-9aba-7e866dca9c56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:39:52.128574 kubelet[2724]: E0508 00:39:52.128055 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91493713-d7eb-4156-9aba-7e866dca9c56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xqv5x" podUID="91493713-d7eb-4156-9aba-7e866dca9c56" May 8 00:39:55.551221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount349353876.mount: Deactivated successfully. 
May 8 00:39:55.837655 containerd[1544]: time="2025-05-08T00:39:55.814311519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 8 00:39:55.842365 containerd[1544]: time="2025-05-08T00:39:55.842333088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:55.879132 containerd[1544]: time="2025-05-08T00:39:55.878184001Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:55.921896 containerd[1544]: time="2025-05-08T00:39:55.921833925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:55.925278 containerd[1544]: time="2025-05-08T00:39:55.923869734Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 6.033477363s" May 8 00:39:55.925278 containerd[1544]: time="2025-05-08T00:39:55.923892563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 8 00:39:56.115805 containerd[1544]: time="2025-05-08T00:39:56.115669125Z" level=info msg="CreateContainer within sandbox \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:39:56.172488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount708366402.mount: Deactivated successfully. May 8 00:39:56.202177 containerd[1544]: time="2025-05-08T00:39:56.202149663Z" level=info msg="CreateContainer within sandbox \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c4993cdaf14b8e00243049a63f5b4ce918a9e891c423ed0c2d5297d63ce0e8ac\"" May 8 00:39:56.216705 containerd[1544]: time="2025-05-08T00:39:56.216076791Z" level=info msg="StartContainer for \"c4993cdaf14b8e00243049a63f5b4ce918a9e891c423ed0c2d5297d63ce0e8ac\"" May 8 00:39:56.484075 systemd[1]: Started cri-containerd-c4993cdaf14b8e00243049a63f5b4ce918a9e891c423ed0c2d5297d63ce0e8ac.scope - libcontainer container c4993cdaf14b8e00243049a63f5b4ce918a9e891c423ed0c2d5297d63ce0e8ac. May 8 00:39:56.506210 containerd[1544]: time="2025-05-08T00:39:56.506144852Z" level=info msg="StartContainer for \"c4993cdaf14b8e00243049a63f5b4ce918a9e891c423ed0c2d5297d63ce0e8ac\" returns successfully" May 8 00:39:56.652908 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 00:39:56.667221 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 8 00:39:56.691585 systemd[1]: cri-containerd-c4993cdaf14b8e00243049a63f5b4ce918a9e891c423ed0c2d5297d63ce0e8ac.scope: Deactivated successfully. May 8 00:39:56.706824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4993cdaf14b8e00243049a63f5b4ce918a9e891c423ed0c2d5297d63ce0e8ac-rootfs.mount: Deactivated successfully.
May 8 00:39:57.112849 containerd[1544]: time="2025-05-08T00:39:57.108374577Z" level=info msg="shim disconnected" id=c4993cdaf14b8e00243049a63f5b4ce918a9e891c423ed0c2d5297d63ce0e8ac namespace=k8s.io May 8 00:39:57.112849 containerd[1544]: time="2025-05-08T00:39:57.112609589Z" level=warning msg="cleaning up after shim disconnected" id=c4993cdaf14b8e00243049a63f5b4ce918a9e891c423ed0c2d5297d63ce0e8ac namespace=k8s.io May 8 00:39:57.112849 containerd[1544]: time="2025-05-08T00:39:57.112624971Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:39:57.250584 kubelet[2724]: I0508 00:39:57.250410 2724 scope.go:117] "RemoveContainer" containerID="c4993cdaf14b8e00243049a63f5b4ce918a9e891c423ed0c2d5297d63ce0e8ac" May 8 00:39:57.267474 containerd[1544]: time="2025-05-08T00:39:57.267443676Z" level=info msg="CreateContainer within sandbox \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" May 8 00:39:57.280117 containerd[1544]: time="2025-05-08T00:39:57.280089382Z" level=info msg="CreateContainer within sandbox \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"d15f6d755ce812a03f1aee2f2bad7ecbb0e6ef5b959593d435b884774de68278\"" May 8 00:39:57.291003 containerd[1544]: time="2025-05-08T00:39:57.290966999Z" level=info msg="StartContainer for \"d15f6d755ce812a03f1aee2f2bad7ecbb0e6ef5b959593d435b884774de68278\"" May 8 00:39:57.319134 systemd[1]: Started cri-containerd-d15f6d755ce812a03f1aee2f2bad7ecbb0e6ef5b959593d435b884774de68278.scope - libcontainer container d15f6d755ce812a03f1aee2f2bad7ecbb0e6ef5b959593d435b884774de68278. May 8 00:39:57.342507 containerd[1544]: time="2025-05-08T00:39:57.342271100Z" level=info msg="StartContainer for \"d15f6d755ce812a03f1aee2f2bad7ecbb0e6ef5b959593d435b884774de68278\" returns successfully" May 8 00:39:57.397803 systemd[1]: cri-containerd-d15f6d755ce812a03f1aee2f2bad7ecbb0e6ef5b959593d435b884774de68278.scope: Deactivated successfully. May 8 00:39:57.418113 containerd[1544]: time="2025-05-08T00:39:57.418046503Z" level=info msg="shim disconnected" id=d15f6d755ce812a03f1aee2f2bad7ecbb0e6ef5b959593d435b884774de68278 namespace=k8s.io May 8 00:39:57.418113 containerd[1544]: time="2025-05-08T00:39:57.418088706Z" level=warning msg="cleaning up after shim disconnected" id=d15f6d755ce812a03f1aee2f2bad7ecbb0e6ef5b959593d435b884774de68278 namespace=k8s.io May 8 00:39:57.418113 containerd[1544]: time="2025-05-08T00:39:57.418094583Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:39:57.430108 containerd[1544]: time="2025-05-08T00:39:57.430065456Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:39:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:39:57.551446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d15f6d755ce812a03f1aee2f2bad7ecbb0e6ef5b959593d435b884774de68278-rootfs.mount: Deactivated successfully. 
May 8 00:39:58.211176 kubelet[2724]: I0508 00:39:58.211078 2724 scope.go:117] "RemoveContainer" containerID="c4993cdaf14b8e00243049a63f5b4ce918a9e891c423ed0c2d5297d63ce0e8ac" May 8 00:39:58.301836 kubelet[2724]: I0508 00:39:58.301800 2724 scope.go:117] "RemoveContainer" containerID="d15f6d755ce812a03f1aee2f2bad7ecbb0e6ef5b959593d435b884774de68278" May 8 00:39:58.302194 kubelet[2724]: E0508 00:39:58.301971 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-9fk5s_calico-system(f34e9eef-ef24-4654-80e2-7e5383be17b9)\"" pod="calico-system/calico-node-9fk5s" podUID="f34e9eef-ef24-4654-80e2-7e5383be17b9" May 8 00:39:58.304002 containerd[1544]: time="2025-05-08T00:39:58.303694064Z" level=info msg="RemoveContainer for \"c4993cdaf14b8e00243049a63f5b4ce918a9e891c423ed0c2d5297d63ce0e8ac\"" May 8 00:39:58.313150 containerd[1544]: time="2025-05-08T00:39:58.313126944Z" level=info msg="RemoveContainer for \"c4993cdaf14b8e00243049a63f5b4ce918a9e891c423ed0c2d5297d63ce0e8ac\" returns successfully" May 8 00:39:59.177360 kubelet[2724]: I0508 00:39:59.177277 2724 scope.go:117] "RemoveContainer" containerID="d15f6d755ce812a03f1aee2f2bad7ecbb0e6ef5b959593d435b884774de68278" May 8 00:39:59.177689 kubelet[2724]: E0508 00:39:59.177390 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-9fk5s_calico-system(f34e9eef-ef24-4654-80e2-7e5383be17b9)\"" pod="calico-system/calico-node-9fk5s" podUID="f34e9eef-ef24-4654-80e2-7e5383be17b9" May 8 00:40:01.776246 containerd[1544]: time="2025-05-08T00:40:01.776174203Z" level=info msg="StopPodSandbox for \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\"" May 8 00:40:01.795490 containerd[1544]: time="2025-05-08T00:40:01.795453493Z" level=error msg="StopPodSandbox for \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\" failed" error="failed to destroy network for sandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:01.795624 kubelet[2724]: E0508 00:40:01.795595 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" May 8 00:40:01.795821 kubelet[2724]: E0508 00:40:01.795633 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2"} May 8 00:40:01.795821 kubelet[2724]: E0508 00:40:01.795657 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0cb28c6-493c-4ec2-95eb-08870a5239cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:01.795821 kubelet[2724]: E0508 00:40:01.795672 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0cb28c6-493c-4ec2-95eb-08870a5239cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bcbf44d57-tjjwh" podUID="b0cb28c6-493c-4ec2-95eb-08870a5239cb" May 8 00:40:02.776125 containerd[1544]: time="2025-05-08T00:40:02.775730401Z" level=info msg="StopPodSandbox for \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\"" May 8 00:40:02.795734 containerd[1544]: time="2025-05-08T00:40:02.795690976Z" level=error msg="StopPodSandbox for \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\" failed" error="failed to destroy network for sandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:02.796266 kubelet[2724]: E0508 00:40:02.796157 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" May 8 00:40:02.796266 kubelet[2724]: E0508 00:40:02.796202 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f"} May 8 00:40:02.796266 kubelet[2724]: E0508 00:40:02.796227 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7cd62a62-c3b8-4d08-90a0-b53c0511c1f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:02.796266 kubelet[2724]: E0508 00:40:02.796243 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7cd62a62-c3b8-4d08-90a0-b53c0511c1f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jq978" podUID="7cd62a62-c3b8-4d08-90a0-b53c0511c1f5" May 8 00:40:03.777145 containerd[1544]: time="2025-05-08T00:40:03.777007921Z" level=info msg="StopPodSandbox for 
\"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\"" May 8 00:40:03.777690 containerd[1544]: time="2025-05-08T00:40:03.777667479Z" level=info msg="StopPodSandbox for \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\"" May 8 00:40:03.803749 containerd[1544]: time="2025-05-08T00:40:03.803693628Z" level=error msg="StopPodSandbox for \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\" failed" error="failed to destroy network for sandbox \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:03.804259 kubelet[2724]: E0508 00:40:03.804168 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" May 8 00:40:03.804259 kubelet[2724]: E0508 00:40:03.804200 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03"} May 8 00:40:03.804259 kubelet[2724]: E0508 00:40:03.804227 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"46106b43-6a82-4ed1-a0c6-7d6292ac8f6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:03.804259 kubelet[2724]: E0508 00:40:03.804241 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"46106b43-6a82-4ed1-a0c6-7d6292ac8f6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dbhz2" podUID="46106b43-6a82-4ed1-a0c6-7d6292ac8f6f" May 8 00:40:03.832657 kubelet[2724]: E0508 00:40:03.816956 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" May 8 00:40:03.832657 kubelet[2724]: E0508 00:40:03.816988 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647"} May 8 00:40:03.832657 kubelet[2724]: E0508 00:40:03.817008 2724 kuberuntime_manager.go:1146] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"99de4f5c-61dc-4b3d-a2ad-08772fa94419\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:03.832657 kubelet[2724]: E0508 00:40:03.817021 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"99de4f5c-61dc-4b3d-a2ad-08772fa94419\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c849db6d6-n9bs2" podUID="99de4f5c-61dc-4b3d-a2ad-08772fa94419" May 8 00:40:03.845123 containerd[1544]: time="2025-05-08T00:40:03.816795021Z" level=error msg="StopPodSandbox for \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\" failed" error="failed to destroy network for sandbox \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:05.777143 containerd[1544]: time="2025-05-08T00:40:05.776537732Z" level=info msg="StopPodSandbox for \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\"" May 8 00:40:05.777143 containerd[1544]: time="2025-05-08T00:40:05.776636298Z" level=info msg="StopPodSandbox for \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\"" May 8 00:40:05.812089 containerd[1544]: time="2025-05-08T00:40:05.812012285Z" level=error msg="StopPodSandbox for \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\" failed" error="failed to destroy network for sandbox \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:05.815844 containerd[1544]: time="2025-05-08T00:40:05.813381125Z" level=error msg="StopPodSandbox for \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\" failed" error="failed to destroy network for sandbox \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:05.815912 kubelet[2724]: E0508 00:40:05.812345 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" May 8 00:40:05.815912 kubelet[2724]: E0508 00:40:05.812401 2724 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09"} May 8 00:40:05.815912 kubelet[2724]: E0508 00:40:05.812434 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5c48551f-d382-48fc-8b2c-4049dd697e7b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:05.815912 kubelet[2724]: E0508 00:40:05.812466 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5c48551f-d382-48fc-8b2c-4049dd697e7b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c849db6d6-wzfw7" podUID="5c48551f-d382-48fc-8b2c-4049dd697e7b" May 8 00:40:05.816335 kubelet[2724]: E0508 00:40:05.813562 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" May 8 00:40:05.816335 kubelet[2724]: E0508 00:40:05.813607 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81"} May 8 00:40:05.816335 kubelet[2724]: E0508 00:40:05.813629 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91493713-d7eb-4156-9aba-7e866dca9c56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:05.816335 kubelet[2724]: E0508 00:40:05.813648 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91493713-d7eb-4156-9aba-7e866dca9c56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xqv5x" podUID="91493713-d7eb-4156-9aba-7e866dca9c56" May 8 00:40:07.383981 kubelet[2724]: I0508 00:40:07.383717 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:40:13.776718 kubelet[2724]: I0508 00:40:13.776429 2724 scope.go:117] 
"RemoveContainer" containerID="d15f6d755ce812a03f1aee2f2bad7ecbb0e6ef5b959593d435b884774de68278" May 8 00:40:13.783422 containerd[1544]: time="2025-05-08T00:40:13.783389237Z" level=info msg="CreateContainer within sandbox \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" May 8 00:40:13.810711 containerd[1544]: time="2025-05-08T00:40:13.810681393Z" level=info msg="CreateContainer within sandbox \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"c06fe015afe6119dc7075206f7f57107a0db02cf26b17f058649dd8e41be9012\"" May 8 00:40:13.811072 containerd[1544]: time="2025-05-08T00:40:13.811039442Z" level=info msg="StartContainer for \"c06fe015afe6119dc7075206f7f57107a0db02cf26b17f058649dd8e41be9012\"" May 8 00:40:13.836044 systemd[1]: Started cri-containerd-c06fe015afe6119dc7075206f7f57107a0db02cf26b17f058649dd8e41be9012.scope - libcontainer container c06fe015afe6119dc7075206f7f57107a0db02cf26b17f058649dd8e41be9012. May 8 00:40:13.852596 containerd[1544]: time="2025-05-08T00:40:13.852568102Z" level=info msg="StartContainer for \"c06fe015afe6119dc7075206f7f57107a0db02cf26b17f058649dd8e41be9012\" returns successfully" May 8 00:40:13.923596 systemd[1]: cri-containerd-c06fe015afe6119dc7075206f7f57107a0db02cf26b17f058649dd8e41be9012.scope: Deactivated successfully. May 8 00:40:13.937744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c06fe015afe6119dc7075206f7f57107a0db02cf26b17f058649dd8e41be9012-rootfs.mount: Deactivated successfully. May 8 00:40:13.976399 containerd[1544]: time="2025-05-08T00:40:13.976345571Z" level=info msg="shim disconnected" id=c06fe015afe6119dc7075206f7f57107a0db02cf26b17f058649dd8e41be9012 namespace=k8s.io May 8 00:40:13.976399 containerd[1544]: time="2025-05-08T00:40:13.976392881Z" level=warning msg="cleaning up after shim disconnected" id=c06fe015afe6119dc7075206f7f57107a0db02cf26b17f058649dd8e41be9012 namespace=k8s.io May 8 00:40:13.976399 containerd[1544]: time="2025-05-08T00:40:13.976404748Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:40:14.226041 kubelet[2724]: I0508 00:40:14.226018 2724 scope.go:117] "RemoveContainer" containerID="d15f6d755ce812a03f1aee2f2bad7ecbb0e6ef5b959593d435b884774de68278" May 8 00:40:14.226270 kubelet[2724]: I0508 00:40:14.226258 2724 scope.go:117] "RemoveContainer" containerID="c06fe015afe6119dc7075206f7f57107a0db02cf26b17f058649dd8e41be9012" May 8 00:40:14.226544 kubelet[2724]: E0508 00:40:14.226343 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-9fk5s_calico-system(f34e9eef-ef24-4654-80e2-7e5383be17b9)\"" pod="calico-system/calico-node-9fk5s" podUID="f34e9eef-ef24-4654-80e2-7e5383be17b9" May 8 00:40:14.227011 containerd[1544]: time="2025-05-08T00:40:14.226996611Z" level=info msg="RemoveContainer for \"d15f6d755ce812a03f1aee2f2bad7ecbb0e6ef5b959593d435b884774de68278\"" May 8 00:40:14.236161 containerd[1544]: time="2025-05-08T00:40:14.236105277Z" level=info msg="RemoveContainer for \"d15f6d755ce812a03f1aee2f2bad7ecbb0e6ef5b959593d435b884774de68278\" returns successfully" May 8 00:40:14.775620 containerd[1544]: time="2025-05-08T00:40:14.775426045Z" level=info msg="StopPodSandbox for \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\"" May 8 00:40:14.800644 
containerd[1544]: time="2025-05-08T00:40:14.800602881Z" level=error msg="StopPodSandbox for \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\" failed" error="failed to destroy network for sandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.801147 kubelet[2724]: E0508 00:40:14.801049 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" May 8 00:40:14.801147 kubelet[2724]: E0508 00:40:14.801085 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2"} May 8 00:40:14.801147 kubelet[2724]: E0508 00:40:14.801108 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0cb28c6-493c-4ec2-95eb-08870a5239cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:14.801147 kubelet[2724]: E0508 00:40:14.801122 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0cb28c6-493c-4ec2-95eb-08870a5239cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bcbf44d57-tjjwh" podUID="b0cb28c6-493c-4ec2-95eb-08870a5239cb" May 8 00:40:15.775815 containerd[1544]: time="2025-05-08T00:40:15.775784940Z" level=info msg="StopPodSandbox for \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\"" May 8 00:40:15.793677 containerd[1544]: time="2025-05-08T00:40:15.793638699Z" level=error msg="StopPodSandbox for \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\" failed" error="failed to destroy network for sandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.793974 kubelet[2724]: E0508 00:40:15.793782 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" May 8 00:40:15.793974 kubelet[2724]: E0508 00:40:15.793829 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f"} May 8 00:40:15.793974 kubelet[2724]: E0508 00:40:15.793853 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7cd62a62-c3b8-4d08-90a0-b53c0511c1f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:15.793974 kubelet[2724]: E0508 00:40:15.793869 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7cd62a62-c3b8-4d08-90a0-b53c0511c1f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jq978" podUID="7cd62a62-c3b8-4d08-90a0-b53c0511c1f5" May 8 00:40:18.776650 containerd[1544]: time="2025-05-08T00:40:18.775940295Z" level=info msg="StopPodSandbox for \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\"" May 8 00:40:18.777328 containerd[1544]: time="2025-05-08T00:40:18.777110087Z" level=info msg="StopPodSandbox for \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\"" May 8 00:40:18.798160 containerd[1544]: time="2025-05-08T00:40:18.798122821Z" level=error msg="StopPodSandbox for \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\" failed" error="failed to destroy network for sandbox \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:18.798425 kubelet[2724]: E0508 00:40:18.798342 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" May 8 00:40:18.798425 kubelet[2724]: E0508 00:40:18.798379 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81"} May 8 00:40:18.798425 kubelet[2724]: E0508 00:40:18.798401 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91493713-d7eb-4156-9aba-7e866dca9c56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:18.798749 kubelet[2724]: E0508 00:40:18.798418 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91493713-d7eb-4156-9aba-7e866dca9c56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xqv5x" podUID="91493713-d7eb-4156-9aba-7e866dca9c56" May 8 00:40:18.799331 containerd[1544]: time="2025-05-08T00:40:18.799265519Z" level=error msg="StopPodSandbox for \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\" failed" error="failed to destroy network for sandbox \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:18.799400 kubelet[2724]: E0508 00:40:18.799372 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" May 8 00:40:18.799400 kubelet[2724]: E0508 00:40:18.799392 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647"} May 8 00:40:18.799464 kubelet[2724]: E0508 00:40:18.799408 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"99de4f5c-61dc-4b3d-a2ad-08772fa94419\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:18.799464 kubelet[2724]: E0508 00:40:18.799421 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"99de4f5c-61dc-4b3d-a2ad-08772fa94419\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c849db6d6-n9bs2" podUID="99de4f5c-61dc-4b3d-a2ad-08772fa94419" May 8 00:40:19.241385 kubelet[2724]: I0508 00:40:19.241353 2724 scope.go:117] "RemoveContainer" containerID="c06fe015afe6119dc7075206f7f57107a0db02cf26b17f058649dd8e41be9012" May 8 00:40:19.241500 kubelet[2724]: E0508 00:40:19.241474 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-9fk5s_calico-system(f34e9eef-ef24-4654-80e2-7e5383be17b9)\"" pod="calico-system/calico-node-9fk5s" podUID="f34e9eef-ef24-4654-80e2-7e5383be17b9" May 8 00:40:19.775524 containerd[1544]: time="2025-05-08T00:40:19.775337621Z" level=info msg="StopPodSandbox for \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\"" May 8 00:40:19.791846 containerd[1544]: time="2025-05-08T00:40:19.791814081Z" level=error msg="StopPodSandbox for \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\" failed" error="failed to destroy network for sandbox \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:19.792233 kubelet[2724]: E0508 00:40:19.791970 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" May 8 00:40:19.792233 kubelet[2724]: E0508 00:40:19.792005 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03"} May 8 00:40:19.792233 kubelet[2724]: E0508 00:40:19.792028 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"46106b43-6a82-4ed1-a0c6-7d6292ac8f6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:19.792233 kubelet[2724]: E0508 00:40:19.792042 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"46106b43-6a82-4ed1-a0c6-7d6292ac8f6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dbhz2" podUID="46106b43-6a82-4ed1-a0c6-7d6292ac8f6f" May 8 00:40:20.776552 containerd[1544]: time="2025-05-08T00:40:20.776502935Z" level=info msg="StopPodSandbox for \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\"" May 8 00:40:20.797809 containerd[1544]: time="2025-05-08T00:40:20.797772092Z" level=error msg="StopPodSandbox for \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\" failed" error="failed to destroy network for sandbox \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 8 00:40:20.798128 kubelet[2724]: E0508 00:40:20.797956 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" May 8 00:40:20.798128 kubelet[2724]: E0508 00:40:20.798003 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09"} May 8 00:40:20.798128 kubelet[2724]: E0508 00:40:20.798029 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5c48551f-d382-48fc-8b2c-4049dd697e7b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:20.798128 kubelet[2724]: E0508 00:40:20.798044 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5c48551f-d382-48fc-8b2c-4049dd697e7b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c849db6d6-wzfw7" podUID="5c48551f-d382-48fc-8b2c-4049dd697e7b" May 8 00:40:27.777532 containerd[1544]: time="2025-05-08T00:40:27.777105237Z" level=info msg="StopPodSandbox for \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\"" May 8 00:40:27.777885 containerd[1544]: time="2025-05-08T00:40:27.777871545Z" level=info msg="StopPodSandbox for \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\"" May 8 00:40:27.805303 containerd[1544]: time="2025-05-08T00:40:27.805214690Z" level=error msg="StopPodSandbox for \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\" failed" error="failed to destroy network for sandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:27.805303 containerd[1544]: time="2025-05-08T00:40:27.805214770Z" level=error msg="StopPodSandbox for \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\" failed" error="failed to destroy network for sandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:27.805538 kubelet[2724]: E0508 00:40:27.805367 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" May 8 00:40:27.805538 kubelet[2724]: E0508 00:40:27.805398 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f"} May 8 00:40:27.805538 kubelet[2724]: E0508 00:40:27.805419 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7cd62a62-c3b8-4d08-90a0-b53c0511c1f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:27.805538 kubelet[2724]: E0508 00:40:27.805433 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7cd62a62-c3b8-4d08-90a0-b53c0511c1f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jq978" podUID="7cd62a62-c3b8-4d08-90a0-b53c0511c1f5" May 8 00:40:27.805921 kubelet[2724]: E0508 00:40:27.805863 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" May 8 00:40:27.805921 kubelet[2724]: E0508 00:40:27.805879 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2"} May 8 00:40:27.805921 kubelet[2724]: E0508 00:40:27.805892 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0cb28c6-493c-4ec2-95eb-08870a5239cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:27.805921 kubelet[2724]: E0508 00:40:27.805908 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0cb28c6-493c-4ec2-95eb-08870a5239cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bcbf44d57-tjjwh" podUID="b0cb28c6-493c-4ec2-95eb-08870a5239cb" May 8 00:40:30.775392 kubelet[2724]: I0508 00:40:30.775086 2724 scope.go:117] "RemoveContainer" containerID="c06fe015afe6119dc7075206f7f57107a0db02cf26b17f058649dd8e41be9012" May 8 00:40:30.775392 kubelet[2724]: E0508 00:40:30.775190 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-9fk5s_calico-system(f34e9eef-ef24-4654-80e2-7e5383be17b9)\"" pod="calico-system/calico-node-9fk5s" podUID="f34e9eef-ef24-4654-80e2-7e5383be17b9" May 8 00:40:30.775762 containerd[1544]: time="2025-05-08T00:40:30.775678848Z" level=info msg="StopPodSandbox for \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\"" May 8 00:40:30.776755 containerd[1544]: time="2025-05-08T00:40:30.776033795Z" level=info msg="StopPodSandbox for \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\"" May 8 00:40:30.777079 containerd[1544]: time="2025-05-08T00:40:30.777065983Z" level=info msg="StopPodSandbox for \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\"" May 8 00:40:30.804277 containerd[1544]: time="2025-05-08T00:40:30.804002904Z" level=error msg="StopPodSandbox for \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\" failed" error="failed to destroy network for sandbox \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:30.804446 kubelet[2724]: E0508 00:40:30.804393 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" May 8 00:40:30.804446 kubelet[2724]: E0508 00:40:30.804425 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647"} May 8 00:40:30.804502 kubelet[2724]: E0508 00:40:30.804449 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"99de4f5c-61dc-4b3d-a2ad-08772fa94419\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:30.804502 kubelet[2724]: E0508 00:40:30.804463 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"99de4f5c-61dc-4b3d-a2ad-08772fa94419\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c849db6d6-n9bs2" podUID="99de4f5c-61dc-4b3d-a2ad-08772fa94419" May 8 00:40:30.812477 containerd[1544]: time="2025-05-08T00:40:30.812396762Z" level=error msg="StopPodSandbox for \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\" failed" error="failed to destroy network for sandbox \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:30.812666 kubelet[2724]: E0508 00:40:30.812526 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" May 8 00:40:30.812666 kubelet[2724]: E0508 00:40:30.812558 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03"} May 8 00:40:30.812666 kubelet[2724]: E0508 00:40:30.812578 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"46106b43-6a82-4ed1-a0c6-7d6292ac8f6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:30.812666 kubelet[2724]: E0508 00:40:30.812591 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"46106b43-6a82-4ed1-a0c6-7d6292ac8f6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dbhz2" podUID="46106b43-6a82-4ed1-a0c6-7d6292ac8f6f" May 8 00:40:30.814561 containerd[1544]: time="2025-05-08T00:40:30.814535668Z" level=error msg="StopPodSandbox for \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\" failed" error="failed to destroy network for sandbox \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:30.814646 kubelet[2724]: E0508 00:40:30.814623 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" podSandboxID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" May 8 00:40:30.814678 kubelet[2724]: E0508 00:40:30.814653 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81"} May 8 00:40:30.814699 kubelet[2724]: E0508 00:40:30.814679 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91493713-d7eb-4156-9aba-7e866dca9c56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:30.814699 kubelet[2724]: E0508 00:40:30.814692 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91493713-d7eb-4156-9aba-7e866dca9c56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xqv5x" podUID="91493713-d7eb-4156-9aba-7e866dca9c56" May 8 00:40:34.776414 containerd[1544]: time="2025-05-08T00:40:34.776343541Z" level=info msg="StopPodSandbox for \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\"" May 8 00:40:34.803283 containerd[1544]: time="2025-05-08T00:40:34.803180003Z" level=error msg="StopPodSandbox for \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\" failed" error="failed to destroy network for sandbox \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.803433 kubelet[2724]: E0508 00:40:34.803372 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" May 8 00:40:34.803433 kubelet[2724]: E0508 00:40:34.803409 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09"} May 8 00:40:34.803433 kubelet[2724]: E0508 00:40:34.803430 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5c48551f-d382-48fc-8b2c-4049dd697e7b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:34.803688 
kubelet[2724]: E0508 00:40:34.803443 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5c48551f-d382-48fc-8b2c-4049dd697e7b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c849db6d6-wzfw7" podUID="5c48551f-d382-48fc-8b2c-4049dd697e7b" May 8 00:40:35.034225 systemd[1]: Started sshd@8-139.178.70.100:22-139.178.68.195:40020.service - OpenSSH per-connection server daemon (139.178.68.195:40020). May 8 00:40:35.100849 sshd[4335]: Accepted publickey for core from 139.178.68.195 port 40020 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:40:35.102016 sshd[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:35.105133 systemd-logind[1518]: New session 10 of user core. May 8 00:40:35.115023 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:40:35.470017 sshd[4335]: pam_unix(sshd:session): session closed for user core May 8 00:40:35.485124 systemd-logind[1518]: Session 10 logged out. Waiting for processes to exit. May 8 00:40:35.485662 systemd[1]: sshd@8-139.178.70.100:22-139.178.68.195:40020.service: Deactivated successfully. May 8 00:40:35.487782 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:40:35.489327 systemd-logind[1518]: Removed session 10. May 8 00:40:39.376403 containerd[1544]: time="2025-05-08T00:40:39.376378146Z" level=info msg="StopPodSandbox for \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\"" May 8 00:40:39.380196 containerd[1544]: time="2025-05-08T00:40:39.380163304Z" level=info msg="Container to stop \"1161a28ee59dfb660fc18e7813f16c527521a08415b92cc00a88b7ec95385a44\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:40:39.380196 containerd[1544]: time="2025-05-08T00:40:39.380192853Z" level=info msg="Container to stop \"c536c3d7a8e7f5a424785239dd3f132067fef753cb45a5ad263616df93e419a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:40:39.380196 containerd[1544]: time="2025-05-08T00:40:39.380201956Z" level=info msg="Container to stop \"c06fe015afe6119dc7075206f7f57107a0db02cf26b17f058649dd8e41be9012\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:40:39.382844 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928-shm.mount: Deactivated successfully. May 8 00:40:39.393820 systemd[1]: cri-containerd-00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928.scope: Deactivated successfully. May 8 00:40:39.412240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928-rootfs.mount: Deactivated successfully. 
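Every KillPodSandbox failure logged above traces to a single cause: the Calico CNI plugin's delete (DEL) path has to resolve the local node name, which it reads from /var/lib/calico/nodename, a file the calico/node container writes once it starts successfully. Because calico-node itself is stuck in CrashLoopBackOff, that file never appears, so every sandbox teardown fails with the same stat error and kubelet keeps re-queuing the affected pods. A minimal Go sketch of that lookup (simplified for illustration; determineNodename is a hypothetical helper, not Calico's actual source) shows why one missing file blocks every delete:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // Path the calico/node container populates on healthy startup and the
    // CNI plugin reads on every ADD/DEL. If calico-node never comes up,
    // nothing creates this file.
    const nodenameFile = "/var/lib/calico/nodename"

    // determineNodename is a simplified stand-in for the CNI plugin's
    // node-name resolution: read the file, or fail with the same hint
    // that appears throughout the log above.
    func determineNodename() (string, error) {
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := determineNodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err) // mirrors the "failed (delete)" errors above
            os.Exit(1)
        }
        fmt.Println("node name:", name)
    }

Until calico-node writes that file, the retries from 00:40:05 through 00:40:34 cannot succeed, which is why the identical error repeats for the coredns, csi-node-driver, calico-kube-controllers, and calico-apiserver sandboxes.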
May 8 00:40:39.419958 containerd[1544]: time="2025-05-08T00:40:39.419235455Z" level=info msg="shim disconnected" id=00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928 namespace=k8s.io May 8 00:40:39.419958 containerd[1544]: time="2025-05-08T00:40:39.419277786Z" level=warning msg="cleaning up after shim disconnected" id=00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928 namespace=k8s.io May 8 00:40:39.419958 containerd[1544]: time="2025-05-08T00:40:39.419286025Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:40:39.434337 containerd[1544]: time="2025-05-08T00:40:39.434308187Z" level=info msg="TearDown network for sandbox \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\" successfully" May 8 00:40:39.434337 containerd[1544]: time="2025-05-08T00:40:39.434328509Z" level=info msg="StopPodSandbox for \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\" returns successfully" May 8 00:40:39.461741 kubelet[2724]: I0508 00:40:39.461716 2724 memory_manager.go:355] "RemoveStaleState removing state" podUID="f34e9eef-ef24-4654-80e2-7e5383be17b9" containerName="calico-node" May 8 00:40:39.461741 kubelet[2724]: I0508 00:40:39.461744 2724 memory_manager.go:355] "RemoveStaleState removing state" podUID="f34e9eef-ef24-4654-80e2-7e5383be17b9" containerName="calico-node" May 8 00:40:39.462026 kubelet[2724]: I0508 00:40:39.461802 2724 memory_manager.go:355] "RemoveStaleState removing state" podUID="f34e9eef-ef24-4654-80e2-7e5383be17b9" containerName="calico-node" May 8 00:40:39.469439 systemd[1]: Created slice kubepods-besteffort-podbcd9e13a_f143_416d_95b0_bd2ea51e52df.slice - libcontainer container kubepods-besteffort-podbcd9e13a_f143_416d_95b0_bd2ea51e52df.slice. May 8 00:40:39.608862 kubelet[2724]: I0508 00:40:39.608818 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-cni-log-dir\") pod \"f34e9eef-ef24-4654-80e2-7e5383be17b9\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " May 8 00:40:39.608862 kubelet[2724]: I0508 00:40:39.608868 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-cni-bin-dir\") pod \"f34e9eef-ef24-4654-80e2-7e5383be17b9\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " May 8 00:40:39.609106 kubelet[2724]: I0508 00:40:39.608882 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-var-lib-calico\") pod \"f34e9eef-ef24-4654-80e2-7e5383be17b9\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " May 8 00:40:39.609106 kubelet[2724]: I0508 00:40:39.608906 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f34e9eef-ef24-4654-80e2-7e5383be17b9-tigera-ca-bundle\") pod \"f34e9eef-ef24-4654-80e2-7e5383be17b9\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " May 8 00:40:39.609106 kubelet[2724]: I0508 00:40:39.608917 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-xtables-lock\") pod \"f34e9eef-ef24-4654-80e2-7e5383be17b9\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " May 8 00:40:39.609106 kubelet[2724]: I0508 00:40:39.608927 2724 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-var-run-calico\") pod \"f34e9eef-ef24-4654-80e2-7e5383be17b9\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " May 8 00:40:39.609106 kubelet[2724]: I0508 00:40:39.608940 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn25c\" (UniqueName: \"kubernetes.io/projected/f34e9eef-ef24-4654-80e2-7e5383be17b9-kube-api-access-xn25c\") pod \"f34e9eef-ef24-4654-80e2-7e5383be17b9\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " May 8 00:40:39.609106 kubelet[2724]: I0508 00:40:39.608984 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f34e9eef-ef24-4654-80e2-7e5383be17b9-node-certs\") pod \"f34e9eef-ef24-4654-80e2-7e5383be17b9\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " May 8 00:40:39.609683 kubelet[2724]: I0508 00:40:39.608999 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-policysync\") pod \"f34e9eef-ef24-4654-80e2-7e5383be17b9\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " May 8 00:40:39.609683 kubelet[2724]: I0508 00:40:39.609012 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-cni-net-dir\") pod \"f34e9eef-ef24-4654-80e2-7e5383be17b9\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " May 8 00:40:39.609683 kubelet[2724]: I0508 00:40:39.609025 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-flexvol-driver-host\") pod \"f34e9eef-ef24-4654-80e2-7e5383be17b9\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " May 8 00:40:39.609683 kubelet[2724]: I0508 00:40:39.609053 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-lib-modules\") pod \"f34e9eef-ef24-4654-80e2-7e5383be17b9\" (UID: \"f34e9eef-ef24-4654-80e2-7e5383be17b9\") " May 8 00:40:39.613425 kubelet[2724]: I0508 00:40:39.611736 2724 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "f34e9eef-ef24-4654-80e2-7e5383be17b9" (UID: "f34e9eef-ef24-4654-80e2-7e5383be17b9"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:40:39.613425 kubelet[2724]: I0508 00:40:39.613220 2724 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "f34e9eef-ef24-4654-80e2-7e5383be17b9" (UID: "f34e9eef-ef24-4654-80e2-7e5383be17b9"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:40:39.613425 kubelet[2724]: I0508 00:40:39.613241 2724 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "f34e9eef-ef24-4654-80e2-7e5383be17b9" (UID: "f34e9eef-ef24-4654-80e2-7e5383be17b9"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:40:39.613425 kubelet[2724]: I0508 00:40:39.613261 2724 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "f34e9eef-ef24-4654-80e2-7e5383be17b9" (UID: "f34e9eef-ef24-4654-80e2-7e5383be17b9"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:40:39.616706 kubelet[2724]: I0508 00:40:39.616361 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bcd9e13a-f143-416d-95b0-bd2ea51e52df-cni-log-dir\") pod \"calico-node-zkjb6\" (UID: \"bcd9e13a-f143-416d-95b0-bd2ea51e52df\") " pod="calico-system/calico-node-zkjb6" May 8 00:40:39.616706 kubelet[2724]: I0508 00:40:39.616390 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bcd9e13a-f143-416d-95b0-bd2ea51e52df-policysync\") pod \"calico-node-zkjb6\" (UID: \"bcd9e13a-f143-416d-95b0-bd2ea51e52df\") " pod="calico-system/calico-node-zkjb6" May 8 00:40:39.616706 kubelet[2724]: I0508 00:40:39.616405 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcd9e13a-f143-416d-95b0-bd2ea51e52df-tigera-ca-bundle\") pod \"calico-node-zkjb6\" (UID: \"bcd9e13a-f143-416d-95b0-bd2ea51e52df\") " pod="calico-system/calico-node-zkjb6" May 8 00:40:39.616706 kubelet[2724]: I0508 00:40:39.616423 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bcd9e13a-f143-416d-95b0-bd2ea51e52df-flexvol-driver-host\") pod \"calico-node-zkjb6\" (UID: \"bcd9e13a-f143-416d-95b0-bd2ea51e52df\") " pod="calico-system/calico-node-zkjb6" May 8 00:40:39.616706 kubelet[2724]: I0508 00:40:39.616441 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bcd9e13a-f143-416d-95b0-bd2ea51e52df-node-certs\") pod \"calico-node-zkjb6\" (UID: \"bcd9e13a-f143-416d-95b0-bd2ea51e52df\") " pod="calico-system/calico-node-zkjb6" May 8 00:40:39.616986 kubelet[2724]: I0508 00:40:39.616454 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcd9e13a-f143-416d-95b0-bd2ea51e52df-xtables-lock\") pod \"calico-node-zkjb6\" (UID: \"bcd9e13a-f143-416d-95b0-bd2ea51e52df\") " pod="calico-system/calico-node-zkjb6" May 8 00:40:39.616986 kubelet[2724]: I0508 00:40:39.616471 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcd9e13a-f143-416d-95b0-bd2ea51e52df-lib-modules\") pod \"calico-node-zkjb6\" (UID: 
\"bcd9e13a-f143-416d-95b0-bd2ea51e52df\") " pod="calico-system/calico-node-zkjb6" May 8 00:40:39.616986 kubelet[2724]: I0508 00:40:39.616485 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxl9p\" (UniqueName: \"kubernetes.io/projected/bcd9e13a-f143-416d-95b0-bd2ea51e52df-kube-api-access-dxl9p\") pod \"calico-node-zkjb6\" (UID: \"bcd9e13a-f143-416d-95b0-bd2ea51e52df\") " pod="calico-system/calico-node-zkjb6" May 8 00:40:39.616986 kubelet[2724]: I0508 00:40:39.616498 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bcd9e13a-f143-416d-95b0-bd2ea51e52df-var-run-calico\") pod \"calico-node-zkjb6\" (UID: \"bcd9e13a-f143-416d-95b0-bd2ea51e52df\") " pod="calico-system/calico-node-zkjb6" May 8 00:40:39.616986 kubelet[2724]: I0508 00:40:39.616515 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bcd9e13a-f143-416d-95b0-bd2ea51e52df-var-lib-calico\") pod \"calico-node-zkjb6\" (UID: \"bcd9e13a-f143-416d-95b0-bd2ea51e52df\") " pod="calico-system/calico-node-zkjb6" May 8 00:40:39.617191 kubelet[2724]: I0508 00:40:39.616531 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bcd9e13a-f143-416d-95b0-bd2ea51e52df-cni-bin-dir\") pod \"calico-node-zkjb6\" (UID: \"bcd9e13a-f143-416d-95b0-bd2ea51e52df\") " pod="calico-system/calico-node-zkjb6" May 8 00:40:39.617191 kubelet[2724]: I0508 00:40:39.616542 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bcd9e13a-f143-416d-95b0-bd2ea51e52df-cni-net-dir\") pod \"calico-node-zkjb6\" (UID: \"bcd9e13a-f143-416d-95b0-bd2ea51e52df\") " pod="calico-system/calico-node-zkjb6" May 8 00:40:39.617191 kubelet[2724]: I0508 00:40:39.616557 2724 reconciler_common.go:299] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-cni-log-dir\") on node \"localhost\" DevicePath \"\"" May 8 00:40:39.617191 kubelet[2724]: I0508 00:40:39.616564 2724 reconciler_common.go:299] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" May 8 00:40:39.617191 kubelet[2724]: I0508 00:40:39.616570 2724 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-var-lib-calico\") on node \"localhost\" DevicePath \"\"" May 8 00:40:39.617191 kubelet[2724]: I0508 00:40:39.616577 2724 reconciler_common.go:299] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-var-run-calico\") on node \"localhost\" DevicePath \"\"" May 8 00:40:39.618850 kubelet[2724]: I0508 00:40:39.618735 2724 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f34e9eef-ef24-4654-80e2-7e5383be17b9" (UID: "f34e9eef-ef24-4654-80e2-7e5383be17b9"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:40:39.618989 kubelet[2724]: I0508 00:40:39.618975 2724 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "f34e9eef-ef24-4654-80e2-7e5383be17b9" (UID: "f34e9eef-ef24-4654-80e2-7e5383be17b9"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:40:39.619422 kubelet[2724]: I0508 00:40:39.619350 2724 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-policysync" (OuterVolumeSpecName: "policysync") pod "f34e9eef-ef24-4654-80e2-7e5383be17b9" (UID: "f34e9eef-ef24-4654-80e2-7e5383be17b9"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:40:39.619422 kubelet[2724]: I0508 00:40:39.619371 2724 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "f34e9eef-ef24-4654-80e2-7e5383be17b9" (UID: "f34e9eef-ef24-4654-80e2-7e5383be17b9"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:40:39.619422 kubelet[2724]: I0508 00:40:39.619390 2724 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f34e9eef-ef24-4654-80e2-7e5383be17b9" (UID: "f34e9eef-ef24-4654-80e2-7e5383be17b9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:40:39.621240 systemd[1]: var-lib-kubelet-pods-f34e9eef\x2def24\x2d4654\x2d80e2\x2d7e5383be17b9-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. May 8 00:40:39.622063 systemd[1]: var-lib-kubelet-pods-f34e9eef\x2def24\x2d4654\x2d80e2\x2d7e5383be17b9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxn25c.mount: Deactivated successfully. May 8 00:40:39.622901 kubelet[2724]: I0508 00:40:39.622828 2724 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34e9eef-ef24-4654-80e2-7e5383be17b9-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "f34e9eef-ef24-4654-80e2-7e5383be17b9" (UID: "f34e9eef-ef24-4654-80e2-7e5383be17b9"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 8 00:40:39.622901 kubelet[2724]: I0508 00:40:39.622878 2724 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34e9eef-ef24-4654-80e2-7e5383be17b9-kube-api-access-xn25c" (OuterVolumeSpecName: "kube-api-access-xn25c") pod "f34e9eef-ef24-4654-80e2-7e5383be17b9" (UID: "f34e9eef-ef24-4654-80e2-7e5383be17b9"). InnerVolumeSpecName "kube-api-access-xn25c". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:40:39.624890 kubelet[2724]: I0508 00:40:39.624857 2724 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f34e9eef-ef24-4654-80e2-7e5383be17b9-node-certs" (OuterVolumeSpecName: "node-certs") pod "f34e9eef-ef24-4654-80e2-7e5383be17b9" (UID: "f34e9eef-ef24-4654-80e2-7e5383be17b9"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 8 00:40:39.626555 systemd[1]: var-lib-kubelet-pods-f34e9eef\x2def24\x2d4654\x2d80e2\x2d7e5383be17b9-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. May 8 00:40:39.717370 kubelet[2724]: I0508 00:40:39.717339 2724 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xn25c\" (UniqueName: \"kubernetes.io/projected/f34e9eef-ef24-4654-80e2-7e5383be17b9-kube-api-access-xn25c\") on node \"localhost\" DevicePath \"\"" May 8 00:40:39.717370 kubelet[2724]: I0508 00:40:39.717363 2724 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 8 00:40:39.717370 kubelet[2724]: I0508 00:40:39.717371 2724 reconciler_common.go:299] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-policysync\") on node \"localhost\" DevicePath \"\"" May 8 00:40:39.717370 kubelet[2724]: I0508 00:40:39.717378 2724 reconciler_common.go:299] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-cni-net-dir\") on node \"localhost\" DevicePath \"\"" May 8 00:40:39.717568 kubelet[2724]: I0508 00:40:39.717386 2724 reconciler_common.go:299] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f34e9eef-ef24-4654-80e2-7e5383be17b9-node-certs\") on node \"localhost\" DevicePath \"\"" May 8 00:40:39.717568 kubelet[2724]: I0508 00:40:39.717391 2724 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f34e9eef-ef24-4654-80e2-7e5383be17b9-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 8 00:40:39.717568 kubelet[2724]: I0508 00:40:39.717398 2724 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-lib-modules\") on node \"localhost\" DevicePath \"\"" May 8 00:40:39.717568 kubelet[2724]: I0508 00:40:39.717405 2724 reconciler_common.go:299] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f34e9eef-ef24-4654-80e2-7e5383be17b9-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" May 8 00:40:39.771970 containerd[1544]: time="2025-05-08T00:40:39.771832654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zkjb6,Uid:bcd9e13a-f143-416d-95b0-bd2ea51e52df,Namespace:calico-system,Attempt:0,}" May 8 00:40:39.776099 containerd[1544]: time="2025-05-08T00:40:39.775567849Z" level=info msg="StopPodSandbox for \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\"" May 8 00:40:39.784500 systemd[1]: Removed slice kubepods-besteffort-podf34e9eef_ef24_4654_80e2_7e5383be17b9.slice - libcontainer container kubepods-besteffort-podf34e9eef_ef24_4654_80e2_7e5383be17b9.slice. May 8 00:40:39.797778 containerd[1544]: time="2025-05-08T00:40:39.792727442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:39.797778 containerd[1544]: time="2025-05-08T00:40:39.796007603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:39.797778 containerd[1544]: time="2025-05-08T00:40:39.796017213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:39.797778 containerd[1544]: time="2025-05-08T00:40:39.796092943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:39.811533 containerd[1544]: time="2025-05-08T00:40:39.811510552Z" level=error msg="StopPodSandbox for \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\" failed" error="failed to destroy network for sandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:39.811768 kubelet[2724]: E0508 00:40:39.811690 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" May 8 00:40:39.811768 kubelet[2724]: E0508 00:40:39.811716 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2"} May 8 00:40:39.811768 kubelet[2724]: E0508 00:40:39.811735 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0cb28c6-493c-4ec2-95eb-08870a5239cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:39.811768 kubelet[2724]: E0508 00:40:39.811749 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0cb28c6-493c-4ec2-95eb-08870a5239cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bcbf44d57-tjjwh" podUID="b0cb28c6-493c-4ec2-95eb-08870a5239cb" May 8 00:40:39.812042 systemd[1]: Started cri-containerd-8bddced90217b1807c3ca3b582316e6029210dc5355c0bed5c17a954ed929418.scope - libcontainer container 8bddced90217b1807c3ca3b582316e6029210dc5355c0bed5c17a954ed929418. 
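The burst of reconciler_common.go lines at 00:40:39 is kubelet's volume manager swapping the calico-node pod: it diffs desired state (the volumes of the replacement pod calico-node-zkjb6, UID bcd9e13a-...) against actual state (the volumes still mounted for the deleted pod, UID f34e9eef-...), running UnmountVolume.TearDown for every stale mount and VerifyControllerAttachedVolume for every new one, then marking the old volumes detached. A deliberately toy Go sketch of that two-pass diff (the types and messages are illustrative, not kubelet's real reconciler):

    package main

    import "fmt"

    // volume identifies a mount by owning pod UID (abbreviated) and name.
    type volume struct{ pod, name string }

    // reconcile tears down mounts that exist but are no longer desired and
    // sets up mounts that are desired but do not exist yet -- the same two
    // passes visible in the log above.
    func reconcile(actual, desired map[volume]bool) {
        for v := range actual {
            if !desired[v] {
                fmt.Printf("UnmountVolume started for %q (pod %s)\n", v.name, v.pod)
            }
        }
        for v := range desired {
            if !actual[v] {
                fmt.Printf("VerifyControllerAttachedVolume started for %q (pod %s)\n", v.name, v.pod)
            }
        }
    }

    func main() {
        actual := map[volume]bool{ // old calico-node pod, being removed
            {pod: "f34e9eef", name: "cni-log-dir"}: true,
            {pod: "f34e9eef", name: "node-certs"}:  true,
        }
        desired := map[volume]bool{ // replacement pod calico-node-zkjb6
            {pod: "bcd9e13a", name: "cni-log-dir"}: true,
            {pod: "bcd9e13a", name: "node-certs"}:  true,
        }
        reconcile(actual, desired)
    }

Plain host-path volumes (cni-log-dir, var-run-calico, and the rest) unmount trivially, while the projected service-account token, the tigera-ca-bundle subpath, and the node-certs secret also need their systemd mount units cleaned up, which is what the var-lib-kubelet-pods-...mount "Deactivated successfully" lines record.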
May 8 00:40:39.828175 containerd[1544]: time="2025-05-08T00:40:39.828016536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zkjb6,Uid:bcd9e13a-f143-416d-95b0-bd2ea51e52df,Namespace:calico-system,Attempt:0,} returns sandbox id \"8bddced90217b1807c3ca3b582316e6029210dc5355c0bed5c17a954ed929418\"" May 8 00:40:39.829813 containerd[1544]: time="2025-05-08T00:40:39.829663015Z" level=info msg="CreateContainer within sandbox \"8bddced90217b1807c3ca3b582316e6029210dc5355c0bed5c17a954ed929418\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:40:39.835891 containerd[1544]: time="2025-05-08T00:40:39.835870799Z" level=info msg="CreateContainer within sandbox \"8bddced90217b1807c3ca3b582316e6029210dc5355c0bed5c17a954ed929418\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e5ec0134d328099028a3d6e1ac985dacce7dacef839e2765703b04af3f1d9367\"" May 8 00:40:39.836487 containerd[1544]: time="2025-05-08T00:40:39.836316729Z" level=info msg="StartContainer for \"e5ec0134d328099028a3d6e1ac985dacce7dacef839e2765703b04af3f1d9367\"" May 8 00:40:39.859068 systemd[1]: Started cri-containerd-e5ec0134d328099028a3d6e1ac985dacce7dacef839e2765703b04af3f1d9367.scope - libcontainer container e5ec0134d328099028a3d6e1ac985dacce7dacef839e2765703b04af3f1d9367. May 8 00:40:39.878583 containerd[1544]: time="2025-05-08T00:40:39.877940348Z" level=info msg="StartContainer for \"e5ec0134d328099028a3d6e1ac985dacce7dacef839e2765703b04af3f1d9367\" returns successfully" May 8 00:40:39.928703 systemd[1]: cri-containerd-e5ec0134d328099028a3d6e1ac985dacce7dacef839e2765703b04af3f1d9367.scope: Deactivated successfully. May 8 00:40:39.950727 containerd[1544]: time="2025-05-08T00:40:39.950673826Z" level=info msg="shim disconnected" id=e5ec0134d328099028a3d6e1ac985dacce7dacef839e2765703b04af3f1d9367 namespace=k8s.io May 8 00:40:39.950727 containerd[1544]: time="2025-05-08T00:40:39.950724449Z" level=warning msg="cleaning up after shim disconnected" id=e5ec0134d328099028a3d6e1ac985dacce7dacef839e2765703b04af3f1d9367 namespace=k8s.io May 8 00:40:39.950727 containerd[1544]: time="2025-05-08T00:40:39.950730998Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:40:39.957803 containerd[1544]: time="2025-05-08T00:40:39.957745362Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:40:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:40:40.268279 kubelet[2724]: I0508 00:40:40.268202 2724 scope.go:117] "RemoveContainer" containerID="c06fe015afe6119dc7075206f7f57107a0db02cf26b17f058649dd8e41be9012" May 8 00:40:40.271302 containerd[1544]: time="2025-05-08T00:40:40.270855990Z" level=info msg="RemoveContainer for \"c06fe015afe6119dc7075206f7f57107a0db02cf26b17f058649dd8e41be9012\"" May 8 00:40:40.273347 containerd[1544]: time="2025-05-08T00:40:40.273307903Z" level=info msg="RemoveContainer for \"c06fe015afe6119dc7075206f7f57107a0db02cf26b17f058649dd8e41be9012\" returns successfully" May 8 00:40:40.274086 containerd[1544]: time="2025-05-08T00:40:40.274018427Z" level=info msg="CreateContainer within sandbox \"8bddced90217b1807c3ca3b582316e6029210dc5355c0bed5c17a954ed929418\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:40:40.274210 kubelet[2724]: I0508 00:40:40.274128 2724 scope.go:117] "RemoveContainer" containerID="c536c3d7a8e7f5a424785239dd3f132067fef753cb45a5ad263616df93e419a7" May 8 
00:40:40.275533 containerd[1544]: time="2025-05-08T00:40:40.275090935Z" level=info msg="RemoveContainer for \"c536c3d7a8e7f5a424785239dd3f132067fef753cb45a5ad263616df93e419a7\"" May 8 00:40:40.276763 containerd[1544]: time="2025-05-08T00:40:40.276750649Z" level=info msg="RemoveContainer for \"c536c3d7a8e7f5a424785239dd3f132067fef753cb45a5ad263616df93e419a7\" returns successfully" May 8 00:40:40.277476 kubelet[2724]: I0508 00:40:40.277467 2724 scope.go:117] "RemoveContainer" containerID="1161a28ee59dfb660fc18e7813f16c527521a08415b92cc00a88b7ec95385a44" May 8 00:40:40.278452 containerd[1544]: time="2025-05-08T00:40:40.278416070Z" level=info msg="RemoveContainer for \"1161a28ee59dfb660fc18e7813f16c527521a08415b92cc00a88b7ec95385a44\"" May 8 00:40:40.279925 containerd[1544]: time="2025-05-08T00:40:40.279910436Z" level=info msg="RemoveContainer for \"1161a28ee59dfb660fc18e7813f16c527521a08415b92cc00a88b7ec95385a44\" returns successfully" May 8 00:40:40.291251 containerd[1544]: time="2025-05-08T00:40:40.290895217Z" level=info msg="CreateContainer within sandbox \"8bddced90217b1807c3ca3b582316e6029210dc5355c0bed5c17a954ed929418\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a72ab648757aa1d01560cf6fbc42965121bf5e7c9a801f6c6bc45c49b464136f\"" May 8 00:40:40.292941 containerd[1544]: time="2025-05-08T00:40:40.291599240Z" level=info msg="StartContainer for \"a72ab648757aa1d01560cf6fbc42965121bf5e7c9a801f6c6bc45c49b464136f\"" May 8 00:40:40.321029 systemd[1]: Started cri-containerd-a72ab648757aa1d01560cf6fbc42965121bf5e7c9a801f6c6bc45c49b464136f.scope - libcontainer container a72ab648757aa1d01560cf6fbc42965121bf5e7c9a801f6c6bc45c49b464136f. May 8 00:40:40.338971 containerd[1544]: time="2025-05-08T00:40:40.338935157Z" level=info msg="StartContainer for \"a72ab648757aa1d01560cf6fbc42965121bf5e7c9a801f6c6bc45c49b464136f\" returns successfully" May 8 00:40:40.479236 systemd[1]: Started sshd@9-139.178.70.100:22-139.178.68.195:40030.service - OpenSSH per-connection server daemon (139.178.68.195:40030). May 8 00:40:40.559715 sshd[4570]: Accepted publickey for core from 139.178.68.195 port 40030 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:40:40.560808 sshd[4570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:40.564677 systemd-logind[1518]: New session 11 of user core. May 8 00:40:40.570064 systemd[1]: Started session-11.scope - Session 11 of User core. 
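The RemoveContainer / CreateContainer / StartContainer sequence above is kubelet driving containerd over CRI; each hex ID it logs is a container in containerd's k8s.io namespace. A hedged sketch of inspecting that namespace directly with the containerd Go client (assuming the v1 client module path github.com/containerd/containerd matches the containerd build on this host):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// kubelet-managed containers live in containerd's "k8s.io" namespace,
	// where IDs like e5ec0134... and a72ab648... from the log appear.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	cs, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range cs {
		info, err := c.Info(ctx)
		if err != nil {
			continue
		}
		fmt.Println(c.ID(), info.Image)
	}
}
```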
May 8 00:40:40.775244 containerd[1544]: time="2025-05-08T00:40:40.775161558Z" level=info msg="StopPodSandbox for \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\"" May 8 00:40:40.865912 containerd[1544]: time="2025-05-08T00:40:40.865828082Z" level=error msg="StopPodSandbox for \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\" failed" error="failed to destroy network for sandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:40.866162 kubelet[2724]: E0508 00:40:40.866129 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" May 8 00:40:40.866162 kubelet[2724]: E0508 00:40:40.866159 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f"} May 8 00:40:40.873377 kubelet[2724]: E0508 00:40:40.866181 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7cd62a62-c3b8-4d08-90a0-b53c0511c1f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:40.873377 kubelet[2724]: E0508 00:40:40.866196 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7cd62a62-c3b8-4d08-90a0-b53c0511c1f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jq978" podUID="7cd62a62-c3b8-4d08-90a0-b53c0511c1f5" May 8 00:40:41.394255 sshd[4570]: pam_unix(sshd:session): session closed for user core May 8 00:40:41.396903 systemd-logind[1518]: Session 11 logged out. Waiting for processes to exit. May 8 00:40:41.397540 systemd[1]: sshd@9-139.178.70.100:22-139.178.68.195:40030.service: Deactivated successfully. May 8 00:40:41.399145 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:40:41.399870 systemd-logind[1518]: Removed session 11. May 8 00:40:41.786199 kubelet[2724]: I0508 00:40:41.786055 2724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f34e9eef-ef24-4654-80e2-7e5383be17b9" path="/var/lib/kubelet/pods/f34e9eef-ef24-4654-80e2-7e5383be17b9/volumes" May 8 00:40:41.861745 systemd[1]: cri-containerd-a72ab648757aa1d01560cf6fbc42965121bf5e7c9a801f6c6bc45c49b464136f.scope: Deactivated successfully. 
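The "Cleaned up orphaned pod volumes dir" record closes the loop opened by the volume-detach entries earlier: kubelet only removes /var/lib/kubelet/pods/&lt;uid&gt;/volumes once every plugin subdirectory under it is unmounted and empty. A sketch of that emptiness check — volumesGone is a hypothetical helper for illustration, not a kubelet function:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// volumesGone reports whether the per-pod volumes tree is empty, the
// condition kubelet requires before it deletes the orphaned directory.
func volumesGone(podUID string) (bool, error) {
	dir := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")
	entries, err := os.ReadDir(dir)
	if os.IsNotExist(err) {
		return true, nil // directory already removed
	}
	if err != nil {
		return false, err
	}
	// Each plugin (kubernetes.io~secret, kubernetes.io~projected, ...)
	// keeps a subdirectory here until its volumes are torn down.
	for _, e := range entries {
		sub, err := os.ReadDir(filepath.Join(dir, e.Name()))
		if err != nil || len(sub) > 0 {
			return false, err
		}
	}
	return true, nil
}

func main() {
	ok, err := volumesGone("f34e9eef-ef24-4654-80e2-7e5383be17b9")
	fmt.Println(ok, err)
}
```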
May 8 00:40:41.874629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a72ab648757aa1d01560cf6fbc42965121bf5e7c9a801f6c6bc45c49b464136f-rootfs.mount: Deactivated successfully. May 8 00:40:41.943586 containerd[1544]: time="2025-05-08T00:40:41.943536972Z" level=info msg="shim disconnected" id=a72ab648757aa1d01560cf6fbc42965121bf5e7c9a801f6c6bc45c49b464136f namespace=k8s.io May 8 00:40:41.943586 containerd[1544]: time="2025-05-08T00:40:41.943574828Z" level=warning msg="cleaning up after shim disconnected" id=a72ab648757aa1d01560cf6fbc42965121bf5e7c9a801f6c6bc45c49b464136f namespace=k8s.io May 8 00:40:41.943586 containerd[1544]: time="2025-05-08T00:40:41.943580283Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:40:42.298795 containerd[1544]: time="2025-05-08T00:40:42.298715687Z" level=info msg="CreateContainer within sandbox \"8bddced90217b1807c3ca3b582316e6029210dc5355c0bed5c17a954ed929418\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:40:42.303824 containerd[1544]: time="2025-05-08T00:40:42.303801066Z" level=info msg="CreateContainer within sandbox \"8bddced90217b1807c3ca3b582316e6029210dc5355c0bed5c17a954ed929418\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"56a97b7d7da287a49c01c1f045b3cc8a161216099f38702c9879f24779440232\"" May 8 00:40:42.304310 containerd[1544]: time="2025-05-08T00:40:42.304294743Z" level=info msg="StartContainer for \"56a97b7d7da287a49c01c1f045b3cc8a161216099f38702c9879f24779440232\"" May 8 00:40:42.326063 systemd[1]: Started cri-containerd-56a97b7d7da287a49c01c1f045b3cc8a161216099f38702c9879f24779440232.scope - libcontainer container 56a97b7d7da287a49c01c1f045b3cc8a161216099f38702c9879f24779440232. May 8 00:40:42.344899 containerd[1544]: time="2025-05-08T00:40:42.344872059Z" level=info msg="StartContainer for \"56a97b7d7da287a49c01c1f045b3cc8a161216099f38702c9879f24779440232\" returns successfully" May 8 00:40:42.776604 containerd[1544]: time="2025-05-08T00:40:42.776502456Z" level=info msg="StopPodSandbox for \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\"" May 8 00:40:42.982197 containerd[1544]: 2025-05-08 00:40:42.959 [INFO][4683] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" May 8 00:40:42.982197 containerd[1544]: 2025-05-08 00:40:42.959 [INFO][4683] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" iface="eth0" netns="/var/run/netns/cni-cb7c2952-c509-5ce5-4f13-d40d51f94e68" May 8 00:40:42.982197 containerd[1544]: 2025-05-08 00:40:42.960 [INFO][4683] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" iface="eth0" netns="/var/run/netns/cni-cb7c2952-c509-5ce5-4f13-d40d51f94e68" May 8 00:40:42.982197 containerd[1544]: 2025-05-08 00:40:42.960 [INFO][4683] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" iface="eth0" netns="/var/run/netns/cni-cb7c2952-c509-5ce5-4f13-d40d51f94e68" May 8 00:40:42.982197 containerd[1544]: 2025-05-08 00:40:42.960 [INFO][4683] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" May 8 00:40:42.982197 containerd[1544]: 2025-05-08 00:40:42.960 [INFO][4683] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" May 8 00:40:42.982197 containerd[1544]: 2025-05-08 00:40:42.975 [INFO][4696] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" HandleID="k8s-pod-network.4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" Workload="localhost-k8s-csi--node--driver--xqv5x-eth0" May 8 00:40:42.982197 containerd[1544]: 2025-05-08 00:40:42.975 [INFO][4696] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:42.982197 containerd[1544]: 2025-05-08 00:40:42.975 [INFO][4696] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:42.982197 containerd[1544]: 2025-05-08 00:40:42.979 [WARNING][4696] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" HandleID="k8s-pod-network.4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" Workload="localhost-k8s-csi--node--driver--xqv5x-eth0" May 8 00:40:42.982197 containerd[1544]: 2025-05-08 00:40:42.979 [INFO][4696] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" HandleID="k8s-pod-network.4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" Workload="localhost-k8s-csi--node--driver--xqv5x-eth0" May 8 00:40:42.982197 containerd[1544]: 2025-05-08 00:40:42.979 [INFO][4696] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:42.982197 containerd[1544]: 2025-05-08 00:40:42.980 [INFO][4683] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" May 8 00:40:42.986336 containerd[1544]: time="2025-05-08T00:40:42.982331568Z" level=info msg="TearDown network for sandbox \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\" successfully" May 8 00:40:42.986336 containerd[1544]: time="2025-05-08T00:40:42.982351001Z" level=info msg="StopPodSandbox for \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\" returns successfully" May 8 00:40:42.986336 containerd[1544]: time="2025-05-08T00:40:42.984801686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xqv5x,Uid:91493713-d7eb-4156-9aba-7e866dca9c56,Namespace:calico-system,Attempt:1,}" May 8 00:40:42.984321 systemd[1]: run-netns-cni\x2dcb7c2952\x2dc509\x2d5ce5\x2d4f13\x2dd40d51f94e68.mount: Deactivated successfully. 
May 8 00:40:43.193883 systemd-networkd[1454]: cali6e899bd61ee: Link UP May 8 00:40:43.194224 systemd-networkd[1454]: cali6e899bd61ee: Gained carrier May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.050 [INFO][4705] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.058 [INFO][4705] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xqv5x-eth0 csi-node-driver- calico-system 91493713-d7eb-4156-9aba-7e866dca9c56 925 0 2025-05-08 00:39:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xqv5x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6e899bd61ee [] []}} ContainerID="e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e" Namespace="calico-system" Pod="csi-node-driver-xqv5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--xqv5x-" May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.058 [INFO][4705] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e" Namespace="calico-system" Pod="csi-node-driver-xqv5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--xqv5x-eth0" May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.088 [INFO][4717] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e" HandleID="k8s-pod-network.e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e" Workload="localhost-k8s-csi--node--driver--xqv5x-eth0" May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.093 [INFO][4717] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e" HandleID="k8s-pod-network.e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e" Workload="localhost-k8s-csi--node--driver--xqv5x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291560), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xqv5x", "timestamp":"2025-05-08 00:40:43.088812672 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.093 [INFO][4717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.093 [INFO][4717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
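cali6e899bd61ee gaining carrier is systemd-networkd noticing the host side of the veth pair the CNI plugin just created for csi-node-driver-xqv5x. Enumerating these interfaces from the host needs only the standard library:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, ifc := range ifaces {
		// Calico names host-side veths "cali" + a hash, e.g. cali6e899bd61ee.
		if !strings.HasPrefix(ifc.Name, "cali") {
			continue
		}
		up := ifc.Flags&net.FlagUp != 0
		fmt.Printf("%-16s up=%v mtu=%d mac=%s\n",
			ifc.Name, up, ifc.MTU, ifc.HardwareAddr)
	}
}
```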
May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.093 [INFO][4717] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.094 [INFO][4717] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e" host="localhost" May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.096 [INFO][4717] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.100 [INFO][4717] ipam/ipam.go 521: Ran out of existing affine blocks for host host="localhost" May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.101 [INFO][4717] ipam/ipam.go 538: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="localhost" May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.104 [INFO][4717] ipam/ipam_block_reader_writer.go 154: Found free block: 192.168.88.128/26 May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.104 [INFO][4717] ipam/ipam.go 550: Found unclaimed block host="localhost" subnet=192.168.88.128/26 May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.104 [INFO][4717] ipam/ipam_block_reader_writer.go 171: Trying to create affinity in pending state host="localhost" subnet=192.168.88.128/26 May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.108 [INFO][4717] ipam/ipam_block_reader_writer.go 201: Successfully created pending affinity for block host="localhost" subnet=192.168.88.128/26 May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.108 [INFO][4717] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.109 [INFO][4717] ipam/ipam.go 160: The referenced block doesn't exist, trying to create it cidr=192.168.88.128/26 host="localhost" May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.111 [INFO][4717] ipam/ipam.go 167: Wrote affinity as pending cidr=192.168.88.128/26 host="localhost" May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.111 [INFO][4717] ipam/ipam.go 176: Attempting to claim the block cidr=192.168.88.128/26 host="localhost" May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.112 [INFO][4717] ipam/ipam_block_reader_writer.go 223: Attempting to create a new block host="localhost" subnet=192.168.88.128/26 May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.120 [INFO][4717] ipam/ipam_block_reader_writer.go 264: Successfully created block May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.120 [INFO][4717] ipam/ipam_block_reader_writer.go 275: Confirming affinity host="localhost" subnet=192.168.88.128/26 May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.124 [INFO][4717] ipam/ipam_block_reader_writer.go 290: Successfully confirmed affinity host="localhost" subnet=192.168.88.128/26 May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.124 [INFO][4717] ipam/ipam.go 585: Block '192.168.88.128/26' has 64 free ips which is more than 1 ips required. 
host="localhost" subnet=192.168.88.128/26 May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.124 [INFO][4717] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e" host="localhost" May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.125 [INFO][4717] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.137 [INFO][4717] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e" host="localhost" May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.150 [INFO][4717] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.128/26] block=192.168.88.128/26 handle="k8s-pod-network.e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e" host="localhost" May 8 00:40:43.211194 containerd[1544]: 2025-05-08 00:40:43.150 [INFO][4717] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.128/26] handle="k8s-pod-network.e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e" host="localhost" May 8 00:40:43.217496 containerd[1544]: 2025-05-08 00:40:43.150 [INFO][4717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:43.217496 containerd[1544]: 2025-05-08 00:40:43.150 [INFO][4717] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.128/26] IPv6=[] ContainerID="e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e" HandleID="k8s-pod-network.e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e" Workload="localhost-k8s-csi--node--driver--xqv5x-eth0" May 8 00:40:43.217496 containerd[1544]: 2025-05-08 00:40:43.152 [INFO][4705] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e" Namespace="calico-system" Pod="csi-node-driver-xqv5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--xqv5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xqv5x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91493713-d7eb-4156-9aba-7e866dca9c56", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xqv5x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e899bd61ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:43.217496 containerd[1544]: 2025-05-08 00:40:43.161 [INFO][4705] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.128/32] ContainerID="e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e" Namespace="calico-system" Pod="csi-node-driver-xqv5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--xqv5x-eth0" May 8 00:40:43.217496 containerd[1544]: 2025-05-08 00:40:43.161 [INFO][4705] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e899bd61ee ContainerID="e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e" Namespace="calico-system" Pod="csi-node-driver-xqv5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--xqv5x-eth0" May 8 00:40:43.217496 containerd[1544]: 2025-05-08 00:40:43.186 [INFO][4705] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e" Namespace="calico-system" Pod="csi-node-driver-xqv5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--xqv5x-eth0" May 8 00:40:43.217496 containerd[1544]: 2025-05-08 00:40:43.187 [INFO][4705] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e" Namespace="calico-system" Pod="csi-node-driver-xqv5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--xqv5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xqv5x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91493713-d7eb-4156-9aba-7e866dca9c56", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e", Pod:"csi-node-driver-xqv5x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e899bd61ee", MAC:"2e:dd:a8:d8:f3:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:43.217496 containerd[1544]: 2025-05-08 00:40:43.209 [INFO][4705] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e" Namespace="calico-system" Pod="csi-node-driver-xqv5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--xqv5x-eth0" May 8 00:40:43.235868 containerd[1544]: time="2025-05-08T00:40:43.235756486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:43.235868 containerd[1544]: time="2025-05-08T00:40:43.235810143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:43.235868 containerd[1544]: time="2025-05-08T00:40:43.235844599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:43.239839 containerd[1544]: time="2025-05-08T00:40:43.236158669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:43.257103 systemd[1]: Started cri-containerd-e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e.scope - libcontainer container e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e. May 8 00:40:43.268373 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:43.278930 containerd[1544]: time="2025-05-08T00:40:43.278823310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xqv5x,Uid:91493713-d7eb-4156-9aba-7e866dca9c56,Namespace:calico-system,Attempt:1,} returns sandbox id \"e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e\"" May 8 00:40:43.294876 containerd[1544]: time="2025-05-08T00:40:43.292571417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 00:40:43.777617 containerd[1544]: time="2025-05-08T00:40:43.777059374Z" level=info msg="StopPodSandbox for \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\"" May 8 00:40:43.777617 containerd[1544]: time="2025-05-08T00:40:43.777552035Z" level=info msg="StopPodSandbox for \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\"" May 8 00:40:43.831345 kubelet[2724]: I0508 00:40:43.829368 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zkjb6" podStartSLOduration=4.825147326 podStartE2EDuration="4.825147326s" podCreationTimestamp="2025-05-08 00:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:43.316159264 +0000 UTC m=+75.665785882" watchObservedRunningTime="2025-05-08 00:40:43.825147326 +0000 UTC m=+76.174773929" May 8 00:40:43.857712 containerd[1544]: 2025-05-08 00:40:43.824 [INFO][4834] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" May 8 00:40:43.857712 containerd[1544]: 2025-05-08 00:40:43.824 [INFO][4834] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" iface="eth0" netns="/var/run/netns/cni-a20ffd73-56ce-6910-0c85-181a2fb64161" May 8 00:40:43.857712 containerd[1544]: 2025-05-08 00:40:43.824 [INFO][4834] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" iface="eth0" netns="/var/run/netns/cni-a20ffd73-56ce-6910-0c85-181a2fb64161" May 8 00:40:43.857712 containerd[1544]: 2025-05-08 00:40:43.825 [INFO][4834] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" iface="eth0" netns="/var/run/netns/cni-a20ffd73-56ce-6910-0c85-181a2fb64161" May 8 00:40:43.857712 containerd[1544]: 2025-05-08 00:40:43.825 [INFO][4834] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" May 8 00:40:43.857712 containerd[1544]: 2025-05-08 00:40:43.825 [INFO][4834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" May 8 00:40:43.857712 containerd[1544]: 2025-05-08 00:40:43.850 [INFO][4848] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" HandleID="k8s-pod-network.3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" Workload="localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0" May 8 00:40:43.857712 containerd[1544]: 2025-05-08 00:40:43.850 [INFO][4848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:43.857712 containerd[1544]: 2025-05-08 00:40:43.850 [INFO][4848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:43.857712 containerd[1544]: 2025-05-08 00:40:43.854 [WARNING][4848] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" HandleID="k8s-pod-network.3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" Workload="localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0" May 8 00:40:43.857712 containerd[1544]: 2025-05-08 00:40:43.854 [INFO][4848] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" HandleID="k8s-pod-network.3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" Workload="localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0" May 8 00:40:43.857712 containerd[1544]: 2025-05-08 00:40:43.854 [INFO][4848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:43.857712 containerd[1544]: 2025-05-08 00:40:43.856 [INFO][4834] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" May 8 00:40:43.858958 containerd[1544]: time="2025-05-08T00:40:43.858698440Z" level=info msg="TearDown network for sandbox \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\" successfully" May 8 00:40:43.858958 containerd[1544]: time="2025-05-08T00:40:43.858867604Z" level=info msg="StopPodSandbox for \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\" returns successfully" May 8 00:40:43.859905 containerd[1544]: time="2025-05-08T00:40:43.859462897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dbhz2,Uid:46106b43-6a82-4ed1-a0c6-7d6292ac8f6f,Namespace:kube-system,Attempt:1,}" May 8 00:40:43.862709 containerd[1544]: 2025-05-08 00:40:43.827 [INFO][4833] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" May 8 00:40:43.862709 containerd[1544]: 2025-05-08 00:40:43.827 [INFO][4833] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" iface="eth0" netns="/var/run/netns/cni-8fe4388a-4256-113b-ff29-b65bc718cb46" May 8 00:40:43.862709 containerd[1544]: 2025-05-08 00:40:43.827 [INFO][4833] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" iface="eth0" netns="/var/run/netns/cni-8fe4388a-4256-113b-ff29-b65bc718cb46" May 8 00:40:43.862709 containerd[1544]: 2025-05-08 00:40:43.827 [INFO][4833] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" iface="eth0" netns="/var/run/netns/cni-8fe4388a-4256-113b-ff29-b65bc718cb46" May 8 00:40:43.862709 containerd[1544]: 2025-05-08 00:40:43.827 [INFO][4833] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" May 8 00:40:43.862709 containerd[1544]: 2025-05-08 00:40:43.827 [INFO][4833] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" May 8 00:40:43.862709 containerd[1544]: 2025-05-08 00:40:43.850 [INFO][4853] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" HandleID="k8s-pod-network.d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" Workload="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0" May 8 00:40:43.862709 containerd[1544]: 2025-05-08 00:40:43.851 [INFO][4853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:43.862709 containerd[1544]: 2025-05-08 00:40:43.854 [INFO][4853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:43.862709 containerd[1544]: 2025-05-08 00:40:43.859 [WARNING][4853] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" HandleID="k8s-pod-network.d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" Workload="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0" May 8 00:40:43.862709 containerd[1544]: 2025-05-08 00:40:43.859 [INFO][4853] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" HandleID="k8s-pod-network.d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" Workload="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0" May 8 00:40:43.862709 containerd[1544]: 2025-05-08 00:40:43.860 [INFO][4853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:43.862709 containerd[1544]: 2025-05-08 00:40:43.861 [INFO][4833] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" May 8 00:40:43.863034 containerd[1544]: time="2025-05-08T00:40:43.862806199Z" level=info msg="TearDown network for sandbox \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\" successfully" May 8 00:40:43.863034 containerd[1544]: time="2025-05-08T00:40:43.862818713Z" level=info msg="StopPodSandbox for \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\" returns successfully" May 8 00:40:43.863304 containerd[1544]: time="2025-05-08T00:40:43.863288502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c849db6d6-n9bs2,Uid:99de4f5c-61dc-4b3d-a2ad-08772fa94419,Namespace:calico-apiserver,Attempt:1,}" May 8 00:40:43.879819 systemd[1]: run-netns-cni\x2d8fe4388a\x2d4256\x2d113b\x2dff29\x2db65bc718cb46.mount: Deactivated successfully. May 8 00:40:43.879991 systemd[1]: run-netns-cni\x2da20ffd73\x2d56ce\x2d6910\x2d0c85\x2d181a2fb64161.mount: Deactivated successfully. May 8 00:40:43.956409 systemd-networkd[1454]: calic5c34036b7c: Link UP May 8 00:40:43.956717 systemd-networkd[1454]: calic5c34036b7c: Gained carrier May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.894 [INFO][4862] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.906 [INFO][4862] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0 coredns-668d6bf9bc- kube-system 46106b43-6a82-4ed1-a0c6-7d6292ac8f6f 954 0 2025-05-08 00:39:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-dbhz2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic5c34036b7c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4" Namespace="kube-system" Pod="coredns-668d6bf9bc-dbhz2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dbhz2-" May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.907 [INFO][4862] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4" Namespace="kube-system" Pod="coredns-668d6bf9bc-dbhz2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0" May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.927 [INFO][4888] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4" HandleID="k8s-pod-network.41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4" Workload="localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0" May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.935 [INFO][4888] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4" HandleID="k8s-pod-network.41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4" Workload="localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003842c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-dbhz2", "timestamp":"2025-05-08 00:40:43.927026504 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.935 [INFO][4888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.935 [INFO][4888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.935 [INFO][4888] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.938 [INFO][4888] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4" host="localhost" May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.940 [INFO][4888] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.944 [INFO][4888] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.946 [INFO][4888] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.947 [INFO][4888] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.947 [INFO][4888] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4" host="localhost" May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.948 [INFO][4888] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4 May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.950 [INFO][4888] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4" host="localhost" May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.953 [INFO][4888] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4" host="localhost" May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.953 [INFO][4888] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4" host="localhost" May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.953 [INFO][4888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
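This second assignment (192.168.88.130 for coredns) reuses the /26 block whose affinity the host claimed during the first one, so it skips the claim dance and goes straight to "Attempting to assign 1 addresses from block". Handout within an affine block is plain address arithmetic; a toy sketch with net/netip, not Calico's actual allocator (which persists handles and blocks in the datastore):

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in block not yet assigned; a stand-in
// for the "Attempting to assign 1 addresses from block" step above.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // the 64-address block claimed earlier
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.88.128"): true, // csi-node-driver-xqv5x
	}
	fmt.Println(nextFree(block, used)) // 192.168.88.129 true
}
```

With .128 already taken the toy returns .129; in the live log the allocator evidently handed .129 to a workload outside this excerpt before coredns received .130.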
May 8 00:40:43.965476 containerd[1544]: 2025-05-08 00:40:43.953 [INFO][4888] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4" HandleID="k8s-pod-network.41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4" Workload="localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0" May 8 00:40:43.966918 containerd[1544]: 2025-05-08 00:40:43.954 [INFO][4862] cni-plugin/k8s.go 386: Populated endpoint ContainerID="41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4" Namespace="kube-system" Pod="coredns-668d6bf9bc-dbhz2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"46106b43-6a82-4ed1-a0c6-7d6292ac8f6f", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-dbhz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic5c34036b7c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:43.966918 containerd[1544]: 2025-05-08 00:40:43.955 [INFO][4862] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4" Namespace="kube-system" Pod="coredns-668d6bf9bc-dbhz2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0" May 8 00:40:43.966918 containerd[1544]: 2025-05-08 00:40:43.955 [INFO][4862] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5c34036b7c ContainerID="41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4" Namespace="kube-system" Pod="coredns-668d6bf9bc-dbhz2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0" May 8 00:40:43.966918 containerd[1544]: 2025-05-08 00:40:43.956 [INFO][4862] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4" Namespace="kube-system" Pod="coredns-668d6bf9bc-dbhz2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0" May 8 00:40:43.966918 containerd[1544]: 2025-05-08 00:40:43.957 [INFO][4862] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4" Namespace="kube-system" Pod="coredns-668d6bf9bc-dbhz2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"46106b43-6a82-4ed1-a0c6-7d6292ac8f6f", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4", Pod:"coredns-668d6bf9bc-dbhz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic5c34036b7c", MAC:"62:d4:a5:85:b4:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:43.966918 containerd[1544]: 2025-05-08 00:40:43.964 [INFO][4862] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4" Namespace="kube-system" Pod="coredns-668d6bf9bc-dbhz2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0" May 8 00:40:43.984384 containerd[1544]: time="2025-05-08T00:40:43.984120252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:43.984384 containerd[1544]: time="2025-05-08T00:40:43.984157355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:43.984384 containerd[1544]: time="2025-05-08T00:40:43.984167532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:43.984384 containerd[1544]: time="2025-05-08T00:40:43.984221374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:44.002035 systemd[1]: Started cri-containerd-41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4.scope - libcontainer container 41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4. 
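The endpoint just written for coredns-668d6bf9bc-dbhz2 advertises dns on 53/UDP, dns-tcp on 53/TCP, and metrics on 9153/TCP at 192.168.88.130. A resolver pinned to that pod IP exercises the UDP port directly; the name queried here is only an example:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Force all lookups through the coredns pod IP from the endpoint above,
	// bypassing the host's own resolver configuration.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, "udp", "192.168.88.130:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	fmt.Println(addrs, err)
}
```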
May 8 00:40:44.010460 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:44.033358 containerd[1544]: time="2025-05-08T00:40:44.033284036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dbhz2,Uid:46106b43-6a82-4ed1-a0c6-7d6292ac8f6f,Namespace:kube-system,Attempt:1,} returns sandbox id \"41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4\"" May 8 00:40:44.035959 containerd[1544]: time="2025-05-08T00:40:44.035911271Z" level=info msg="CreateContainer within sandbox \"41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:40:44.056779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1753409444.mount: Deactivated successfully. May 8 00:40:44.060057 containerd[1544]: time="2025-05-08T00:40:44.060033041Z" level=info msg="CreateContainer within sandbox \"41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"76e7a0092853a5d5a817cd5217cca6f999be8037b6f74092a4e2e77573573a7d\"" May 8 00:40:44.061164 containerd[1544]: time="2025-05-08T00:40:44.061130324Z" level=info msg="StartContainer for \"76e7a0092853a5d5a817cd5217cca6f999be8037b6f74092a4e2e77573573a7d\"" May 8 00:40:44.065608 systemd-networkd[1454]: cali23c28f979f3: Link UP May 8 00:40:44.065707 systemd-networkd[1454]: cali23c28f979f3: Gained carrier May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:43.894 [INFO][4872] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:43.905 [INFO][4872] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0 calico-apiserver-6c849db6d6- calico-apiserver 99de4f5c-61dc-4b3d-a2ad-08772fa94419 955 0 2025-05-08 00:39:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c849db6d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c849db6d6-n9bs2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali23c28f979f3 [] []}} ContainerID="fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012" Namespace="calico-apiserver" Pod="calico-apiserver-6c849db6d6-n9bs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-" May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:43.906 [INFO][4872] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012" Namespace="calico-apiserver" Pod="calico-apiserver-6c849db6d6-n9bs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0" May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:43.930 [INFO][4887] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012" HandleID="k8s-pod-network.fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012" Workload="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0" May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:43.938 [INFO][4887] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012" HandleID="k8s-pod-network.fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012" Workload="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031ad10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c849db6d6-n9bs2", "timestamp":"2025-05-08 00:40:43.930381378 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:43.938 [INFO][4887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:43.953 [INFO][4887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:43.953 [INFO][4887] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:44.038 [INFO][4887] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012" host="localhost" May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:44.042 [INFO][4887] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:44.049 [INFO][4887] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:44.050 [INFO][4887] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:44.051 [INFO][4887] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:44.051 [INFO][4887] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012" host="localhost" May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:44.052 [INFO][4887] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012 May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:44.057 [INFO][4887] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012" host="localhost" May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:44.060 [INFO][4887] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012" host="localhost" May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:44.061 [INFO][4887] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012" host="localhost" May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:44.061 [INFO][4887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:40:44.077829 containerd[1544]: 2025-05-08 00:40:44.061 [INFO][4887] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012" HandleID="k8s-pod-network.fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012" Workload="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0" May 8 00:40:44.078688 containerd[1544]: 2025-05-08 00:40:44.063 [INFO][4872] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012" Namespace="calico-apiserver" Pod="calico-apiserver-6c849db6d6-n9bs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0", GenerateName:"calico-apiserver-6c849db6d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"99de4f5c-61dc-4b3d-a2ad-08772fa94419", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c849db6d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c849db6d6-n9bs2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23c28f979f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:44.078688 containerd[1544]: 2025-05-08 00:40:44.063 [INFO][4872] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012" Namespace="calico-apiserver" Pod="calico-apiserver-6c849db6d6-n9bs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0" May 8 00:40:44.078688 containerd[1544]: 2025-05-08 00:40:44.063 [INFO][4872] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23c28f979f3 ContainerID="fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012" Namespace="calico-apiserver" Pod="calico-apiserver-6c849db6d6-n9bs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0" May 8 00:40:44.078688 containerd[1544]: 2025-05-08 00:40:44.064 [INFO][4872] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012" Namespace="calico-apiserver" Pod="calico-apiserver-6c849db6d6-n9bs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0" May 8 00:40:44.078688 containerd[1544]: 2025-05-08 00:40:44.064 [INFO][4872] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012" 
Namespace="calico-apiserver" Pod="calico-apiserver-6c849db6d6-n9bs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0", GenerateName:"calico-apiserver-6c849db6d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"99de4f5c-61dc-4b3d-a2ad-08772fa94419", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c849db6d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012", Pod:"calico-apiserver-6c849db6d6-n9bs2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23c28f979f3", MAC:"66:bc:3a:76:3a:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:44.078688 containerd[1544]: 2025-05-08 00:40:44.075 [INFO][4872] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012" Namespace="calico-apiserver" Pod="calico-apiserver-6c849db6d6-n9bs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0" May 8 00:40:44.083050 systemd[1]: Started cri-containerd-76e7a0092853a5d5a817cd5217cca6f999be8037b6f74092a4e2e77573573a7d.scope - libcontainer container 76e7a0092853a5d5a817cd5217cca6f999be8037b6f74092a4e2e77573573a7d. May 8 00:40:44.097707 containerd[1544]: time="2025-05-08T00:40:44.097279546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:44.097707 containerd[1544]: time="2025-05-08T00:40:44.097404589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:44.097707 containerd[1544]: time="2025-05-08T00:40:44.097456918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:44.097824 containerd[1544]: time="2025-05-08T00:40:44.097568780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:44.117026 systemd[1]: Started cri-containerd-fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012.scope - libcontainer container fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012. 
May 8 00:40:44.118808 containerd[1544]: time="2025-05-08T00:40:44.118740100Z" level=info msg="StartContainer for \"76e7a0092853a5d5a817cd5217cca6f999be8037b6f74092a4e2e77573573a7d\" returns successfully" May 8 00:40:44.125125 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:44.144377 containerd[1544]: time="2025-05-08T00:40:44.144355681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c849db6d6-n9bs2,Uid:99de4f5c-61dc-4b3d-a2ad-08772fa94419,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012\"" May 8 00:40:44.800733 systemd-networkd[1454]: cali6e899bd61ee: Gained IPv6LL May 8 00:40:44.842277 containerd[1544]: time="2025-05-08T00:40:44.841756653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:44.849281 containerd[1544]: time="2025-05-08T00:40:44.849250216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 8 00:40:44.870502 containerd[1544]: time="2025-05-08T00:40:44.870391126Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.577794035s" May 8 00:40:44.870502 containerd[1544]: time="2025-05-08T00:40:44.870417480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 8 00:40:44.872613 containerd[1544]: time="2025-05-08T00:40:44.872590121Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:44.873755 containerd[1544]: time="2025-05-08T00:40:44.873208743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:44.878223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1964716441.mount: Deactivated successfully. May 8 00:40:44.920218 containerd[1544]: time="2025-05-08T00:40:44.920189611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:40:44.951171 containerd[1544]: time="2025-05-08T00:40:44.951126711Z" level=info msg="CreateContainer within sandbox \"e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 8 00:40:44.957964 kernel: bpftool[5188]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 8 00:40:44.972833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4264044116.mount: Deactivated successfully. 
May 8 00:40:44.975794 containerd[1544]: time="2025-05-08T00:40:44.975724512Z" level=info msg="CreateContainer within sandbox \"e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"dfd364116fad1ad73bc767e64374a88c37cb4e06587e20a7ca0577e5cb914f68\"" May 8 00:40:44.977708 containerd[1544]: time="2025-05-08T00:40:44.976399255Z" level=info msg="StartContainer for \"dfd364116fad1ad73bc767e64374a88c37cb4e06587e20a7ca0577e5cb914f68\"" May 8 00:40:45.005452 systemd[1]: Started cri-containerd-dfd364116fad1ad73bc767e64374a88c37cb4e06587e20a7ca0577e5cb914f68.scope - libcontainer container dfd364116fad1ad73bc767e64374a88c37cb4e06587e20a7ca0577e5cb914f68. May 8 00:40:45.031888 containerd[1544]: time="2025-05-08T00:40:45.031864195Z" level=info msg="StartContainer for \"dfd364116fad1ad73bc767e64374a88c37cb4e06587e20a7ca0577e5cb914f68\" returns successfully" May 8 00:40:45.331435 kubelet[2724]: I0508 00:40:45.331400 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dbhz2" podStartSLOduration=72.331388386 podStartE2EDuration="1m12.331388386s" podCreationTimestamp="2025-05-08 00:39:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:44.311895952 +0000 UTC m=+76.661522564" watchObservedRunningTime="2025-05-08 00:40:45.331388386 +0000 UTC m=+77.681014989" May 8 00:40:45.342218 systemd-networkd[1454]: vxlan.calico: Link UP May 8 00:40:45.342223 systemd-networkd[1454]: vxlan.calico: Gained carrier May 8 00:40:45.429036 systemd-networkd[1454]: cali23c28f979f3: Gained IPv6LL May 8 00:40:45.750032 systemd-networkd[1454]: calic5c34036b7c: Gained IPv6LL May 8 00:40:45.775968 containerd[1544]: time="2025-05-08T00:40:45.775815579Z" level=info msg="StopPodSandbox for \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\"" May 8 00:40:45.871533 containerd[1544]: 2025-05-08 00:40:45.851 [INFO][5320] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" May 8 00:40:45.871533 containerd[1544]: 2025-05-08 00:40:45.851 [INFO][5320] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" iface="eth0" netns="/var/run/netns/cni-b96116f9-20e9-a715-09fa-746566ce51de" May 8 00:40:45.871533 containerd[1544]: 2025-05-08 00:40:45.852 [INFO][5320] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" iface="eth0" netns="/var/run/netns/cni-b96116f9-20e9-a715-09fa-746566ce51de" May 8 00:40:45.871533 containerd[1544]: 2025-05-08 00:40:45.852 [INFO][5320] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" iface="eth0" netns="/var/run/netns/cni-b96116f9-20e9-a715-09fa-746566ce51de" May 8 00:40:45.871533 containerd[1544]: 2025-05-08 00:40:45.852 [INFO][5320] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" May 8 00:40:45.871533 containerd[1544]: 2025-05-08 00:40:45.852 [INFO][5320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" May 8 00:40:45.871533 containerd[1544]: 2025-05-08 00:40:45.864 [INFO][5327] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" HandleID="k8s-pod-network.454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" Workload="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0" May 8 00:40:45.871533 containerd[1544]: 2025-05-08 00:40:45.864 [INFO][5327] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:45.871533 containerd[1544]: 2025-05-08 00:40:45.864 [INFO][5327] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:45.871533 containerd[1544]: 2025-05-08 00:40:45.868 [WARNING][5327] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" HandleID="k8s-pod-network.454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" Workload="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0" May 8 00:40:45.871533 containerd[1544]: 2025-05-08 00:40:45.868 [INFO][5327] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" HandleID="k8s-pod-network.454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" Workload="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0" May 8 00:40:45.871533 containerd[1544]: 2025-05-08 00:40:45.869 [INFO][5327] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:45.871533 containerd[1544]: 2025-05-08 00:40:45.870 [INFO][5320] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" May 8 00:40:45.873088 containerd[1544]: time="2025-05-08T00:40:45.871654813Z" level=info msg="TearDown network for sandbox \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\" successfully" May 8 00:40:45.873088 containerd[1544]: time="2025-05-08T00:40:45.871672314Z" level=info msg="StopPodSandbox for \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\" returns successfully" May 8 00:40:45.873088 containerd[1544]: time="2025-05-08T00:40:45.872061562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c849db6d6-wzfw7,Uid:5c48551f-d382-48fc-8b2c-4049dd697e7b,Namespace:calico-apiserver,Attempt:1,}" May 8 00:40:45.876317 systemd[1]: run-netns-cni\x2db96116f9\x2d20e9\x2da715\x2d09fa\x2d746566ce51de.mount: Deactivated successfully. 
May 8 00:40:45.945662 systemd-networkd[1454]: caliedd93c7c832: Link UP May 8 00:40:45.945866 systemd-networkd[1454]: caliedd93c7c832: Gained carrier May 8 00:40:45.957537 containerd[1544]: 2025-05-08 00:40:45.901 [INFO][5334] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0 calico-apiserver-6c849db6d6- calico-apiserver 5c48551f-d382-48fc-8b2c-4049dd697e7b 1000 0 2025-05-08 00:39:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c849db6d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c849db6d6-wzfw7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliedd93c7c832 [] []}} ContainerID="9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27" Namespace="calico-apiserver" Pod="calico-apiserver-6c849db6d6-wzfw7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-" May 8 00:40:45.957537 containerd[1544]: 2025-05-08 00:40:45.901 [INFO][5334] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27" Namespace="calico-apiserver" Pod="calico-apiserver-6c849db6d6-wzfw7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0" May 8 00:40:45.957537 containerd[1544]: 2025-05-08 00:40:45.919 [INFO][5347] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27" HandleID="k8s-pod-network.9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27" Workload="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0" May 8 00:40:45.957537 containerd[1544]: 2025-05-08 00:40:45.924 [INFO][5347] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27" HandleID="k8s-pod-network.9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27" Workload="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291110), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c849db6d6-wzfw7", "timestamp":"2025-05-08 00:40:45.919399481 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:45.957537 containerd[1544]: 2025-05-08 00:40:45.924 [INFO][5347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:45.957537 containerd[1544]: 2025-05-08 00:40:45.924 [INFO][5347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:40:45.957537 containerd[1544]: 2025-05-08 00:40:45.924 [INFO][5347] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:40:45.957537 containerd[1544]: 2025-05-08 00:40:45.929 [INFO][5347] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27" host="localhost" May 8 00:40:45.957537 containerd[1544]: 2025-05-08 00:40:45.931 [INFO][5347] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:40:45.957537 containerd[1544]: 2025-05-08 00:40:45.933 [INFO][5347] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:40:45.957537 containerd[1544]: 2025-05-08 00:40:45.934 [INFO][5347] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:40:45.957537 containerd[1544]: 2025-05-08 00:40:45.935 [INFO][5347] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:40:45.957537 containerd[1544]: 2025-05-08 00:40:45.935 [INFO][5347] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27" host="localhost" May 8 00:40:45.957537 containerd[1544]: 2025-05-08 00:40:45.936 [INFO][5347] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27 May 8 00:40:45.957537 containerd[1544]: 2025-05-08 00:40:45.938 [INFO][5347] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27" host="localhost" May 8 00:40:45.957537 containerd[1544]: 2025-05-08 00:40:45.941 [INFO][5347] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27" host="localhost" May 8 00:40:45.957537 containerd[1544]: 2025-05-08 00:40:45.941 [INFO][5347] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27" host="localhost" May 8 00:40:45.957537 containerd[1544]: 2025-05-08 00:40:45.941 [INFO][5347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:40:45.957537 containerd[1544]: 2025-05-08 00:40:45.941 [INFO][5347] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27" HandleID="k8s-pod-network.9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27" Workload="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0" May 8 00:40:45.959146 containerd[1544]: 2025-05-08 00:40:45.943 [INFO][5334] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27" Namespace="calico-apiserver" Pod="calico-apiserver-6c849db6d6-wzfw7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0", GenerateName:"calico-apiserver-6c849db6d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c48551f-d382-48fc-8b2c-4049dd697e7b", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c849db6d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c849db6d6-wzfw7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedd93c7c832", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:45.959146 containerd[1544]: 2025-05-08 00:40:45.943 [INFO][5334] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27" Namespace="calico-apiserver" Pod="calico-apiserver-6c849db6d6-wzfw7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0" May 8 00:40:45.959146 containerd[1544]: 2025-05-08 00:40:45.943 [INFO][5334] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliedd93c7c832 ContainerID="9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27" Namespace="calico-apiserver" Pod="calico-apiserver-6c849db6d6-wzfw7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0" May 8 00:40:45.959146 containerd[1544]: 2025-05-08 00:40:45.945 [INFO][5334] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27" Namespace="calico-apiserver" Pod="calico-apiserver-6c849db6d6-wzfw7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0" May 8 00:40:45.959146 containerd[1544]: 2025-05-08 00:40:45.946 [INFO][5334] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27" Namespace="calico-apiserver" Pod="calico-apiserver-6c849db6d6-wzfw7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0", GenerateName:"calico-apiserver-6c849db6d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c48551f-d382-48fc-8b2c-4049dd697e7b", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c849db6d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27", Pod:"calico-apiserver-6c849db6d6-wzfw7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedd93c7c832", MAC:"12:35:1c:80:9b:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:45.959146 containerd[1544]: 2025-05-08 00:40:45.955 [INFO][5334] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27" Namespace="calico-apiserver" Pod="calico-apiserver-6c849db6d6-wzfw7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0" May 8 00:40:45.976097 containerd[1544]: time="2025-05-08T00:40:45.975791391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:45.976097 containerd[1544]: time="2025-05-08T00:40:45.975831519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:45.976097 containerd[1544]: time="2025-05-08T00:40:45.975842575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:45.976097 containerd[1544]: time="2025-05-08T00:40:45.975889962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:46.020057 systemd[1]: Started cri-containerd-9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27.scope - libcontainer container 9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27. 
May 8 00:40:46.029734 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:46.050242 containerd[1544]: time="2025-05-08T00:40:46.050222836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c849db6d6-wzfw7,Uid:5c48551f-d382-48fc-8b2c-4049dd697e7b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27\"" May 8 00:40:46.408170 systemd[1]: Started sshd@10-139.178.70.100:22-139.178.68.195:49280.service - OpenSSH per-connection server daemon (139.178.68.195:49280). May 8 00:40:46.453184 systemd-networkd[1454]: vxlan.calico: Gained IPv6LL May 8 00:40:46.467424 sshd[5405]: Accepted publickey for core from 139.178.68.195 port 49280 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:40:46.469090 sshd[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:46.471996 systemd-logind[1518]: New session 12 of user core. May 8 00:40:46.477267 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:40:46.937995 sshd[5405]: pam_unix(sshd:session): session closed for user core May 8 00:40:46.943672 systemd[1]: sshd@10-139.178.70.100:22-139.178.68.195:49280.service: Deactivated successfully. May 8 00:40:46.945074 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:40:46.945757 systemd-logind[1518]: Session 12 logged out. Waiting for processes to exit. May 8 00:40:46.951177 systemd[1]: Started sshd@11-139.178.70.100:22-139.178.68.195:49284.service - OpenSSH per-connection server daemon (139.178.68.195:49284). May 8 00:40:46.952446 systemd-logind[1518]: Removed session 12. May 8 00:40:46.986037 sshd[5420]: Accepted publickey for core from 139.178.68.195 port 49284 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:40:46.986383 sshd[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:46.989254 systemd-logind[1518]: New session 13 of user core. May 8 00:40:46.994059 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:40:47.143590 sshd[5420]: pam_unix(sshd:session): session closed for user core May 8 00:40:47.150305 systemd[1]: sshd@11-139.178.70.100:22-139.178.68.195:49284.service: Deactivated successfully. May 8 00:40:47.152515 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:40:47.154046 systemd-logind[1518]: Session 13 logged out. Waiting for processes to exit. May 8 00:40:47.160431 systemd[1]: Started sshd@12-139.178.70.100:22-139.178.68.195:49298.service - OpenSSH per-connection server daemon (139.178.68.195:49298). May 8 00:40:47.165034 systemd-logind[1518]: Removed session 13. May 8 00:40:47.198025 sshd[5431]: Accepted publickey for core from 139.178.68.195 port 49298 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:40:47.198935 sshd[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:47.201519 systemd-logind[1518]: New session 14 of user core. May 8 00:40:47.206105 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:40:47.334593 sshd[5431]: pam_unix(sshd:session): session closed for user core May 8 00:40:47.336714 systemd[1]: sshd@12-139.178.70.100:22-139.178.68.195:49298.service: Deactivated successfully. May 8 00:40:47.338315 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:40:47.338724 systemd-logind[1518]: Session 14 logged out. 
Waiting for processes to exit. May 8 00:40:47.339455 systemd-logind[1518]: Removed session 14. May 8 00:40:47.413058 systemd-networkd[1454]: caliedd93c7c832: Gained IPv6LL May 8 00:40:50.660470 containerd[1544]: time="2025-05-08T00:40:50.660290510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:50.673209 containerd[1544]: time="2025-05-08T00:40:50.673121410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 8 00:40:50.684762 containerd[1544]: time="2025-05-08T00:40:50.684690272Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:50.711497 containerd[1544]: time="2025-05-08T00:40:50.711409209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:50.720071 containerd[1544]: time="2025-05-08T00:40:50.712336307Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 5.792120383s" May 8 00:40:50.720071 containerd[1544]: time="2025-05-08T00:40:50.712356248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:40:50.720071 containerd[1544]: time="2025-05-08T00:40:50.713233490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 8 00:40:50.732414 containerd[1544]: time="2025-05-08T00:40:50.732382709Z" level=info msg="CreateContainer within sandbox \"fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:40:51.887629 containerd[1544]: time="2025-05-08T00:40:51.887591489Z" level=info msg="CreateContainer within sandbox \"fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4b43f309007f67a48ddc57214f8cf84b077ca01a859480f8052e9bb42c9f30a9\"" May 8 00:40:51.888404 containerd[1544]: time="2025-05-08T00:40:51.888345022Z" level=info msg="StartContainer for \"4b43f309007f67a48ddc57214f8cf84b077ca01a859480f8052e9bb42c9f30a9\"" May 8 00:40:51.957144 systemd[1]: Started cri-containerd-4b43f309007f67a48ddc57214f8cf84b077ca01a859480f8052e9bb42c9f30a9.scope - libcontainer container 4b43f309007f67a48ddc57214f8cf84b077ca01a859480f8052e9bb42c9f30a9. May 8 00:40:51.995472 containerd[1544]: time="2025-05-08T00:40:51.995435832Z" level=info msg="StartContainer for \"4b43f309007f67a48ddc57214f8cf84b077ca01a859480f8052e9bb42c9f30a9\" returns successfully" May 8 00:40:52.348916 systemd[1]: Started sshd@13-139.178.70.100:22-139.178.68.195:49302.service - OpenSSH per-connection server daemon (139.178.68.195:49302). 
May 8 00:40:52.665205 sshd[5508]: Accepted publickey for core from 139.178.68.195 port 49302 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:40:52.679665 sshd[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:52.689819 systemd-logind[1518]: New session 15 of user core. May 8 00:40:52.694232 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 00:40:52.776198 containerd[1544]: time="2025-05-08T00:40:52.776167810Z" level=info msg="StopPodSandbox for \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\"" May 8 00:40:52.900985 kubelet[2724]: I0508 00:40:52.900354 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c849db6d6-n9bs2" podStartSLOduration=68.332819958 podStartE2EDuration="1m14.90033878s" podCreationTimestamp="2025-05-08 00:39:38 +0000 UTC" firstStartedPulling="2025-05-08 00:40:44.145531308 +0000 UTC m=+76.495157907" lastFinishedPulling="2025-05-08 00:40:50.713050125 +0000 UTC m=+83.062676729" observedRunningTime="2025-05-08 00:40:52.428394438 +0000 UTC m=+84.778021047" watchObservedRunningTime="2025-05-08 00:40:52.90033878 +0000 UTC m=+85.249965382" May 8 00:40:53.045243 containerd[1544]: 2025-05-08 00:40:52.952 [INFO][5532] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" May 8 00:40:53.045243 containerd[1544]: 2025-05-08 00:40:52.954 [INFO][5532] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" iface="eth0" netns="/var/run/netns/cni-5dfa6ff3-0246-02bc-1154-30f8189db26f" May 8 00:40:53.045243 containerd[1544]: 2025-05-08 00:40:52.954 [INFO][5532] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" iface="eth0" netns="/var/run/netns/cni-5dfa6ff3-0246-02bc-1154-30f8189db26f" May 8 00:40:53.045243 containerd[1544]: 2025-05-08 00:40:52.954 [INFO][5532] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" iface="eth0" netns="/var/run/netns/cni-5dfa6ff3-0246-02bc-1154-30f8189db26f" May 8 00:40:53.045243 containerd[1544]: 2025-05-08 00:40:52.954 [INFO][5532] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" May 8 00:40:53.045243 containerd[1544]: 2025-05-08 00:40:52.954 [INFO][5532] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" May 8 00:40:53.045243 containerd[1544]: 2025-05-08 00:40:53.028 [INFO][5544] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" HandleID="k8s-pod-network.9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" Workload="localhost-k8s-coredns--668d6bf9bc--jq978-eth0" May 8 00:40:53.045243 containerd[1544]: 2025-05-08 00:40:53.029 [INFO][5544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:53.045243 containerd[1544]: 2025-05-08 00:40:53.029 [INFO][5544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:40:53.045243 containerd[1544]: 2025-05-08 00:40:53.038 [WARNING][5544] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" HandleID="k8s-pod-network.9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" Workload="localhost-k8s-coredns--668d6bf9bc--jq978-eth0" May 8 00:40:53.045243 containerd[1544]: 2025-05-08 00:40:53.038 [INFO][5544] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" HandleID="k8s-pod-network.9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" Workload="localhost-k8s-coredns--668d6bf9bc--jq978-eth0" May 8 00:40:53.045243 containerd[1544]: 2025-05-08 00:40:53.039 [INFO][5544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:53.045243 containerd[1544]: 2025-05-08 00:40:53.042 [INFO][5532] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" May 8 00:40:53.045243 containerd[1544]: time="2025-05-08T00:40:53.045234913Z" level=info msg="TearDown network for sandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\" successfully" May 8 00:40:53.050501 containerd[1544]: time="2025-05-08T00:40:53.045251530Z" level=info msg="StopPodSandbox for \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\" returns successfully" May 8 00:40:53.050501 containerd[1544]: time="2025-05-08T00:40:53.048021263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jq978,Uid:7cd62a62-c3b8-4d08-90a0-b53c0511c1f5,Namespace:kube-system,Attempt:1,}" May 8 00:40:53.048342 systemd[1]: run-netns-cni\x2d5dfa6ff3\x2d0246\x2d02bc\x2d1154\x2d30f8189db26f.mount: Deactivated successfully. May 8 00:40:53.104799 sshd[5508]: pam_unix(sshd:session): session closed for user core May 8 00:40:53.107727 systemd[1]: sshd@13-139.178.70.100:22-139.178.68.195:49302.service: Deactivated successfully. May 8 00:40:53.110416 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:40:53.112328 systemd-logind[1518]: Session 15 logged out. Waiting for processes to exit. May 8 00:40:53.112910 systemd-logind[1518]: Removed session 15. 
May 8 00:40:53.212560 systemd-networkd[1454]: cali50f1c447ce3: Link UP May 8 00:40:53.212741 systemd-networkd[1454]: cali50f1c447ce3: Gained carrier May 8 00:40:53.233187 containerd[1544]: 2025-05-08 00:40:53.124 [INFO][5555] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--jq978-eth0 coredns-668d6bf9bc- kube-system 7cd62a62-c3b8-4d08-90a0-b53c0511c1f5 1066 0 2025-05-08 00:39:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-jq978 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali50f1c447ce3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace" Namespace="kube-system" Pod="coredns-668d6bf9bc-jq978" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jq978-" May 8 00:40:53.233187 containerd[1544]: 2025-05-08 00:40:53.134 [INFO][5555] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace" Namespace="kube-system" Pod="coredns-668d6bf9bc-jq978" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jq978-eth0" May 8 00:40:53.233187 containerd[1544]: 2025-05-08 00:40:53.172 [INFO][5565] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace" HandleID="k8s-pod-network.935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace" Workload="localhost-k8s-coredns--668d6bf9bc--jq978-eth0" May 8 00:40:53.233187 containerd[1544]: 2025-05-08 00:40:53.177 [INFO][5565] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace" HandleID="k8s-pod-network.935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace" Workload="localhost-k8s-coredns--668d6bf9bc--jq978-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042c750), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-jq978", "timestamp":"2025-05-08 00:40:53.172688098 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:53.233187 containerd[1544]: 2025-05-08 00:40:53.177 [INFO][5565] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:53.233187 containerd[1544]: 2025-05-08 00:40:53.177 [INFO][5565] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:40:53.233187 containerd[1544]: 2025-05-08 00:40:53.177 [INFO][5565] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:40:53.233187 containerd[1544]: 2025-05-08 00:40:53.178 [INFO][5565] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace" host="localhost" May 8 00:40:53.233187 containerd[1544]: 2025-05-08 00:40:53.181 [INFO][5565] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:40:53.233187 containerd[1544]: 2025-05-08 00:40:53.185 [INFO][5565] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:40:53.233187 containerd[1544]: 2025-05-08 00:40:53.186 [INFO][5565] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:40:53.233187 containerd[1544]: 2025-05-08 00:40:53.188 [INFO][5565] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:40:53.233187 containerd[1544]: 2025-05-08 00:40:53.188 [INFO][5565] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace" host="localhost" May 8 00:40:53.233187 containerd[1544]: 2025-05-08 00:40:53.190 [INFO][5565] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace May 8 00:40:53.233187 containerd[1544]: 2025-05-08 00:40:53.195 [INFO][5565] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace" host="localhost" May 8 00:40:53.233187 containerd[1544]: 2025-05-08 00:40:53.201 [INFO][5565] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace" host="localhost" May 8 00:40:53.233187 containerd[1544]: 2025-05-08 00:40:53.201 [INFO][5565] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace" host="localhost" May 8 00:40:53.233187 containerd[1544]: 2025-05-08 00:40:53.201 [INFO][5565] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:40:53.233187 containerd[1544]: 2025-05-08 00:40:53.201 [INFO][5565] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace" HandleID="k8s-pod-network.935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace" Workload="localhost-k8s-coredns--668d6bf9bc--jq978-eth0" May 8 00:40:53.235546 containerd[1544]: 2025-05-08 00:40:53.205 [INFO][5555] cni-plugin/k8s.go 386: Populated endpoint ContainerID="935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace" Namespace="kube-system" Pod="coredns-668d6bf9bc-jq978" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jq978-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jq978-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7cd62a62-c3b8-4d08-90a0-b53c0511c1f5", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-jq978", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50f1c447ce3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:53.235546 containerd[1544]: 2025-05-08 00:40:53.205 [INFO][5555] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace" Namespace="kube-system" Pod="coredns-668d6bf9bc-jq978" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jq978-eth0" May 8 00:40:53.235546 containerd[1544]: 2025-05-08 00:40:53.205 [INFO][5555] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50f1c447ce3 ContainerID="935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace" Namespace="kube-system" Pod="coredns-668d6bf9bc-jq978" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jq978-eth0" May 8 00:40:53.235546 containerd[1544]: 2025-05-08 00:40:53.216 [INFO][5555] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace" Namespace="kube-system" Pod="coredns-668d6bf9bc-jq978" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jq978-eth0" May 8 00:40:53.235546 containerd[1544]: 2025-05-08 00:40:53.217 
[INFO][5555] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace" Namespace="kube-system" Pod="coredns-668d6bf9bc-jq978" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jq978-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jq978-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7cd62a62-c3b8-4d08-90a0-b53c0511c1f5", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace", Pod:"coredns-668d6bf9bc-jq978", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50f1c447ce3", MAC:"26:79:34:33:0d:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:53.235546 containerd[1544]: 2025-05-08 00:40:53.230 [INFO][5555] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace" Namespace="kube-system" Pod="coredns-668d6bf9bc-jq978" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jq978-eth0" May 8 00:40:53.283484 containerd[1544]: time="2025-05-08T00:40:53.283409061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:53.285035 containerd[1544]: time="2025-05-08T00:40:53.284970061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:53.285035 containerd[1544]: time="2025-05-08T00:40:53.284988504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:53.285276 containerd[1544]: time="2025-05-08T00:40:53.285233834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:53.312238 systemd[1]: Started cri-containerd-935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace.scope - libcontainer container 935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace. 
May 8 00:40:53.340476 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:53.388845 containerd[1544]: time="2025-05-08T00:40:53.388811560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jq978,Uid:7cd62a62-c3b8-4d08-90a0-b53c0511c1f5,Namespace:kube-system,Attempt:1,} returns sandbox id \"935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace\"" May 8 00:40:53.502639 containerd[1544]: time="2025-05-08T00:40:53.502363496Z" level=info msg="CreateContainer within sandbox \"935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:40:53.724441 containerd[1544]: time="2025-05-08T00:40:53.724164492Z" level=info msg="CreateContainer within sandbox \"935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"46a19205a2bb4c685606a08d54c11f86f0c366ad66e7d7f724503e4a29eff4a2\"" May 8 00:40:53.725053 containerd[1544]: time="2025-05-08T00:40:53.725033833Z" level=info msg="StartContainer for \"46a19205a2bb4c685606a08d54c11f86f0c366ad66e7d7f724503e4a29eff4a2\"" May 8 00:40:53.749129 systemd[1]: Started cri-containerd-46a19205a2bb4c685606a08d54c11f86f0c366ad66e7d7f724503e4a29eff4a2.scope - libcontainer container 46a19205a2bb4c685606a08d54c11f86f0c366ad66e7d7f724503e4a29eff4a2. May 8 00:40:53.890382 containerd[1544]: time="2025-05-08T00:40:53.890343340Z" level=info msg="StartContainer for \"46a19205a2bb4c685606a08d54c11f86f0c366ad66e7d7f724503e4a29eff4a2\" returns successfully" May 8 00:40:54.174313 containerd[1544]: time="2025-05-08T00:40:54.173980560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:54.183978 containerd[1544]: time="2025-05-08T00:40:54.183917715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 8 00:40:54.190917 containerd[1544]: time="2025-05-08T00:40:54.190872460Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:54.194721 containerd[1544]: time="2025-05-08T00:40:54.194600851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:54.195524 containerd[1544]: time="2025-05-08T00:40:54.195071479Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 3.481812016s" May 8 00:40:54.195524 containerd[1544]: time="2025-05-08T00:40:54.195091560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 8 00:40:54.196169 containerd[1544]: time="2025-05-08T00:40:54.196155719Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:40:54.199548 containerd[1544]: time="2025-05-08T00:40:54.199341007Z" level=info msg="CreateContainer within sandbox \"e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 8 00:40:54.323721 containerd[1544]: time="2025-05-08T00:40:54.323660011Z" level=info msg="CreateContainer within sandbox \"e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4aa45f7f671fdc3debcc67c24c6ef70ddc85f8163de35fc2d6ad1d16be7f2624\"" May 8 00:40:54.324604 containerd[1544]: time="2025-05-08T00:40:54.324246715Z" level=info msg="StartContainer for \"4aa45f7f671fdc3debcc67c24c6ef70ddc85f8163de35fc2d6ad1d16be7f2624\"" May 8 00:40:54.352054 systemd[1]: Started cri-containerd-4aa45f7f671fdc3debcc67c24c6ef70ddc85f8163de35fc2d6ad1d16be7f2624.scope - libcontainer container 4aa45f7f671fdc3debcc67c24c6ef70ddc85f8163de35fc2d6ad1d16be7f2624. May 8 00:40:54.383905 containerd[1544]: time="2025-05-08T00:40:54.383883315Z" level=info msg="StartContainer for \"4aa45f7f671fdc3debcc67c24c6ef70ddc85f8163de35fc2d6ad1d16be7f2624\" returns successfully" May 8 00:40:54.389036 systemd-networkd[1454]: cali50f1c447ce3: Gained IPv6LL May 8 00:40:54.468447 kubelet[2724]: I0508 00:40:54.468128 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xqv5x" podStartSLOduration=64.552301064 podStartE2EDuration="1m15.468112426s" podCreationTimestamp="2025-05-08 00:39:39 +0000 UTC" firstStartedPulling="2025-05-08 00:40:43.280251021 +0000 UTC m=+75.629877619" lastFinishedPulling="2025-05-08 00:40:54.196062375 +0000 UTC m=+86.545688981" observedRunningTime="2025-05-08 00:40:54.467015805 +0000 UTC m=+86.816642416" watchObservedRunningTime="2025-05-08 00:40:54.468112426 +0000 UTC m=+86.817739032" May 8 00:40:54.468447 kubelet[2724]: I0508 00:40:54.468204 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jq978" podStartSLOduration=81.468199231 podStartE2EDuration="1m21.468199231s" podCreationTimestamp="2025-05-08 00:39:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:54.441980743 +0000 UTC m=+86.791607365" watchObservedRunningTime="2025-05-08 00:40:54.468199231 +0000 UTC m=+86.817825836" May 8 00:40:54.591430 containerd[1544]: time="2025-05-08T00:40:54.591391057Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:54.591908 containerd[1544]: time="2025-05-08T00:40:54.591875786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 8 00:40:54.593435 containerd[1544]: time="2025-05-08T00:40:54.593408693Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 397.061762ms" May 8 00:40:54.593435 containerd[1544]: time="2025-05-08T00:40:54.593429339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns 
image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:40:54.595638 containerd[1544]: time="2025-05-08T00:40:54.595476334Z" level=info msg="CreateContainer within sandbox \"9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:40:54.601529 containerd[1544]: time="2025-05-08T00:40:54.601500160Z" level=info msg="CreateContainer within sandbox \"9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"56b767a55d39b95030bbb24042c56501b055b3034e2d291325ccbcf5a78e34d9\"" May 8 00:40:54.602852 containerd[1544]: time="2025-05-08T00:40:54.602825619Z" level=info msg="StartContainer for \"56b767a55d39b95030bbb24042c56501b055b3034e2d291325ccbcf5a78e34d9\"" May 8 00:40:54.627092 systemd[1]: Started cri-containerd-56b767a55d39b95030bbb24042c56501b055b3034e2d291325ccbcf5a78e34d9.scope - libcontainer container 56b767a55d39b95030bbb24042c56501b055b3034e2d291325ccbcf5a78e34d9. May 8 00:40:54.682205 containerd[1544]: time="2025-05-08T00:40:54.682171516Z" level=info msg="StartContainer for \"56b767a55d39b95030bbb24042c56501b055b3034e2d291325ccbcf5a78e34d9\" returns successfully" May 8 00:40:54.776467 containerd[1544]: time="2025-05-08T00:40:54.776399353Z" level=info msg="StopPodSandbox for \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\"" May 8 00:40:55.236516 containerd[1544]: 2025-05-08 00:40:54.998 [INFO][5759] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" May 8 00:40:55.236516 containerd[1544]: 2025-05-08 00:40:54.998 [INFO][5759] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" iface="eth0" netns="/var/run/netns/cni-9c7d3c68-e15c-757f-bd04-598d3f2c793b" May 8 00:40:55.236516 containerd[1544]: 2025-05-08 00:40:54.998 [INFO][5759] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" iface="eth0" netns="/var/run/netns/cni-9c7d3c68-e15c-757f-bd04-598d3f2c793b" May 8 00:40:55.236516 containerd[1544]: 2025-05-08 00:40:54.999 [INFO][5759] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" iface="eth0" netns="/var/run/netns/cni-9c7d3c68-e15c-757f-bd04-598d3f2c793b" May 8 00:40:55.236516 containerd[1544]: 2025-05-08 00:40:54.999 [INFO][5759] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" May 8 00:40:55.236516 containerd[1544]: 2025-05-08 00:40:54.999 [INFO][5759] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" May 8 00:40:55.236516 containerd[1544]: 2025-05-08 00:40:55.208 [INFO][5766] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" HandleID="k8s-pod-network.53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" Workload="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0" May 8 00:40:55.236516 containerd[1544]: 2025-05-08 00:40:55.208 [INFO][5766] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 8 00:40:55.236516 containerd[1544]: 2025-05-08 00:40:55.208 [INFO][5766] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:55.236516 containerd[1544]: 2025-05-08 00:40:55.229 [WARNING][5766] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" HandleID="k8s-pod-network.53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" Workload="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0" May 8 00:40:55.236516 containerd[1544]: 2025-05-08 00:40:55.230 [INFO][5766] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" HandleID="k8s-pod-network.53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" Workload="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0" May 8 00:40:55.236516 containerd[1544]: 2025-05-08 00:40:55.234 [INFO][5766] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:55.236516 containerd[1544]: 2025-05-08 00:40:55.235 [INFO][5759] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" May 8 00:40:55.257194 containerd[1544]: time="2025-05-08T00:40:55.238016150Z" level=info msg="TearDown network for sandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\" successfully" May 8 00:40:55.257194 containerd[1544]: time="2025-05-08T00:40:55.238033639Z" level=info msg="StopPodSandbox for \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\" returns successfully" May 8 00:40:55.257194 containerd[1544]: time="2025-05-08T00:40:55.238691922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bcbf44d57-tjjwh,Uid:b0cb28c6-493c-4ec2-95eb-08870a5239cb,Namespace:calico-system,Attempt:1,}" May 8 00:40:55.238164 systemd[1]: run-netns-cni\x2d9c7d3c68\x2de15c\x2d757f\x2dbd04\x2d598d3f2c793b.mount: Deactivated successfully. 
May 8 00:40:55.440570 systemd-networkd[1454]: cali1413b71bf3f: Link UP May 8 00:40:55.441328 systemd-networkd[1454]: cali1413b71bf3f: Gained carrier May 8 00:40:55.463208 containerd[1544]: 2025-05-08 00:40:55.337 [INFO][5773] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0 calico-kube-controllers-bcbf44d57- calico-system b0cb28c6-493c-4ec2-95eb-08870a5239cb 1092 0 2025-05-08 00:39:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:bcbf44d57 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-bcbf44d57-tjjwh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1413b71bf3f [] []}} ContainerID="844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae" Namespace="calico-system" Pod="calico-kube-controllers-bcbf44d57-tjjwh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-" May 8 00:40:55.463208 containerd[1544]: 2025-05-08 00:40:55.338 [INFO][5773] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae" Namespace="calico-system" Pod="calico-kube-controllers-bcbf44d57-tjjwh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0" May 8 00:40:55.463208 containerd[1544]: 2025-05-08 00:40:55.385 [INFO][5784] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae" HandleID="k8s-pod-network.844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae" Workload="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0" May 8 00:40:55.463208 containerd[1544]: 2025-05-08 00:40:55.394 [INFO][5784] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae" HandleID="k8s-pod-network.844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae" Workload="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050de0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-bcbf44d57-tjjwh", "timestamp":"2025-05-08 00:40:55.3853757 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:55.463208 containerd[1544]: 2025-05-08 00:40:55.394 [INFO][5784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:55.463208 containerd[1544]: 2025-05-08 00:40:55.394 [INFO][5784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:40:55.463208 containerd[1544]: 2025-05-08 00:40:55.394 [INFO][5784] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:40:55.463208 containerd[1544]: 2025-05-08 00:40:55.396 [INFO][5784] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae" host="localhost" May 8 00:40:55.463208 containerd[1544]: 2025-05-08 00:40:55.400 [INFO][5784] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:40:55.463208 containerd[1544]: 2025-05-08 00:40:55.403 [INFO][5784] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:40:55.463208 containerd[1544]: 2025-05-08 00:40:55.412 [INFO][5784] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:40:55.463208 containerd[1544]: 2025-05-08 00:40:55.417 [INFO][5784] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:40:55.463208 containerd[1544]: 2025-05-08 00:40:55.417 [INFO][5784] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae" host="localhost" May 8 00:40:55.463208 containerd[1544]: 2025-05-08 00:40:55.420 [INFO][5784] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae May 8 00:40:55.463208 containerd[1544]: 2025-05-08 00:40:55.423 [INFO][5784] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae" host="localhost" May 8 00:40:55.463208 containerd[1544]: 2025-05-08 00:40:55.434 [INFO][5784] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae" host="localhost" May 8 00:40:55.463208 containerd[1544]: 2025-05-08 00:40:55.434 [INFO][5784] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae" host="localhost" May 8 00:40:55.463208 containerd[1544]: 2025-05-08 00:40:55.434 [INFO][5784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:40:55.463208 containerd[1544]: 2025-05-08 00:40:55.434 [INFO][5784] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae" HandleID="k8s-pod-network.844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae" Workload="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0" May 8 00:40:55.478597 containerd[1544]: 2025-05-08 00:40:55.436 [INFO][5773] cni-plugin/k8s.go 386: Populated endpoint ContainerID="844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae" Namespace="calico-system" Pod="calico-kube-controllers-bcbf44d57-tjjwh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0", GenerateName:"calico-kube-controllers-bcbf44d57-", Namespace:"calico-system", SelfLink:"", UID:"b0cb28c6-493c-4ec2-95eb-08870a5239cb", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bcbf44d57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-bcbf44d57-tjjwh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1413b71bf3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:55.478597 containerd[1544]: 2025-05-08 00:40:55.436 [INFO][5773] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae" Namespace="calico-system" Pod="calico-kube-controllers-bcbf44d57-tjjwh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0" May 8 00:40:55.478597 containerd[1544]: 2025-05-08 00:40:55.436 [INFO][5773] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1413b71bf3f ContainerID="844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae" Namespace="calico-system" Pod="calico-kube-controllers-bcbf44d57-tjjwh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0" May 8 00:40:55.478597 containerd[1544]: 2025-05-08 00:40:55.440 [INFO][5773] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae" Namespace="calico-system" Pod="calico-kube-controllers-bcbf44d57-tjjwh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0" May 8 00:40:55.478597 containerd[1544]: 2025-05-08 00:40:55.442 [INFO][5773] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae" Namespace="calico-system" Pod="calico-kube-controllers-bcbf44d57-tjjwh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0", GenerateName:"calico-kube-controllers-bcbf44d57-", Namespace:"calico-system", SelfLink:"", UID:"b0cb28c6-493c-4ec2-95eb-08870a5239cb", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bcbf44d57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae", Pod:"calico-kube-controllers-bcbf44d57-tjjwh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1413b71bf3f", MAC:"0a:95:67:c0:9c:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:55.478597 containerd[1544]: 2025-05-08 00:40:55.458 [INFO][5773] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae" Namespace="calico-system" Pod="calico-kube-controllers-bcbf44d57-tjjwh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0" May 8 00:40:55.497099 containerd[1544]: time="2025-05-08T00:40:55.496900895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:55.497099 containerd[1544]: time="2025-05-08T00:40:55.496966159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:55.497099 containerd[1544]: time="2025-05-08T00:40:55.496978435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:55.498564 containerd[1544]: time="2025-05-08T00:40:55.497086608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:55.520106 systemd[1]: Started cri-containerd-844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae.scope - libcontainer container 844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae. 
May 8 00:40:55.534935 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:55.582035 containerd[1544]: time="2025-05-08T00:40:55.582005862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bcbf44d57-tjjwh,Uid:b0cb28c6-493c-4ec2-95eb-08870a5239cb,Namespace:calico-system,Attempt:1,} returns sandbox id \"844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae\"" May 8 00:40:55.848081 kubelet[2724]: I0508 00:40:55.847353 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c849db6d6-wzfw7" podStartSLOduration=69.304430822 podStartE2EDuration="1m17.847335088s" podCreationTimestamp="2025-05-08 00:39:38 +0000 UTC" firstStartedPulling="2025-05-08 00:40:46.050994677 +0000 UTC m=+78.400621274" lastFinishedPulling="2025-05-08 00:40:54.593898941 +0000 UTC m=+86.943525540" observedRunningTime="2025-05-08 00:40:55.583978057 +0000 UTC m=+87.933604682" watchObservedRunningTime="2025-05-08 00:40:55.847335088 +0000 UTC m=+88.196961697" May 8 00:40:55.856638 containerd[1544]: time="2025-05-08T00:40:55.856398170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 8 00:40:56.229445 kubelet[2724]: I0508 00:40:56.201938 2724 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 8 00:40:56.229445 kubelet[2724]: I0508 00:40:56.229414 2724 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 8 00:40:57.205130 systemd-networkd[1454]: cali1413b71bf3f: Gained IPv6LL May 8 00:40:58.144465 systemd[1]: Started sshd@14-139.178.70.100:22-139.178.68.195:43802.service - OpenSSH per-connection server daemon (139.178.68.195:43802). 
May 8 00:40:58.451931 containerd[1544]: time="2025-05-08T00:40:58.451764594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:58.502260 containerd[1544]: time="2025-05-08T00:40:58.471445627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 8 00:40:58.502260 containerd[1544]: time="2025-05-08T00:40:58.475937436Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:58.510865 containerd[1544]: time="2025-05-08T00:40:58.510824649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:58.521387 containerd[1544]: time="2025-05-08T00:40:58.521366913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.664921189s" May 8 00:40:58.529327 containerd[1544]: time="2025-05-08T00:40:58.521389427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 8 00:40:58.888571 sshd[5863]: Accepted publickey for core from 139.178.68.195 port 43802 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:40:58.895650 sshd[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:58.908929 systemd-logind[1518]: New session 16 of user core. May 8 00:40:58.913053 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 00:40:59.420044 containerd[1544]: time="2025-05-08T00:40:59.419920019Z" level=info msg="CreateContainer within sandbox \"844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 8 00:40:59.732972 containerd[1544]: time="2025-05-08T00:40:59.730549178Z" level=info msg="CreateContainer within sandbox \"844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7c6ed22393d2e9a3a8cb3139e592bbdc8a1ed027c3c17996514e90acf91d4201\"" May 8 00:40:59.854893 containerd[1544]: time="2025-05-08T00:40:59.854836634Z" level=info msg="StartContainer for \"7c6ed22393d2e9a3a8cb3139e592bbdc8a1ed027c3c17996514e90acf91d4201\"" May 8 00:41:00.024038 systemd[1]: Started cri-containerd-7c6ed22393d2e9a3a8cb3139e592bbdc8a1ed027c3c17996514e90acf91d4201.scope - libcontainer container 7c6ed22393d2e9a3a8cb3139e592bbdc8a1ed027c3c17996514e90acf91d4201. 
May 8 00:41:00.070255 containerd[1544]: time="2025-05-08T00:41:00.070119882Z" level=info msg="StartContainer for \"7c6ed22393d2e9a3a8cb3139e592bbdc8a1ed027c3c17996514e90acf91d4201\" returns successfully" May 8 00:41:00.618936 kubelet[2724]: I0508 00:41:00.604981 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-bcbf44d57-tjjwh" podStartSLOduration=78.389536098 podStartE2EDuration="1m21.585349697s" podCreationTimestamp="2025-05-08 00:39:39 +0000 UTC" firstStartedPulling="2025-05-08 00:40:55.850690136 +0000 UTC m=+88.200316738" lastFinishedPulling="2025-05-08 00:40:59.046503734 +0000 UTC m=+91.396130337" observedRunningTime="2025-05-08 00:41:00.562082708 +0000 UTC m=+92.911709331" watchObservedRunningTime="2025-05-08 00:41:00.585349697 +0000 UTC m=+92.934976301" May 8 00:41:01.283623 sshd[5863]: pam_unix(sshd:session): session closed for user core May 8 00:41:01.310884 systemd[1]: sshd@14-139.178.70.100:22-139.178.68.195:43802.service: Deactivated successfully. May 8 00:41:01.312294 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:41:01.313356 systemd-logind[1518]: Session 16 logged out. Waiting for processes to exit. May 8 00:41:01.314057 systemd-logind[1518]: Removed session 16. May 8 00:41:06.293092 systemd[1]: Started sshd@15-139.178.70.100:22-139.178.68.195:37608.service - OpenSSH per-connection server daemon (139.178.68.195:37608). May 8 00:41:06.456772 sshd[5953]: Accepted publickey for core from 139.178.68.195 port 37608 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:41:06.458048 sshd[5953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:06.462865 systemd-logind[1518]: New session 17 of user core. May 8 00:41:06.476206 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:41:07.169320 sshd[5953]: pam_unix(sshd:session): session closed for user core May 8 00:41:07.182214 systemd[1]: Started sshd@16-139.178.70.100:22-139.178.68.195:37616.service - OpenSSH per-connection server daemon (139.178.68.195:37616). May 8 00:41:07.182660 systemd[1]: sshd@15-139.178.70.100:22-139.178.68.195:37608.service: Deactivated successfully. May 8 00:41:07.185435 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:41:07.187680 systemd-logind[1518]: Session 17 logged out. Waiting for processes to exit. May 8 00:41:07.188967 systemd-logind[1518]: Removed session 17. May 8 00:41:07.271002 sshd[5963]: Accepted publickey for core from 139.178.68.195 port 37616 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:41:07.271966 sshd[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:07.274887 systemd-logind[1518]: New session 18 of user core. May 8 00:41:07.282103 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 00:41:08.934785 sshd[5963]: pam_unix(sshd:session): session closed for user core May 8 00:41:08.941401 systemd[1]: sshd@16-139.178.70.100:22-139.178.68.195:37616.service: Deactivated successfully. May 8 00:41:08.943822 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:41:08.945137 systemd-logind[1518]: Session 18 logged out. Waiting for processes to exit. May 8 00:41:08.949229 systemd[1]: Started sshd@17-139.178.70.100:22-139.178.68.195:37620.service - OpenSSH per-connection server daemon (139.178.68.195:37620). May 8 00:41:08.951085 systemd-logind[1518]: Removed session 18. 
May 8 00:41:09.236915 sshd[5975]: Accepted publickey for core from 139.178.68.195 port 37620 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:41:09.238216 sshd[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:09.242223 systemd-logind[1518]: New session 19 of user core. May 8 00:41:09.246059 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 00:41:10.543649 systemd[1]: Started sshd@18-139.178.70.100:22-139.178.68.195:37624.service - OpenSSH per-connection server daemon (139.178.68.195:37624). May 8 00:41:10.551705 sshd[5975]: pam_unix(sshd:session): session closed for user core May 8 00:41:10.561038 systemd-logind[1518]: Session 19 logged out. Waiting for processes to exit. May 8 00:41:10.561458 systemd[1]: sshd@17-139.178.70.100:22-139.178.68.195:37620.service: Deactivated successfully. May 8 00:41:10.562724 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:41:10.563899 systemd-logind[1518]: Removed session 19. May 8 00:41:10.824979 sshd[5993]: Accepted publickey for core from 139.178.68.195 port 37624 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:41:10.826561 sshd[5993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:10.832229 systemd-logind[1518]: New session 20 of user core. May 8 00:41:10.837192 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 00:41:11.377090 sshd[5993]: pam_unix(sshd:session): session closed for user core May 8 00:41:11.383333 systemd[1]: sshd@18-139.178.70.100:22-139.178.68.195:37624.service: Deactivated successfully. May 8 00:41:11.384884 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:41:11.386055 systemd-logind[1518]: Session 20 logged out. Waiting for processes to exit. May 8 00:41:11.393729 systemd[1]: Started sshd@19-139.178.70.100:22-139.178.68.195:37632.service - OpenSSH per-connection server daemon (139.178.68.195:37632). May 8 00:41:11.396153 systemd-logind[1518]: Removed session 20. May 8 00:41:11.453890 sshd[6006]: Accepted publickey for core from 139.178.68.195 port 37632 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:41:11.454931 sshd[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:11.459463 systemd-logind[1518]: New session 21 of user core. May 8 00:41:11.465216 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 00:41:11.595849 sshd[6006]: pam_unix(sshd:session): session closed for user core May 8 00:41:11.599141 systemd[1]: sshd@19-139.178.70.100:22-139.178.68.195:37632.service: Deactivated successfully. May 8 00:41:11.601377 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:41:11.602200 systemd-logind[1518]: Session 21 logged out. Waiting for processes to exit. May 8 00:41:11.602866 systemd-logind[1518]: Removed session 21. May 8 00:41:16.606801 systemd[1]: Started sshd@20-139.178.70.100:22-139.178.68.195:59520.service - OpenSSH per-connection server daemon (139.178.68.195:59520). May 8 00:41:16.724297 sshd[6051]: Accepted publickey for core from 139.178.68.195 port 59520 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:41:16.725477 sshd[6051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:16.728222 systemd-logind[1518]: New session 22 of user core. May 8 00:41:16.732056 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 8 00:41:17.062814 sshd[6051]: pam_unix(sshd:session): session closed for user core May 8 00:41:17.065835 systemd-logind[1518]: Session 22 logged out. Waiting for processes to exit. May 8 00:41:17.066557 systemd[1]: sshd@20-139.178.70.100:22-139.178.68.195:59520.service: Deactivated successfully. May 8 00:41:17.068862 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:41:17.069929 systemd-logind[1518]: Removed session 22. May 8 00:41:22.075900 systemd[1]: Started sshd@21-139.178.70.100:22-139.178.68.195:59522.service - OpenSSH per-connection server daemon (139.178.68.195:59522). May 8 00:41:22.116586 sshd[6064]: Accepted publickey for core from 139.178.68.195 port 59522 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:41:22.117836 sshd[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:22.120926 systemd-logind[1518]: New session 23 of user core. May 8 00:41:22.126125 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 00:41:22.262127 sshd[6064]: pam_unix(sshd:session): session closed for user core May 8 00:41:22.264528 systemd[1]: sshd@21-139.178.70.100:22-139.178.68.195:59522.service: Deactivated successfully. May 8 00:41:22.266303 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:41:22.267560 systemd-logind[1518]: Session 23 logged out. Waiting for processes to exit. May 8 00:41:22.268674 systemd-logind[1518]: Removed session 23. May 8 00:41:27.277124 systemd[1]: Started sshd@22-139.178.70.100:22-139.178.68.195:58438.service - OpenSSH per-connection server daemon (139.178.68.195:58438). May 8 00:41:27.496842 sshd[6083]: Accepted publickey for core from 139.178.68.195 port 58438 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:41:27.498345 sshd[6083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:27.503854 systemd-logind[1518]: New session 24 of user core. May 8 00:41:27.510221 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 00:41:27.950566 containerd[1544]: time="2025-05-08T00:41:27.950510983Z" level=info msg="StopPodSandbox for \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\"" May 8 00:41:27.956200 containerd[1544]: time="2025-05-08T00:41:27.956165133Z" level=info msg="TearDown network for sandbox \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\" successfully" May 8 00:41:27.956893 containerd[1544]: time="2025-05-08T00:41:27.956777142Z" level=info msg="StopPodSandbox for \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\" returns successfully" May 8 00:41:28.012530 sshd[6083]: pam_unix(sshd:session): session closed for user core May 8 00:41:28.015339 systemd[1]: sshd@22-139.178.70.100:22-139.178.68.195:58438.service: Deactivated successfully. May 8 00:41:28.015473 systemd-logind[1518]: Session 24 logged out. Waiting for processes to exit. May 8 00:41:28.016863 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:41:28.019198 systemd-logind[1518]: Removed session 24. 
May 8 00:41:28.051486 containerd[1544]: time="2025-05-08T00:41:28.051448269Z" level=info msg="RemovePodSandbox for \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\"" May 8 00:41:28.052725 containerd[1544]: time="2025-05-08T00:41:28.052707021Z" level=info msg="Forcibly stopping sandbox \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\"" May 8 00:41:28.052774 containerd[1544]: time="2025-05-08T00:41:28.052762229Z" level=info msg="TearDown network for sandbox \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\" successfully" May 8 00:41:28.095327 containerd[1544]: time="2025-05-08T00:41:28.095289208Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:28.101284 containerd[1544]: time="2025-05-08T00:41:28.101245375Z" level=info msg="RemovePodSandbox \"00b39b0ece04afb0b2520cb0d54a7f14d5e2cb16e43b46b624c2aaeb7ea01928\" returns successfully" May 8 00:41:28.116256 containerd[1544]: time="2025-05-08T00:41:28.110195224Z" level=info msg="StopPodSandbox for \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\"" May 8 00:41:29.110725 containerd[1544]: 2025-05-08 00:41:28.681 [WARNING][6121] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0", GenerateName:"calico-kube-controllers-bcbf44d57-", Namespace:"calico-system", SelfLink:"", UID:"b0cb28c6-493c-4ec2-95eb-08870a5239cb", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bcbf44d57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae", Pod:"calico-kube-controllers-bcbf44d57-tjjwh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1413b71bf3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:29.110725 containerd[1544]: 2025-05-08 00:41:28.684 [INFO][6121] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" May 8 00:41:29.110725 containerd[1544]: 2025-05-08 00:41:28.684 [INFO][6121] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" iface="eth0" netns="" May 8 00:41:29.110725 containerd[1544]: 2025-05-08 00:41:28.684 [INFO][6121] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" May 8 00:41:29.110725 containerd[1544]: 2025-05-08 00:41:28.684 [INFO][6121] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" May 8 00:41:29.110725 containerd[1544]: 2025-05-08 00:41:29.097 [INFO][6128] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" HandleID="k8s-pod-network.53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" Workload="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0" May 8 00:41:29.110725 containerd[1544]: 2025-05-08 00:41:29.098 [INFO][6128] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:29.110725 containerd[1544]: 2025-05-08 00:41:29.098 [INFO][6128] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:29.110725 containerd[1544]: 2025-05-08 00:41:29.107 [WARNING][6128] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" HandleID="k8s-pod-network.53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" Workload="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0" May 8 00:41:29.110725 containerd[1544]: 2025-05-08 00:41:29.107 [INFO][6128] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" HandleID="k8s-pod-network.53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" Workload="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0" May 8 00:41:29.110725 containerd[1544]: 2025-05-08 00:41:29.108 [INFO][6128] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:29.110725 containerd[1544]: 2025-05-08 00:41:29.109 [INFO][6121] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" May 8 00:41:29.110725 containerd[1544]: time="2025-05-08T00:41:29.110551539Z" level=info msg="TearDown network for sandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\" successfully" May 8 00:41:29.110725 containerd[1544]: time="2025-05-08T00:41:29.110570429Z" level=info msg="StopPodSandbox for \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\" returns successfully" May 8 00:41:29.112530 containerd[1544]: time="2025-05-08T00:41:29.111486800Z" level=info msg="RemovePodSandbox for \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\"" May 8 00:41:29.112530 containerd[1544]: time="2025-05-08T00:41:29.111504449Z" level=info msg="Forcibly stopping sandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\"" May 8 00:41:29.173049 containerd[1544]: 2025-05-08 00:41:29.139 [WARNING][6145] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0", GenerateName:"calico-kube-controllers-bcbf44d57-", Namespace:"calico-system", SelfLink:"", UID:"b0cb28c6-493c-4ec2-95eb-08870a5239cb", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bcbf44d57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"844af64f5617eb17acb95e86f94f55a233bffa2ab5c67690f5e8d84e269f91ae", Pod:"calico-kube-controllers-bcbf44d57-tjjwh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1413b71bf3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:29.173049 containerd[1544]: 2025-05-08 00:41:29.139 [INFO][6145] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" May 8 00:41:29.173049 containerd[1544]: 2025-05-08 00:41:29.139 [INFO][6145] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" iface="eth0" netns="" May 8 00:41:29.173049 containerd[1544]: 2025-05-08 00:41:29.139 [INFO][6145] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" May 8 00:41:29.173049 containerd[1544]: 2025-05-08 00:41:29.139 [INFO][6145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" May 8 00:41:29.173049 containerd[1544]: 2025-05-08 00:41:29.158 [INFO][6152] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" HandleID="k8s-pod-network.53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" Workload="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0" May 8 00:41:29.173049 containerd[1544]: 2025-05-08 00:41:29.158 [INFO][6152] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:29.173049 containerd[1544]: 2025-05-08 00:41:29.158 [INFO][6152] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:29.173049 containerd[1544]: 2025-05-08 00:41:29.166 [WARNING][6152] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" HandleID="k8s-pod-network.53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" Workload="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0" May 8 00:41:29.173049 containerd[1544]: 2025-05-08 00:41:29.166 [INFO][6152] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" HandleID="k8s-pod-network.53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" Workload="localhost-k8s-calico--kube--controllers--bcbf44d57--tjjwh-eth0" May 8 00:41:29.173049 containerd[1544]: 2025-05-08 00:41:29.168 [INFO][6152] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:29.173049 containerd[1544]: 2025-05-08 00:41:29.170 [INFO][6145] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2" May 8 00:41:29.175290 containerd[1544]: time="2025-05-08T00:41:29.173165121Z" level=info msg="TearDown network for sandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\" successfully" May 8 00:41:29.212560 containerd[1544]: time="2025-05-08T00:41:29.212516359Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:29.212560 containerd[1544]: time="2025-05-08T00:41:29.212564427Z" level=info msg="RemovePodSandbox \"53d0c2941f68e8af2046d048a7591d28699ef65c5144151213271c972eb0deb2\" returns successfully" May 8 00:41:29.216093 containerd[1544]: time="2025-05-08T00:41:29.212873808Z" level=info msg="StopPodSandbox for \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\"" May 8 00:41:29.279595 containerd[1544]: 2025-05-08 00:41:29.255 [WARNING][6170] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0", GenerateName:"calico-apiserver-6c849db6d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c48551f-d382-48fc-8b2c-4049dd697e7b", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c849db6d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27", Pod:"calico-apiserver-6c849db6d6-wzfw7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedd93c7c832", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:29.279595 containerd[1544]: 2025-05-08 00:41:29.255 [INFO][6170] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" May 8 00:41:29.279595 containerd[1544]: 2025-05-08 00:41:29.255 [INFO][6170] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" iface="eth0" netns="" May 8 00:41:29.279595 containerd[1544]: 2025-05-08 00:41:29.255 [INFO][6170] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" May 8 00:41:29.279595 containerd[1544]: 2025-05-08 00:41:29.255 [INFO][6170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" May 8 00:41:29.279595 containerd[1544]: 2025-05-08 00:41:29.272 [INFO][6177] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" HandleID="k8s-pod-network.454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" Workload="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0" May 8 00:41:29.279595 containerd[1544]: 2025-05-08 00:41:29.272 [INFO][6177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:29.279595 containerd[1544]: 2025-05-08 00:41:29.272 [INFO][6177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:29.279595 containerd[1544]: 2025-05-08 00:41:29.276 [WARNING][6177] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" HandleID="k8s-pod-network.454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" Workload="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0" May 8 00:41:29.279595 containerd[1544]: 2025-05-08 00:41:29.276 [INFO][6177] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" HandleID="k8s-pod-network.454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" Workload="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0" May 8 00:41:29.279595 containerd[1544]: 2025-05-08 00:41:29.277 [INFO][6177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:29.279595 containerd[1544]: 2025-05-08 00:41:29.278 [INFO][6170] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" May 8 00:41:29.294206 containerd[1544]: time="2025-05-08T00:41:29.279628342Z" level=info msg="TearDown network for sandbox \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\" successfully" May 8 00:41:29.294206 containerd[1544]: time="2025-05-08T00:41:29.279645779Z" level=info msg="StopPodSandbox for \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\" returns successfully" May 8 00:41:29.294206 containerd[1544]: time="2025-05-08T00:41:29.279965654Z" level=info msg="RemovePodSandbox for \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\"" May 8 00:41:29.294206 containerd[1544]: time="2025-05-08T00:41:29.279982805Z" level=info msg="Forcibly stopping sandbox \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\"" May 8 00:41:29.389123 containerd[1544]: 2025-05-08 00:41:29.344 [WARNING][6195] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0", GenerateName:"calico-apiserver-6c849db6d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c48551f-d382-48fc-8b2c-4049dd697e7b", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c849db6d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9a6a8c7695d333e4c18ac61c06568baa0a16eeffe6f8e5b8115a684c72b68a27", Pod:"calico-apiserver-6c849db6d6-wzfw7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedd93c7c832", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:29.389123 containerd[1544]: 2025-05-08 00:41:29.344 [INFO][6195] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" May 8 00:41:29.389123 containerd[1544]: 2025-05-08 00:41:29.344 [INFO][6195] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" iface="eth0" netns="" May 8 00:41:29.389123 containerd[1544]: 2025-05-08 00:41:29.344 [INFO][6195] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" May 8 00:41:29.389123 containerd[1544]: 2025-05-08 00:41:29.344 [INFO][6195] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" May 8 00:41:29.389123 containerd[1544]: 2025-05-08 00:41:29.379 [INFO][6202] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" HandleID="k8s-pod-network.454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" Workload="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0" May 8 00:41:29.389123 containerd[1544]: 2025-05-08 00:41:29.379 [INFO][6202] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:29.389123 containerd[1544]: 2025-05-08 00:41:29.379 [INFO][6202] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:29.389123 containerd[1544]: 2025-05-08 00:41:29.384 [WARNING][6202] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" HandleID="k8s-pod-network.454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" Workload="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0" May 8 00:41:29.389123 containerd[1544]: 2025-05-08 00:41:29.384 [INFO][6202] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" HandleID="k8s-pod-network.454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" Workload="localhost-k8s-calico--apiserver--6c849db6d6--wzfw7-eth0" May 8 00:41:29.389123 containerd[1544]: 2025-05-08 00:41:29.386 [INFO][6202] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:29.389123 containerd[1544]: 2025-05-08 00:41:29.387 [INFO][6195] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09" May 8 00:41:29.389123 containerd[1544]: time="2025-05-08T00:41:29.389091517Z" level=info msg="TearDown network for sandbox \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\" successfully" May 8 00:41:29.446409 containerd[1544]: time="2025-05-08T00:41:29.446259684Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:29.446409 containerd[1544]: time="2025-05-08T00:41:29.446326982Z" level=info msg="RemovePodSandbox \"454495cf7103668a4c018a1a8cf28d5b0be9d756f3239f810586da57d59d3c09\" returns successfully" May 8 00:41:29.446977 containerd[1544]: time="2025-05-08T00:41:29.446776039Z" level=info msg="StopPodSandbox for \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\"" May 8 00:41:29.520208 containerd[1544]: 2025-05-08 00:41:29.486 [WARNING][6220] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xqv5x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91493713-d7eb-4156-9aba-7e866dca9c56", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e", Pod:"csi-node-driver-xqv5x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e899bd61ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:29.520208 containerd[1544]: 2025-05-08 00:41:29.486 [INFO][6220] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" May 8 00:41:29.520208 containerd[1544]: 2025-05-08 00:41:29.486 [INFO][6220] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" iface="eth0" netns="" May 8 00:41:29.520208 containerd[1544]: 2025-05-08 00:41:29.486 [INFO][6220] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" May 8 00:41:29.520208 containerd[1544]: 2025-05-08 00:41:29.486 [INFO][6220] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" May 8 00:41:29.520208 containerd[1544]: 2025-05-08 00:41:29.511 [INFO][6227] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" HandleID="k8s-pod-network.4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" Workload="localhost-k8s-csi--node--driver--xqv5x-eth0" May 8 00:41:29.520208 containerd[1544]: 2025-05-08 00:41:29.511 [INFO][6227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:29.520208 containerd[1544]: 2025-05-08 00:41:29.511 [INFO][6227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:29.520208 containerd[1544]: 2025-05-08 00:41:29.515 [WARNING][6227] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" HandleID="k8s-pod-network.4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" Workload="localhost-k8s-csi--node--driver--xqv5x-eth0" May 8 00:41:29.520208 containerd[1544]: 2025-05-08 00:41:29.515 [INFO][6227] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" HandleID="k8s-pod-network.4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" Workload="localhost-k8s-csi--node--driver--xqv5x-eth0" May 8 00:41:29.520208 containerd[1544]: 2025-05-08 00:41:29.517 [INFO][6227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:29.520208 containerd[1544]: 2025-05-08 00:41:29.518 [INFO][6220] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" May 8 00:41:29.521574 containerd[1544]: time="2025-05-08T00:41:29.520829884Z" level=info msg="TearDown network for sandbox \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\" successfully" May 8 00:41:29.521574 containerd[1544]: time="2025-05-08T00:41:29.520868052Z" level=info msg="StopPodSandbox for \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\" returns successfully" May 8 00:41:29.521574 containerd[1544]: time="2025-05-08T00:41:29.521302320Z" level=info msg="RemovePodSandbox for \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\"" May 8 00:41:29.521574 containerd[1544]: time="2025-05-08T00:41:29.521324096Z" level=info msg="Forcibly stopping sandbox \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\"" May 8 00:41:29.599304 containerd[1544]: 2025-05-08 00:41:29.566 [WARNING][6245] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xqv5x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91493713-d7eb-4156-9aba-7e866dca9c56", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7b2f5467137e23a66b4d5533fb8007cb131d3925e0df80bff3dc4e1a6c19c5e", Pod:"csi-node-driver-xqv5x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e899bd61ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:29.599304 containerd[1544]: 2025-05-08 00:41:29.567 [INFO][6245] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" May 8 00:41:29.599304 containerd[1544]: 2025-05-08 00:41:29.567 [INFO][6245] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" iface="eth0" netns="" May 8 00:41:29.599304 containerd[1544]: 2025-05-08 00:41:29.567 [INFO][6245] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" May 8 00:41:29.599304 containerd[1544]: 2025-05-08 00:41:29.567 [INFO][6245] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" May 8 00:41:29.599304 containerd[1544]: 2025-05-08 00:41:29.590 [INFO][6252] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" HandleID="k8s-pod-network.4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" Workload="localhost-k8s-csi--node--driver--xqv5x-eth0" May 8 00:41:29.599304 containerd[1544]: 2025-05-08 00:41:29.590 [INFO][6252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:29.599304 containerd[1544]: 2025-05-08 00:41:29.590 [INFO][6252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:29.599304 containerd[1544]: 2025-05-08 00:41:29.594 [WARNING][6252] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" HandleID="k8s-pod-network.4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" Workload="localhost-k8s-csi--node--driver--xqv5x-eth0" May 8 00:41:29.599304 containerd[1544]: 2025-05-08 00:41:29.594 [INFO][6252] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" HandleID="k8s-pod-network.4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" Workload="localhost-k8s-csi--node--driver--xqv5x-eth0" May 8 00:41:29.599304 containerd[1544]: 2025-05-08 00:41:29.595 [INFO][6252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:29.599304 containerd[1544]: 2025-05-08 00:41:29.597 [INFO][6245] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81" May 8 00:41:29.599304 containerd[1544]: time="2025-05-08T00:41:29.598387989Z" level=info msg="TearDown network for sandbox \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\" successfully" May 8 00:41:29.602376 containerd[1544]: time="2025-05-08T00:41:29.602342176Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:29.602494 containerd[1544]: time="2025-05-08T00:41:29.602483371Z" level=info msg="RemovePodSandbox \"4fb7fc4a46a6334f1a73d9a2263416daaa68d87e73f1dc5a020f59bbdc6f6b81\" returns successfully" May 8 00:41:29.602839 containerd[1544]: time="2025-05-08T00:41:29.602826769Z" level=info msg="StopPodSandbox for \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\"" May 8 00:41:29.659413 containerd[1544]: 2025-05-08 00:41:29.630 [WARNING][6270] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0", GenerateName:"calico-apiserver-6c849db6d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"99de4f5c-61dc-4b3d-a2ad-08772fa94419", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c849db6d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012", Pod:"calico-apiserver-6c849db6d6-n9bs2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23c28f979f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:29.659413 containerd[1544]: 2025-05-08 00:41:29.630 [INFO][6270] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" May 8 00:41:29.659413 containerd[1544]: 2025-05-08 00:41:29.630 [INFO][6270] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" iface="eth0" netns="" May 8 00:41:29.659413 containerd[1544]: 2025-05-08 00:41:29.631 [INFO][6270] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" May 8 00:41:29.659413 containerd[1544]: 2025-05-08 00:41:29.631 [INFO][6270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" May 8 00:41:29.659413 containerd[1544]: 2025-05-08 00:41:29.650 [INFO][6277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" HandleID="k8s-pod-network.d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" Workload="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0" May 8 00:41:29.659413 containerd[1544]: 2025-05-08 00:41:29.650 [INFO][6277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:29.659413 containerd[1544]: 2025-05-08 00:41:29.650 [INFO][6277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:29.659413 containerd[1544]: 2025-05-08 00:41:29.654 [WARNING][6277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" HandleID="k8s-pod-network.d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" Workload="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0" May 8 00:41:29.659413 containerd[1544]: 2025-05-08 00:41:29.654 [INFO][6277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" HandleID="k8s-pod-network.d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" Workload="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0" May 8 00:41:29.659413 containerd[1544]: 2025-05-08 00:41:29.656 [INFO][6277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:29.659413 containerd[1544]: 2025-05-08 00:41:29.657 [INFO][6270] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" May 8 00:41:29.660645 containerd[1544]: time="2025-05-08T00:41:29.659747316Z" level=info msg="TearDown network for sandbox \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\" successfully" May 8 00:41:29.660645 containerd[1544]: time="2025-05-08T00:41:29.659784785Z" level=info msg="StopPodSandbox for \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\" returns successfully" May 8 00:41:29.660780 containerd[1544]: time="2025-05-08T00:41:29.660748398Z" level=info msg="RemovePodSandbox for \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\"" May 8 00:41:29.660780 containerd[1544]: time="2025-05-08T00:41:29.660766853Z" level=info msg="Forcibly stopping sandbox \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\"" May 8 00:41:29.721963 containerd[1544]: 2025-05-08 00:41:29.697 [WARNING][6295] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0", GenerateName:"calico-apiserver-6c849db6d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"99de4f5c-61dc-4b3d-a2ad-08772fa94419", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c849db6d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb0dc8d3fa8a698f2ab25e991279cfaa1e3d32570693cb9d02c18574bf3fc012", Pod:"calico-apiserver-6c849db6d6-n9bs2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23c28f979f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:29.721963 containerd[1544]: 2025-05-08 00:41:29.697 [INFO][6295] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" May 8 00:41:29.721963 containerd[1544]: 2025-05-08 00:41:29.697 [INFO][6295] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" iface="eth0" netns="" May 8 00:41:29.721963 containerd[1544]: 2025-05-08 00:41:29.697 [INFO][6295] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" May 8 00:41:29.721963 containerd[1544]: 2025-05-08 00:41:29.697 [INFO][6295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" May 8 00:41:29.721963 containerd[1544]: 2025-05-08 00:41:29.713 [INFO][6302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" HandleID="k8s-pod-network.d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" Workload="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0" May 8 00:41:29.721963 containerd[1544]: 2025-05-08 00:41:29.713 [INFO][6302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:29.721963 containerd[1544]: 2025-05-08 00:41:29.713 [INFO][6302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:29.721963 containerd[1544]: 2025-05-08 00:41:29.718 [WARNING][6302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" HandleID="k8s-pod-network.d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" Workload="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0" May 8 00:41:29.721963 containerd[1544]: 2025-05-08 00:41:29.718 [INFO][6302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" HandleID="k8s-pod-network.d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" Workload="localhost-k8s-calico--apiserver--6c849db6d6--n9bs2-eth0" May 8 00:41:29.721963 containerd[1544]: 2025-05-08 00:41:29.719 [INFO][6302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:29.721963 containerd[1544]: 2025-05-08 00:41:29.720 [INFO][6295] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647" May 8 00:41:29.728565 containerd[1544]: time="2025-05-08T00:41:29.721986481Z" level=info msg="TearDown network for sandbox \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\" successfully" May 8 00:41:29.740625 containerd[1544]: time="2025-05-08T00:41:29.740573617Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:29.740625 containerd[1544]: time="2025-05-08T00:41:29.740621330Z" level=info msg="RemovePodSandbox \"d8a6674b0d2aab34e09deeaf33ab669748a262a0699f7d621b131bd460157647\" returns successfully" May 8 00:41:29.741189 containerd[1544]: time="2025-05-08T00:41:29.740934237Z" level=info msg="StopPodSandbox for \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\"" May 8 00:41:29.812619 containerd[1544]: 2025-05-08 00:41:29.773 [WARNING][6320] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jq978-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7cd62a62-c3b8-4d08-90a0-b53c0511c1f5", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace", Pod:"coredns-668d6bf9bc-jq978", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50f1c447ce3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:29.812619 containerd[1544]: 2025-05-08 00:41:29.773 [INFO][6320] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" May 8 00:41:29.812619 containerd[1544]: 2025-05-08 00:41:29.773 [INFO][6320] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" iface="eth0" netns="" May 8 00:41:29.812619 containerd[1544]: 2025-05-08 00:41:29.773 [INFO][6320] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" May 8 00:41:29.812619 containerd[1544]: 2025-05-08 00:41:29.773 [INFO][6320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" May 8 00:41:29.812619 containerd[1544]: 2025-05-08 00:41:29.798 [INFO][6328] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" HandleID="k8s-pod-network.9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" Workload="localhost-k8s-coredns--668d6bf9bc--jq978-eth0" May 8 00:41:29.812619 containerd[1544]: 2025-05-08 00:41:29.798 [INFO][6328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:29.812619 containerd[1544]: 2025-05-08 00:41:29.798 [INFO][6328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:29.812619 containerd[1544]: 2025-05-08 00:41:29.805 [WARNING][6328] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" HandleID="k8s-pod-network.9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" Workload="localhost-k8s-coredns--668d6bf9bc--jq978-eth0" May 8 00:41:29.812619 containerd[1544]: 2025-05-08 00:41:29.805 [INFO][6328] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" HandleID="k8s-pod-network.9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" Workload="localhost-k8s-coredns--668d6bf9bc--jq978-eth0" May 8 00:41:29.812619 containerd[1544]: 2025-05-08 00:41:29.807 [INFO][6328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:29.812619 containerd[1544]: 2025-05-08 00:41:29.809 [INFO][6320] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" May 8 00:41:29.813395 containerd[1544]: time="2025-05-08T00:41:29.813054648Z" level=info msg="TearDown network for sandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\" successfully" May 8 00:41:29.813395 containerd[1544]: time="2025-05-08T00:41:29.813087462Z" level=info msg="StopPodSandbox for \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\" returns successfully" May 8 00:41:29.816675 containerd[1544]: time="2025-05-08T00:41:29.816423023Z" level=info msg="RemovePodSandbox for \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\"" May 8 00:41:29.816675 containerd[1544]: time="2025-05-08T00:41:29.816454194Z" level=info msg="Forcibly stopping sandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\"" May 8 00:41:29.875385 containerd[1544]: 2025-05-08 00:41:29.854 [WARNING][6346] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jq978-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7cd62a62-c3b8-4d08-90a0-b53c0511c1f5", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"935d64eab31ded4afc3d33f2403e3cb68119b5df25da5f25e05a7314530a2ace", Pod:"coredns-668d6bf9bc-jq978", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50f1c447ce3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:29.875385 containerd[1544]: 2025-05-08 00:41:29.854 [INFO][6346] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" May 8 00:41:29.875385 containerd[1544]: 2025-05-08 00:41:29.854 [INFO][6346] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" iface="eth0" netns="" May 8 00:41:29.875385 containerd[1544]: 2025-05-08 00:41:29.854 [INFO][6346] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" May 8 00:41:29.875385 containerd[1544]: 2025-05-08 00:41:29.854 [INFO][6346] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" May 8 00:41:29.875385 containerd[1544]: 2025-05-08 00:41:29.868 [INFO][6353] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" HandleID="k8s-pod-network.9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" Workload="localhost-k8s-coredns--668d6bf9bc--jq978-eth0" May 8 00:41:29.875385 containerd[1544]: 2025-05-08 00:41:29.868 [INFO][6353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:29.875385 containerd[1544]: 2025-05-08 00:41:29.868 [INFO][6353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:29.875385 containerd[1544]: 2025-05-08 00:41:29.872 [WARNING][6353] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" HandleID="k8s-pod-network.9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" Workload="localhost-k8s-coredns--668d6bf9bc--jq978-eth0" May 8 00:41:29.875385 containerd[1544]: 2025-05-08 00:41:29.872 [INFO][6353] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" HandleID="k8s-pod-network.9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" Workload="localhost-k8s-coredns--668d6bf9bc--jq978-eth0" May 8 00:41:29.875385 containerd[1544]: 2025-05-08 00:41:29.873 [INFO][6353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:29.875385 containerd[1544]: 2025-05-08 00:41:29.874 [INFO][6346] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f" May 8 00:41:29.878784 containerd[1544]: time="2025-05-08T00:41:29.875418414Z" level=info msg="TearDown network for sandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\" successfully" May 8 00:41:29.878784 containerd[1544]: time="2025-05-08T00:41:29.876781113Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:29.878784 containerd[1544]: time="2025-05-08T00:41:29.876815036Z" level=info msg="RemovePodSandbox \"9a53abcfcad55788df49c1b34e0196fc623271a974539f22e3b207ecff864f7f\" returns successfully" May 8 00:41:29.878784 containerd[1544]: time="2025-05-08T00:41:29.877186286Z" level=info msg="StopPodSandbox for \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\"" May 8 00:41:29.954654 containerd[1544]: 2025-05-08 00:41:29.918 [WARNING][6371] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"46106b43-6a82-4ed1-a0c6-7d6292ac8f6f", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4", Pod:"coredns-668d6bf9bc-dbhz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic5c34036b7c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:29.954654 containerd[1544]: 2025-05-08 00:41:29.918 [INFO][6371] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" May 8 00:41:29.954654 containerd[1544]: 2025-05-08 00:41:29.918 [INFO][6371] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" iface="eth0" netns="" May 8 00:41:29.954654 containerd[1544]: 2025-05-08 00:41:29.918 [INFO][6371] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" May 8 00:41:29.954654 containerd[1544]: 2025-05-08 00:41:29.918 [INFO][6371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" May 8 00:41:29.954654 containerd[1544]: 2025-05-08 00:41:29.937 [INFO][6378] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" HandleID="k8s-pod-network.3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" Workload="localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0" May 8 00:41:29.954654 containerd[1544]: 2025-05-08 00:41:29.937 [INFO][6378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:29.954654 containerd[1544]: 2025-05-08 00:41:29.937 [INFO][6378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:29.954654 containerd[1544]: 2025-05-08 00:41:29.947 [WARNING][6378] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" HandleID="k8s-pod-network.3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" Workload="localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0" May 8 00:41:29.954654 containerd[1544]: 2025-05-08 00:41:29.948 [INFO][6378] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" HandleID="k8s-pod-network.3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" Workload="localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0" May 8 00:41:29.954654 containerd[1544]: 2025-05-08 00:41:29.951 [INFO][6378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:29.954654 containerd[1544]: 2025-05-08 00:41:29.953 [INFO][6371] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" May 8 00:41:29.954654 containerd[1544]: time="2025-05-08T00:41:29.954628590Z" level=info msg="TearDown network for sandbox \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\" successfully" May 8 00:41:29.954654 containerd[1544]: time="2025-05-08T00:41:29.954652558Z" level=info msg="StopPodSandbox for \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\" returns successfully" May 8 00:41:29.955556 containerd[1544]: time="2025-05-08T00:41:29.955224769Z" level=info msg="RemovePodSandbox for \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\"" May 8 00:41:29.955556 containerd[1544]: time="2025-05-08T00:41:29.955247642Z" level=info msg="Forcibly stopping sandbox \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\"" May 8 00:41:30.026831 containerd[1544]: 2025-05-08 00:41:30.000 [WARNING][6396] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"46106b43-6a82-4ed1-a0c6-7d6292ac8f6f", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41c59c4fa7543990f5e69fb1549b6d6eee24c11a64345eb672fdbd85e1f4ded4", Pod:"coredns-668d6bf9bc-dbhz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic5c34036b7c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:30.026831 containerd[1544]: 2025-05-08 00:41:30.000 [INFO][6396] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" May 8 00:41:30.026831 containerd[1544]: 2025-05-08 00:41:30.000 [INFO][6396] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" iface="eth0" netns="" May 8 00:41:30.026831 containerd[1544]: 2025-05-08 00:41:30.000 [INFO][6396] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" May 8 00:41:30.026831 containerd[1544]: 2025-05-08 00:41:30.000 [INFO][6396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" May 8 00:41:30.026831 containerd[1544]: 2025-05-08 00:41:30.019 [INFO][6403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" HandleID="k8s-pod-network.3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" Workload="localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0" May 8 00:41:30.026831 containerd[1544]: 2025-05-08 00:41:30.019 [INFO][6403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:30.026831 containerd[1544]: 2025-05-08 00:41:30.019 [INFO][6403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:30.026831 containerd[1544]: 2025-05-08 00:41:30.023 [WARNING][6403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" HandleID="k8s-pod-network.3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" Workload="localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0" May 8 00:41:30.026831 containerd[1544]: 2025-05-08 00:41:30.023 [INFO][6403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" HandleID="k8s-pod-network.3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" Workload="localhost-k8s-coredns--668d6bf9bc--dbhz2-eth0" May 8 00:41:30.026831 containerd[1544]: 2025-05-08 00:41:30.024 [INFO][6403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:30.026831 containerd[1544]: 2025-05-08 00:41:30.025 [INFO][6396] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03" May 8 00:41:30.027313 containerd[1544]: time="2025-05-08T00:41:30.026923751Z" level=info msg="TearDown network for sandbox \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\" successfully" May 8 00:41:30.066396 containerd[1544]: time="2025-05-08T00:41:30.066351828Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:30.066543 containerd[1544]: time="2025-05-08T00:41:30.066408031Z" level=info msg="RemovePodSandbox \"3dc26aab56f35bd9cb67bb6cf315f2958a07ad56ca29a6d55204d124fb269c03\" returns successfully"