May 8 00:35:32.749367 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:54:21 -00 2025
May 8 00:35:32.749396 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 00:35:32.749403 kernel: Disabled fast string operations
May 8 00:35:32.749407 kernel: BIOS-provided physical RAM map:
May 8 00:35:32.749411 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
May 8 00:35:32.749415 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
May 8 00:35:32.749422 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
May 8 00:35:32.749427 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
May 8 00:35:32.749431 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
May 8 00:35:32.749435 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
May 8 00:35:32.749439 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
May 8 00:35:32.749443 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
May 8 00:35:32.749447 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
May 8 00:35:32.749451 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
May 8 00:35:32.749457 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
May 8 00:35:32.749462 kernel: NX (Execute Disable) protection: active
May 8 00:35:32.749467 kernel: APIC: Static calls initialized
May 8 00:35:32.749471 kernel: SMBIOS 2.7 present.
May 8 00:35:32.749476 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
May 8 00:35:32.749481 kernel: vmware: hypercall mode: 0x00
May 8 00:35:32.749486 kernel: Hypervisor detected: VMware
May 8 00:35:32.749490 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
May 8 00:35:32.749496 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
May 8 00:35:32.749501 kernel: vmware: using clock offset of 4756423619 ns
May 8 00:35:32.749505 kernel: tsc: Detected 3408.000 MHz processor
May 8 00:35:32.749510 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 8 00:35:32.749515 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 8 00:35:32.749520 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
May 8 00:35:32.749525 kernel: total RAM covered: 3072M
May 8 00:35:32.749530 kernel: Found optimal setting for mtrr clean up
May 8 00:35:32.749535 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
May 8 00:35:32.749541 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
May 8 00:35:32.749546 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 8 00:35:32.749551 kernel: Using GB pages for direct mapping
May 8 00:35:32.749555 kernel: ACPI: Early table checksum verification disabled
May 8 00:35:32.749560 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
May 8 00:35:32.749565 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
May 8 00:35:32.749570 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
May 8 00:35:32.749575 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
May 8 00:35:32.749579 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
May 8 00:35:32.749587 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
May 8 00:35:32.749592 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
May 8 00:35:32.749597 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
May 8 00:35:32.749602 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
May 8 00:35:32.749607 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
May 8 00:35:32.749613 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
May 8 00:35:32.749618 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
May 8 00:35:32.749623 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
May 8 00:35:32.749629 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
May 8 00:35:32.749634 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
May 8 00:35:32.749639 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
May 8 00:35:32.749643 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
May 8 00:35:32.749648 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
May 8 00:35:32.749653 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
May 8 00:35:32.749658 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
May 8 00:35:32.749664 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
May 8 00:35:32.749669 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
May 8 00:35:32.749674 kernel: system APIC only can use physical flat
May 8 00:35:32.749679 kernel: APIC: Switched APIC routing to: physical flat
May 8 00:35:32.749684 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 8 00:35:32.749689 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
May 8 00:35:32.749694 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
May 8 00:35:32.749699 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
May 8 00:35:32.749704 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
May 8 00:35:32.749710 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
May 8 00:35:32.749715 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
May 8 00:35:32.749720 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
May 8 00:35:32.749725 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
May 8 00:35:32.749730 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
May 8 00:35:32.749735 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
May 8 00:35:32.749740 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
May 8 00:35:32.749744 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
May 8 00:35:32.749749 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
May 8 00:35:32.749754 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
May 8 00:35:32.749760 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
May 8 00:35:32.749765 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
May 8 00:35:32.749770 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
May 8 00:35:32.749775 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
May 8 00:35:32.749780 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
May 8 00:35:32.749785 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
May 8 00:35:32.749790 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
May 8 00:35:32.749795 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
May 8 00:35:32.749800 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
May 8 00:35:32.749804 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
May 8 00:35:32.749810 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
May 8 00:35:32.749815 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
May 8 00:35:32.749820 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
May 8 00:35:32.749825 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
May 8 00:35:32.749830 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
May 8 00:35:32.749835 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
May 8 00:35:32.749840 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
May 8 00:35:32.749845 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
May 8 00:35:32.749850 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
May 8 00:35:32.749855 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
May 8 00:35:32.749861 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
May 8 00:35:32.749866 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
May 8 00:35:32.749871 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
May 8 00:35:32.749876 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
May 8 00:35:32.749881 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
May 8 00:35:32.749885 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
May 8 00:35:32.749891 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
May 8 00:35:32.749895 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
May 8 00:35:32.749900 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
May 8 00:35:32.749905 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
May 8 00:35:32.749910 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
May 8 00:35:32.749916 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
May 8 00:35:32.749921 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
May 8 00:35:32.749926 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
May 8 00:35:32.749932 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
May 8 00:35:32.749936 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
May 8 00:35:32.749941 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
May 8 00:35:32.749946 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
May 8 00:35:32.749951 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
May 8 00:35:32.749956 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
May 8 00:35:32.749961 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
May 8 00:35:32.749967 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
May 8 00:35:32.749971 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
May 8 00:35:32.749977 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
May 8 00:35:32.749986 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
May 8 00:35:32.749991 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
May 8 00:35:32.749996 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
May 8 00:35:32.750001 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
May 8 00:35:32.750007 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
May 8 00:35:32.750013 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
May 8 00:35:32.750018 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
May 8 00:35:32.750023 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
May 8 00:35:32.750029 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
May 8 00:35:32.750034 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
May 8 00:35:32.750039 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
May 8 00:35:32.750044 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
May 8 00:35:32.750050 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
May 8 00:35:32.750055 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
May 8 00:35:32.750060 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
May 8 00:35:32.750067 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
May 8 00:35:32.750072 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
May 8 00:35:32.750077 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
May 8 00:35:32.750083 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
May 8 00:35:32.750088 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
May 8 00:35:32.750093 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
May 8 00:35:32.750099 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
May 8 00:35:32.750104 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
May 8 00:35:32.750109 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
May 8 00:35:32.750114 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
May 8 00:35:32.750121 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
May 8 00:35:32.750126 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
May 8 00:35:32.750131 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
May 8 00:35:32.750137 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
May 8 00:35:32.750142 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
May 8 00:35:32.750147 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
May 8 00:35:32.750153 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
May 8 00:35:32.750158 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
May 8 00:35:32.750163 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
May 8 00:35:32.750168 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
May 8 00:35:32.750174 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
May 8 00:35:32.750180 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
May 8 00:35:32.750185 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
May 8 00:35:32.750190 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
May 8 00:35:32.750195 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
May 8 00:35:32.750201 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
May 8 00:35:32.750206 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
May 8 00:35:32.750211 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
May 8 00:35:32.750216 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
May 8 00:35:32.750222 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
May 8 00:35:32.750228 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
May 8 00:35:32.750233 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
May 8 00:35:32.750239 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
May 8 00:35:32.750244 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
May 8 00:35:32.750249 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
May 8 00:35:32.750254 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
May 8 00:35:32.750260 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
May 8 00:35:32.750265 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
May 8 00:35:32.750270 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
May 8 00:35:32.750275 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
May 8 00:35:32.750282 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
May 8 00:35:32.750287 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
May 8 00:35:32.750292 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
May 8 00:35:32.750297 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
May 8 00:35:32.750303 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
May 8 00:35:32.750308 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
May 8 00:35:32.750313 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
May 8 00:35:32.750319 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
May 8 00:35:32.750324 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
May 8 00:35:32.750329 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
May 8 00:35:32.750334 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
May 8 00:35:32.750341 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
May 8 00:35:32.750346 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
May 8 00:35:32.750351 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
May 8 00:35:32.750356 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 8 00:35:32.750362 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 8 00:35:32.750367 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
May 8 00:35:32.750373 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
May 8 00:35:32.750378 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
May 8 00:35:32.750405 kernel: Zone ranges:
May 8 00:35:32.750413 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 8 00:35:32.750418 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
May 8 00:35:32.750429 kernel: Normal empty
May 8 00:35:32.750435 kernel: Movable zone start for each node
May 8 00:35:32.750441 kernel: Early memory node ranges
May 8 00:35:32.750446 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
May 8 00:35:32.750452 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
May 8 00:35:32.750457 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
May 8 00:35:32.750462 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
May 8 00:35:32.750469 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 00:35:32.750475 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
May 8 00:35:32.750480 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
May 8 00:35:32.750486 kernel: ACPI: PM-Timer IO Port: 0x1008
May 8 00:35:32.750491 kernel: system APIC only can use physical flat
May 8 00:35:32.750496 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
May 8 00:35:32.750502 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
May 8 00:35:32.750507 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
May 8 00:35:32.750513 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
May 8 00:35:32.750518 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
May 8 00:35:32.750524 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
May 8 00:35:32.750530 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
May 8 00:35:32.750535 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
May 8 00:35:32.750541 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
May 8 00:35:32.750546 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
May 8 00:35:32.750551 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
May 8 00:35:32.750557 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
May 8 00:35:32.750562 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
May 8 00:35:32.750567 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
May 8 00:35:32.750573 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
May 8 00:35:32.750579 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
May 8 00:35:32.750584 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
May 8 00:35:32.750589 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
May 8 00:35:32.750595 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
May 8 00:35:32.750607 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
May 8 00:35:32.750614 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
May 8 00:35:32.750620 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
May 8 00:35:32.750625 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
May 8 00:35:32.750630 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
May 8 00:35:32.750638 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
May 8 00:35:32.750643 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
May 8 00:35:32.750648 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
May 8 00:35:32.750654 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
May 8 00:35:32.750659 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
May 8 00:35:32.750664 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
May 8 00:35:32.750670 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
May 8 00:35:32.750675 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
May 8 00:35:32.750680 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
May 8 00:35:32.750686 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
May 8 00:35:32.750693 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
May 8 00:35:32.750698 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
May 8 00:35:32.750703 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
May 8 00:35:32.750709 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
May 8 00:35:32.750714 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
May 8 00:35:32.750719 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
May 8 00:35:32.750724 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
May 8 00:35:32.750730 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
May 8 00:35:32.750735 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
May 8 00:35:32.750742 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
May 8 00:35:32.750747 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
May 8 00:35:32.750752 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
May 8 00:35:32.750758 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
May 8 00:35:32.750763 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
May 8 00:35:32.750768 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
May 8 00:35:32.750774 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
May 8 00:35:32.750779 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
May 8 00:35:32.750785 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
May 8 00:35:32.750790 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
May 8 00:35:32.750796 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
May 8 00:35:32.750802 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
May 8 00:35:32.750807 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
May 8 00:35:32.750813 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
May 8 00:35:32.750818 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
May 8 00:35:32.750823 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
May 8 00:35:32.750828 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
May 8 00:35:32.750834 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
May 8 00:35:32.750839 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
May 8 00:35:32.750846 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
May 8 00:35:32.750851 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
May 8 00:35:32.750856 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
May 8 00:35:32.750862 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
May 8 00:35:32.750867 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
May 8 00:35:32.750873 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
May 8 00:35:32.750878 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
May 8 00:35:32.750883 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
May 8 00:35:32.750888 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
May 8 00:35:32.750894 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
May 8 00:35:32.750900 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
May 8 00:35:32.750906 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
May 8 00:35:32.750911 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
May 8 00:35:32.750916 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
May 8 00:35:32.750921 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
May 8 00:35:32.750927 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
May 8 00:35:32.750932 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
May 8 00:35:32.750937 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
May 8 00:35:32.750943 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
May 8 00:35:32.750948 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
May 8 00:35:32.750954 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
May 8 00:35:32.750960 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
May 8 00:35:32.750965 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
May 8 00:35:32.750970 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
May 8 00:35:32.750976 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
May 8 00:35:32.750981 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
May 8 00:35:32.750986 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
May 8 00:35:32.750991 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
May 8 00:35:32.750997 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
May 8 00:35:32.751003 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
May 8 00:35:32.751008 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
May 8 00:35:32.751014 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
May 8 00:35:32.751019 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
May 8 00:35:32.751024 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
May 8 00:35:32.751030 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
May 8 00:35:32.751035 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
May 8 00:35:32.751040 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
May 8 00:35:32.751046 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
May 8 00:35:32.751051 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
May 8 00:35:32.751058 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
May 8 00:35:32.751063 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
May 8 00:35:32.751068 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
May 8 00:35:32.751074 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
May 8 00:35:32.751079 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
May 8 00:35:32.751084 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
May 8 00:35:32.751089 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
May 8 00:35:32.751095 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
May 8 00:35:32.751100 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
May 8 00:35:32.751106 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
May 8 00:35:32.751112 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
May 8 00:35:32.751117 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
May 8 00:35:32.751122 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
May 8 00:35:32.751128 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
May 8 00:35:32.751133 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
May 8 00:35:32.751138 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
May 8 00:35:32.751143 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
May 8 00:35:32.751149 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
May 8 00:35:32.751154 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
May 8 00:35:32.751161 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
May 8 00:35:32.751166 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
May 8 00:35:32.751171 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
May 8 00:35:32.751176 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
May 8 00:35:32.751182 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
May 8 00:35:32.751187 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
May 8 00:35:32.751192 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
May 8 00:35:32.751198 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
May 8 00:35:32.751203 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
May 8 00:35:32.751209 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
May 8 00:35:32.751215 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 8 00:35:32.751220 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
May 8 00:35:32.751226 kernel: TSC deadline timer available
May 8 00:35:32.751231 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
May 8 00:35:32.751236 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
May 8 00:35:32.751242 kernel: Booting paravirtualized kernel on VMware hypervisor
May 8 00:35:32.751247 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 8 00:35:32.751253 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
May 8 00:35:32.751259 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144
May 8 00:35:32.751265 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152
May 8 00:35:32.751270 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
May 8 00:35:32.751276 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
May 8 00:35:32.751281 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
May 8 00:35:32.751286 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
May 8 00:35:32.751292 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
May 8 00:35:32.751304 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
May 8 00:35:32.751310 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
May 8 00:35:32.751317 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
May 8 00:35:32.751322 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
May 8 00:35:32.751328 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
May 8 00:35:32.751334 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
May 8 00:35:32.751339 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
May 8 00:35:32.751345 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
May 8 00:35:32.751350 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
May 8 00:35:32.751356 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
May 8 00:35:32.751362 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
May 8 00:35:32.751369 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 00:35:32.751375 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:35:32.751381 kernel: random: crng init done
May 8 00:35:32.751416 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
May 8 00:35:32.751422 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
May 8 00:35:32.751428 kernel: printk: log_buf_len min size: 262144 bytes
May 8 00:35:32.751433 kernel: printk: log_buf_len: 1048576 bytes
May 8 00:35:32.751439 kernel: printk: early log buf free: 239648(91%)
May 8 00:35:32.751446 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:35:32.751452 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 8 00:35:32.751458 kernel: Fallback order for Node 0: 0
May 8 00:35:32.751464 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
May 8 00:35:32.751470 kernel: Policy zone: DMA32
May 8 00:35:32.751475 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:35:32.751482 kernel: Memory: 1936376K/2096628K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42856K init, 2336K bss, 159992K reserved, 0K cma-reserved)
May 8 00:35:32.751489 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
May 8 00:35:32.751494 kernel: ftrace: allocating 37944 entries in 149 pages
May 8 00:35:32.751500 kernel: ftrace: allocated 149 pages with 4 groups
May 8 00:35:32.751506 kernel: Dynamic Preempt: voluntary
May 8 00:35:32.751511 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:35:32.751517 kernel: rcu: RCU event tracing is enabled.
May 8 00:35:32.751524 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
May 8 00:35:32.751530 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:35:32.751537 kernel: Rude variant of Tasks RCU enabled.
May 8 00:35:32.751543 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:35:32.751548 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:35:32.751554 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
May 8 00:35:32.751560 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
May 8 00:35:32.751565 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
May 8 00:35:32.751571 kernel: Console: colour VGA+ 80x25
May 8 00:35:32.751577 kernel: printk: console [tty0] enabled
May 8 00:35:32.751583 kernel: printk: console [ttyS0] enabled
May 8 00:35:32.751590 kernel: ACPI: Core revision 20230628
May 8 00:35:32.751596 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
May 8 00:35:32.751601 kernel: APIC: Switch to symmetric I/O mode setup
May 8 00:35:32.751607 kernel: x2apic enabled
May 8 00:35:32.751613 kernel: APIC: Switched APIC routing to: physical x2apic
May 8 00:35:32.751619 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 8 00:35:32.751625 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
May 8 00:35:32.751631 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
May 8 00:35:32.751636 kernel: Disabled fast string operations
May 8 00:35:32.751643 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 8 00:35:32.751649 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 8 00:35:32.751655 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 8 00:35:32.751661 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
May 8 00:35:32.751666 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
May 8 00:35:32.751672 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
May 8 00:35:32.751678 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 8 00:35:32.751683 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
May 8 00:35:32.751689 kernel: RETBleed: Mitigation: Enhanced IBRS
May 8 00:35:32.751696 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 8 00:35:32.751702 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 8 00:35:32.751708 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 8 00:35:32.751714 kernel: SRBDS: Unknown: Dependent on hypervisor status
May 8 00:35:32.751720 kernel: GDS: Unknown: Dependent on hypervisor status
May 8 00:35:32.751725 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 8 00:35:32.751731 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 8 00:35:32.751737 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 8 00:35:32.751743 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 8 00:35:32.751749 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 8 00:35:32.751755 kernel: Freeing SMP alternatives memory: 32K
May 8 00:35:32.751761 kernel: pid_max: default: 131072 minimum: 1024
May 8 00:35:32.751767 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:35:32.751773 kernel: landlock: Up and running.
May 8 00:35:32.751779 kernel: SELinux: Initializing.
May 8 00:35:32.751784 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 8 00:35:32.751790 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 8 00:35:32.751796 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
May 8 00:35:32.751803 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
May 8 00:35:32.751809 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
May 8 00:35:32.751815 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
May 8 00:35:32.751820 kernel: Performance Events: Skylake events, core PMU driver.
May 8 00:35:32.751826 kernel: core: CPUID marked event: 'cpu cycles' unavailable
May 8 00:35:32.751832 kernel: core: CPUID marked event: 'instructions' unavailable
May 8 00:35:32.751838 kernel: core: CPUID marked event: 'bus cycles' unavailable
May 8 00:35:32.751843 kernel: core: CPUID marked event: 'cache references' unavailable
May 8 00:35:32.751850 kernel: core: CPUID marked event: 'cache misses' unavailable
May 8 00:35:32.751856 kernel: core: CPUID marked event: 'branch instructions' unavailable
May 8 00:35:32.751862 kernel: core: CPUID marked event: 'branch misses' unavailable
May 8 00:35:32.751874 kernel: ... version: 1
May 8 00:35:32.751880 kernel: ... bit width: 48
May 8 00:35:32.751886 kernel: ... generic registers: 4
May 8 00:35:32.751891 kernel: ... value mask: 0000ffffffffffff
May 8 00:35:32.751897 kernel: ... max period: 000000007fffffff
May 8 00:35:32.751903 kernel: ... fixed-purpose events: 0
May 8 00:35:32.751910 kernel: ... event mask: 000000000000000f
May 8 00:35:32.751916 kernel: signal: max sigframe size: 1776
May 8 00:35:32.751922 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:35:32.751928 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:35:32.751933 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 8 00:35:32.751939 kernel: smp: Bringing up secondary CPUs ...
May 8 00:35:32.751945 kernel: smpboot: x86: Booting SMP configuration:
May 8 00:35:32.751951 kernel: .... node #0, CPUs: #1
May 8 00:35:32.751956 kernel: Disabled fast string operations
May 8 00:35:32.751962 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1
May 8 00:35:32.751969 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
May 8 00:35:32.751974 kernel: smp: Brought up 1 node, 2 CPUs
May 8 00:35:32.751980 kernel: smpboot: Max logical packages: 128
May 8 00:35:32.751986 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
May 8 00:35:32.751992 kernel: devtmpfs: initialized
May 8 00:35:32.751998 kernel: x86/mm: Memory block size: 128MB
May 8 00:35:32.752003 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
May 8 00:35:32.752009 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:35:32.752016 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
May 8 00:35:32.752022 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:35:32.752028 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:35:32.752034 kernel: audit: initializing netlink subsys (disabled)
May 8 00:35:32.752040 kernel: audit: type=2000 audit(1746664531.068:1): state=initialized audit_enabled=0 res=1
May 8 00:35:32.752046 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:35:32.752051 kernel: thermal_sys: Registered thermal governor 'user_space'
May 8 00:35:32.752057 kernel: cpuidle: using governor menu
May 8 00:35:32.752063 kernel: Simple Boot Flag at 0x36 set to 0x80
May 8 00:35:32.752069 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:35:32.752075 kernel: dca service started, version 1.12.1
May 8 00:35:32.752100 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
May 8 00:35:32.752106 kernel: PCI: Using configuration type 1 for base access
May 8 00:35:32.752112 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 8 00:35:32.752118 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:35:32.752123 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 8 00:35:32.752146 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:35:32.752152 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 8 00:35:32.752157 kernel: ACPI: Added _OSI(Module Device)
May 8 00:35:32.752164 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:35:32.752170 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:35:32.752176 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:35:32.752181 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:35:32.752187 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
May 8 00:35:32.752193 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 8 00:35:32.752199 kernel: ACPI: Interpreter enabled
May 8 00:35:32.752204 kernel: ACPI: PM: (supports S0 S1 S5)
May 8 00:35:32.752210 kernel: ACPI: Using IOAPIC for interrupt routing
May 8 00:35:32.752217 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 8 00:35:32.752223 kernel: PCI: Using E820 reservations for host bridge windows
May 8 00:35:32.752229 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
May 8 00:35:32.752234 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
May 8 00:35:32.752315 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:35:32.752371 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
May 8 00:35:32.752440 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
May 8 00:35:32.752452 kernel: PCI host bridge to bus 0000:00
May 8 00:35:32.752504 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 8 00:35:32.752549 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
May 8 00:35:32.752593 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 8 00:35:32.752637 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 8 00:35:32.752684 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
May 8 00:35:32.752728 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
May 8 00:35:32.752791 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000
May 8 00:35:32.752847 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400
May 8 00:35:32.752906 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100
May 8 00:35:32.752961 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a
May 8 00:35:32.753012 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f]
May 8 00:35:32.753061 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 8 00:35:32.753113 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 8 00:35:32.753163 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 8 00:35:32.753213 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 8 00:35:32.753266 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000
May 8 00:35:32.753317 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
May 8 00:35:32.753366 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
May 8 00:35:32.753444 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000
May 8 00:35:32.753500 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf]
May 8 00:35:32.753551 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit]
May 8 00:35:32.753613 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000
May 8 00:35:32.753666 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f]
May 8 00:35:32.753716 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref]
May 8 00:35:32.753768 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff]
May 8 00:35:32.753817 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
May 8 00:35:32.753869 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 8 00:35:32.753923 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
May 8 00:35:32.753978 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.754029 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
May 8 00:35:32.754083 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.754134 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
May 8 00:35:32.754189 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.754240 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
May 8 00:35:32.754296 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.754346 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
May 8 00:35:32.754423 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.754477 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
May 8 00:35:32.754534 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.754586 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
May 8 00:35:32.754641 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.754692 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
May 8 00:35:32.754782 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.754832 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
May 8 00:35:32.754888 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.754957 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
May 8 00:35:32.755012 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.755063 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
May 8 00:35:32.755117 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.755169 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
May 8 00:35:32.755228 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.755280 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
May 8 00:35:32.755334 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.755420 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
May 8 00:35:32.755481 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.755532 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
May 8 00:35:32.755590 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.755641 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
May 8 00:35:32.755697 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.755748 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
May 8 00:35:32.755804 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.755855 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
May 8 00:35:32.755913 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.755964 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
May 8 00:35:32.756018 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.756070 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
May 8 00:35:32.756126 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.756178 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
May 8 00:35:32.756234 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.756286 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
May 8 00:35:32.756341 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.756430 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
May 8 00:35:32.756488 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.756540 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
May 8 00:35:32.756593 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.756658 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
May 8 00:35:32.756713 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.756766 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
May 8 00:35:32.756999 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.757053 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
May 8 00:35:32.757109 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.757164 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
May 8 00:35:32.757221 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.757273 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
May 8 00:35:32.757335 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.757658 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
May 8 00:35:32.757725 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.759472 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
May 8 00:35:32.759536 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.759594 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
May 8 00:35:32.759658 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
May 8 00:35:32.759715 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
May 8 00:35:32.759771 kernel: pci_bus 0000:01: extended config space not accessible
May 8 00:35:32.759827 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
May 8 00:35:32.759880 kernel: pci_bus 0000:02: extended config space not accessible
May 8 00:35:32.759890 kernel: acpiphp: Slot [32] registered
May 8 00:35:32.759896 kernel: acpiphp: Slot [33] registered
May 8 00:35:32.759902 kernel: acpiphp: Slot [34] registered
May 8 00:35:32.759909 kernel: acpiphp: Slot [35] registered
May 8 00:35:32.759914 kernel: acpiphp: Slot [36] registered
May 8 00:35:32.759920 kernel: acpiphp: Slot [37] registered
May 8 00:35:32.759926 kernel: acpiphp: Slot [38] registered
May 8 00:35:32.759934 kernel: acpiphp: Slot [39] registered
May 8 00:35:32.759940 kernel: acpiphp: Slot [40] registered
May 8 00:35:32.759946 kernel: acpiphp: Slot [41] registered
May 8 00:35:32.759952 kernel: acpiphp: Slot [42] registered
May 8 00:35:32.759957 kernel: acpiphp: Slot [43] registered
May 8 00:35:32.759965 kernel: acpiphp: Slot [44] registered
May 8 00:35:32.759973 kernel: acpiphp: Slot [45] registered
May 8 00:35:32.759979 kernel: acpiphp: Slot [46] registered
May 8 00:35:32.759988 kernel: acpiphp: Slot [47] registered
May 8 00:35:32.760000 kernel: acpiphp: Slot [48] registered
May 8 00:35:32.760009 kernel: acpiphp: Slot [49] registered
May 8 00:35:32.760015 kernel: acpiphp: Slot [50] registered
May 8 00:35:32.760021 kernel: acpiphp: Slot [51] registered
May 8 00:35:32.760027 kernel: acpiphp: Slot [52] registered
May 8 00:35:32.760033 kernel: acpiphp: Slot [53] registered
May 8 00:35:32.760038 kernel: acpiphp: Slot [54] registered
May 8 00:35:32.760044 kernel: acpiphp: Slot [55] registered
May 8 00:35:32.760050 kernel: acpiphp: Slot [56] registered
May 8 00:35:32.760056 kernel: acpiphp: Slot [57] registered
May 8 00:35:32.760063 kernel: acpiphp: Slot [58] registered
May 8 00:35:32.760069 kernel: acpiphp: Slot [59] registered
May 8 00:35:32.760075 kernel: acpiphp: Slot [60] registered
May 8 00:35:32.760081 kernel: acpiphp: Slot [61] registered
May 8 00:35:32.760087 kernel: acpiphp: Slot [62] registered
May 8 00:35:32.760093 kernel: acpiphp: Slot [63] registered
May 8 00:35:32.760148 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
May 8 00:35:32.760201 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
May 8 00:35:32.760254 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
May 8 00:35:32.760305 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
May 8 00:35:32.760357 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
May 8 00:35:32.761473 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode)
May 8 00:35:32.761533 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
May 8 00:35:32.761588 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
May 8 00:35:32.761645 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode)
May 8 00:35:32.761705 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700
May 8 00:35:32.761764 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007]
May 8 00:35:32.761817 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit]
May 8 00:35:32.761873 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
May 8 00:35:32.761927 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
May 8 00:35:32.761979 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
May 8 00:35:32.762034 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
May 8 00:35:32.762085 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
May 8 00:35:32.762140 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
May 8 00:35:32.762197 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
May 8 00:35:32.762255 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
May 8 00:35:32.762316 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
May 8 00:35:32.762368 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
May 8 00:35:32.763458 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
May 8 00:35:32.763517 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
May 8 00:35:32.763570 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
May 8 00:35:32.763632 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
May 8 00:35:32.763700 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
May 8 00:35:32.763753 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
May 8 00:35:32.763803 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
May 8 00:35:32.763854 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
May 8 00:35:32.763905 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
May 8 00:35:32.763956 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
May 8 00:35:32.764012 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
May 8 00:35:32.764062 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
May 8 00:35:32.764112 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
May 8 00:35:32.764164 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
May 8 00:35:32.764216 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
May 8 00:35:32.764269 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
May 8 00:35:32.764327 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
May 8 00:35:32.764427 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
May 8 00:35:32.764491 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
May 8 00:35:32.764556 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000
May 8 00:35:32.764618 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
May 8 00:35:32.764672 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
May 8 00:35:32.764728 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
May 8 00:35:32.764780 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f]
May 8 00:35:32.764833 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
May 8 00:35:32.764886 kernel: pci 0000:0b:00.0: supports D1 D2
May 8 00:35:32.764937 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
May 8 00:35:32.764989 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
May 8 00:35:32.765042 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
May 8 00:35:32.765099 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
May 8 00:35:32.765158 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
May 8 00:35:32.765213 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
May 8 00:35:32.765264 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
May 8 00:35:32.765314 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
May 8 00:35:32.767487 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
May 8 00:35:32.767551 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
May 8 00:35:32.767605 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
May 8 00:35:32.767657 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
May 8 00:35:32.767713 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
May 8 00:35:32.767766 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
May 8 00:35:32.767817 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
May 8 00:35:32.767874 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
May 8 00:35:32.767929 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
May 8 00:35:32.767979 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
May 8 00:35:32.768037 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
May 8 00:35:32.768099 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
May 8 00:35:32.768150 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
May 8 00:35:32.768200 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
May 8 00:35:32.768253 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
May 8 00:35:32.768304 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
May 8 00:35:32.768354 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
May 8 00:35:32.768414 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
May 8 00:35:32.768464 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
May 8 00:35:32.768515 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
May 8 00:35:32.768570 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
May 8 00:35:32.768621 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
May 8 00:35:32.768675 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
May 8 00:35:32.768727 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
May 8 00:35:32.768780 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
May 8 00:35:32.768831 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
May 8 00:35:32.768884 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
May 8 00:35:32.768957 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
May 8 00:35:32.769012 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
May 8 00:35:32.769064 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
May 8 00:35:32.769114 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
May 8 00:35:32.769165 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
May 8 00:35:32.769226 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
May 8 00:35:32.769297 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
May 8 00:35:32.769356 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
May 8 00:35:32.773460 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
May 8 00:35:32.773544 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
May 8 00:35:32.773600 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
May 8 00:35:32.773656 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
May 8 00:35:32.773710 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
May 8 00:35:32.773761 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
May 8 00:35:32.773815 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
May 8 00:35:32.773867 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
May 8 00:35:32.773923 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
May 8 00:35:32.773977 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
May 8 00:35:32.774028 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
May 8 00:35:32.774079 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
May 8 00:35:32.774131 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
May 8 00:35:32.774183 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
May 8 00:35:32.774246 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
May 8 00:35:32.774299 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
May 8 00:35:32.774357 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
May 8 00:35:32.774419 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
May 8 00:35:32.774471 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
May 8 00:35:32.774522 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
May 8 00:35:32.774576 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
May 8 00:35:32.774628 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
May 8 00:35:32.774679 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
May 8 00:35:32.774735 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
May 8 00:35:32.774789 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
May 8 00:35:32.774846 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
May 8 00:35:32.774900 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
May 8 00:35:32.774951 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
May 8 00:35:32.775002 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
May 8 00:35:32.775055 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
May 8 00:35:32.775106 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
May 8 00:35:32.775160 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
May 8 00:35:32.775218 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
May 8 00:35:32.775270 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
May 8 00:35:32.775321 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
May 8 00:35:32.775374 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
May 8 00:35:32.778039 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
May 8 00:35:32.778098 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
May 8 00:35:32.778108 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
May 8 00:35:32.778115 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
May 8 00:35:32.778123 kernel: ACPI: PCI: Interrupt
link LNKB disabled May 8 00:35:32.778129 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 8 00:35:32.778136 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 May 8 00:35:32.778142 kernel: iommu: Default domain type: Translated May 8 00:35:32.778148 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 8 00:35:32.778154 kernel: PCI: Using ACPI for IRQ routing May 8 00:35:32.778160 kernel: PCI: pci_cache_line_size set to 64 bytes May 8 00:35:32.778166 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] May 8 00:35:32.778172 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] May 8 00:35:32.778230 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device May 8 00:35:32.778284 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible May 8 00:35:32.778335 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 8 00:35:32.778345 kernel: vgaarb: loaded May 8 00:35:32.778351 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 May 8 00:35:32.778357 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter May 8 00:35:32.778363 kernel: clocksource: Switched to clocksource tsc-early May 8 00:35:32.778369 kernel: VFS: Disk quotas dquot_6.6.0 May 8 00:35:32.778378 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 00:35:32.778757 kernel: pnp: PnP ACPI init May 8 00:35:32.778822 kernel: system 00:00: [io 0x1000-0x103f] has been reserved May 8 00:35:32.778873 kernel: system 00:00: [io 0x1040-0x104f] has been reserved May 8 00:35:32.778920 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved May 8 00:35:32.778972 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved May 8 00:35:32.779022 kernel: pnp 00:06: [dma 2] May 8 00:35:32.779082 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved May 8 00:35:32.779143 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved May 8 00:35:32.779190 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved May 8 00:35:32.779199 kernel: pnp: PnP ACPI: found 8 devices May 8 00:35:32.779205 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 8 00:35:32.779212 kernel: NET: Registered PF_INET protocol family May 8 00:35:32.779218 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 00:35:32.779224 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 8 00:35:32.779233 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 00:35:32.779239 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 8 00:35:32.779245 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 8 00:35:32.779251 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 8 00:35:32.779257 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 8 00:35:32.779263 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 8 00:35:32.779269 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 00:35:32.779275 kernel: NET: Registered PF_XDP protocol family May 8 00:35:32.779333 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 8 00:35:32.779732 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 8 00:35:32.779812 kernel: pci 
0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 8 00:35:32.779886 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 8 00:35:32.779953 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 8 00:35:32.780008 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 May 8 00:35:32.780066 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 May 8 00:35:32.780122 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 May 8 00:35:32.780176 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 May 8 00:35:32.780230 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 May 8 00:35:32.780288 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 May 8 00:35:32.780345 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 May 8 00:35:32.780423 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 May 8 00:35:32.780479 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 May 8 00:35:32.780531 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 May 8 00:35:32.780600 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 May 8 00:35:32.780657 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 May 8 00:35:32.780711 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 May 8 00:35:32.780766 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 May 8 00:35:32.780823 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 May 8 00:35:32.780885 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 May 8 00:35:32.780956 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 May 8 00:35:32.781018 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 May 8 00:35:32.781071 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:35:32.781125 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:35:32.781176 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 8 00:35:32.781242 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.781302 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 8 00:35:32.781353 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.781417 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 8 00:35:32.781470 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.781536 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 8 00:35:32.781599 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.781650 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 8 00:35:32.781701 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.781753 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 8 00:35:32.781816 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.781867 kernel: pci 
0000:00:16.4: BAR 13: no space for [io size 0x1000] May 8 00:35:32.781937 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.782001 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 8 00:35:32.782055 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.782109 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 8 00:35:32.782162 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.782213 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 8 00:35:32.782263 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.782328 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 8 00:35:32.782421 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.782480 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 8 00:35:32.782537 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.782603 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 8 00:35:32.782660 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.782717 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 8 00:35:32.782768 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.782819 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 8 00:35:32.782869 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.782920 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 8 00:35:32.782974 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.783024 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 8 00:35:32.783075 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.783126 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 8 00:35:32.783184 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.783242 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 8 00:35:32.783304 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.783357 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 8 00:35:32.783427 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.783480 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 8 00:35:32.783530 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.783587 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 8 00:35:32.783639 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.783690 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 8 00:35:32.783744 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.783796 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 8 00:35:32.783857 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.784116 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 8 00:35:32.784258 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.784474 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 8 00:35:32.784534 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.784590 kernel: pci 0000:00:18.2: BAR 13: no space for 
[io size 0x1000] May 8 00:35:32.784643 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.784694 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 8 00:35:32.784745 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.784796 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 8 00:35:32.784850 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.784916 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 8 00:35:32.784968 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.785022 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 8 00:35:32.785073 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.785124 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 8 00:35:32.785174 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.785225 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 8 00:35:32.785276 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.785328 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 8 00:35:32.785452 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.785517 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 8 00:35:32.785576 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.785636 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 8 00:35:32.785687 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.785738 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 8 00:35:32.785789 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.785840 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 8 00:35:32.785890 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.785941 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 8 00:35:32.785995 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.786045 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 8 00:35:32.786096 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.786148 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 8 00:35:32.786205 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.786261 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 8 00:35:32.786318 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.786371 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 8 00:35:32.786436 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] May 8 00:35:32.786495 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 8 00:35:32.786546 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 8 00:35:32.786596 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 8 00:35:32.786667 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] May 8 00:35:32.786721 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 8 00:35:32.786772 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 8 00:35:32.786834 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 8 00:35:32.786886 kernel: pci 0000:00:15.0: bridge 
window [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:35:32.786945 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 8 00:35:32.786997 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 8 00:35:32.787058 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 8 00:35:32.787110 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 8 00:35:32.787164 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 8 00:35:32.787215 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 8 00:35:32.787266 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 8 00:35:32.787317 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 8 00:35:32.787374 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 8 00:35:32.787505 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 8 00:35:32.787561 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 8 00:35:32.787613 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 8 00:35:32.787663 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 8 00:35:32.787713 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 8 00:35:32.787767 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 8 00:35:32.787820 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 8 00:35:32.787870 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 8 00:35:32.787925 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 8 00:35:32.787991 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 8 00:35:32.788042 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 8 00:35:32.788092 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 8 00:35:32.788142 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 8 00:35:32.788193 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 8 00:35:32.788249 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] May 8 00:35:32.788304 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 8 00:35:32.788355 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 8 00:35:32.788418 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 8 00:35:32.788473 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:35:32.788532 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 8 00:35:32.788584 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 8 00:35:32.788639 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 8 00:35:32.788690 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 8 00:35:32.788743 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 8 00:35:32.788813 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 8 00:35:32.788871 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 8 00:35:32.788937 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 8 00:35:32.788990 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 8 00:35:32.789058 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 8 00:35:32.789131 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 8 00:35:32.789209 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 8 00:35:32.789277 kernel: pci 0000:00:16.4: 
bridge window [mem 0xfc400000-0xfc4fffff] May 8 00:35:32.789329 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 8 00:35:32.789387 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 8 00:35:32.789456 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 8 00:35:32.789524 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 8 00:35:32.789587 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 8 00:35:32.789647 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 8 00:35:32.789701 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 8 00:35:32.789753 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 8 00:35:32.789804 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 8 00:35:32.789855 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 8 00:35:32.789909 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 8 00:35:32.789975 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 8 00:35:32.790055 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 8 00:35:32.790117 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 8 00:35:32.790176 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 8 00:35:32.790228 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 8 00:35:32.790278 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 8 00:35:32.790329 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 8 00:35:32.790381 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 8 00:35:32.790442 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 8 00:35:32.790494 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 8 00:35:32.790552 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 8 00:35:32.790619 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 8 00:35:32.790675 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 8 00:35:32.790727 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 8 00:35:32.790778 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 8 00:35:32.790830 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 8 00:35:32.790881 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 8 00:35:32.790933 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 8 00:35:32.790984 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 8 00:35:32.791035 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 8 00:35:32.791093 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 8 00:35:32.791149 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 8 00:35:32.791213 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 8 00:35:32.791266 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 8 00:35:32.791318 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 8 00:35:32.791369 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 8 00:35:32.791442 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 8 00:35:32.791495 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 8 00:35:32.791574 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 8 00:35:32.791635 kernel: pci 
0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 8 00:35:32.791700 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 8 00:35:32.791764 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 8 00:35:32.791817 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 8 00:35:32.791868 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 8 00:35:32.791920 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 8 00:35:32.791981 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 8 00:35:32.792054 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 8 00:35:32.792114 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 8 00:35:32.792173 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 8 00:35:32.792485 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 8 00:35:32.792542 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 8 00:35:32.792594 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 8 00:35:32.792656 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 8 00:35:32.792722 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 8 00:35:32.792776 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 8 00:35:32.792840 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 8 00:35:32.792906 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 8 00:35:32.792959 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 8 00:35:32.793014 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 8 00:35:32.793066 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 8 00:35:32.793116 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 8 00:35:32.793171 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 8 00:35:32.793223 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] May 8 00:35:32.793274 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] May 8 00:35:32.793319 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] May 8 00:35:32.793374 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] May 8 00:35:32.793446 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] May 8 00:35:32.793502 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] May 8 00:35:32.793559 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] May 8 00:35:32.793607 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] May 8 00:35:32.793654 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] May 8 00:35:32.793700 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] May 8 00:35:32.793747 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] May 8 00:35:32.793794 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] May 8 00:35:32.793855 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] May 8 00:35:32.793909 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] May 8 00:35:32.793960 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] May 8 00:35:32.794019 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:35:32.794071 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] May 8 00:35:32.794119 kernel: pci_bus 0000:04: resource 1 [mem 
0xfd100000-0xfd1fffff] May 8 00:35:32.794166 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] May 8 00:35:32.794223 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] May 8 00:35:32.794271 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] May 8 00:35:32.794319 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] May 8 00:35:32.795441 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] May 8 00:35:32.795506 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] May 8 00:35:32.795561 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] May 8 00:35:32.795618 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] May 8 00:35:32.795671 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] May 8 00:35:32.795719 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] May 8 00:35:32.795769 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] May 8 00:35:32.795816 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] May 8 00:35:32.795870 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] May 8 00:35:32.795929 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] May 8 00:35:32.795984 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] May 8 00:35:32.796032 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] May 8 00:35:32.796079 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:35:32.796130 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] May 8 00:35:32.796178 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] May 8 00:35:32.796227 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] May 8 00:35:32.796282 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] May 8 00:35:32.796331 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] May 8 00:35:32.796387 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] May 8 00:35:32.797532 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] May 8 00:35:32.797588 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] May 8 00:35:32.797641 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] May 8 00:35:32.797693 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] May 8 00:35:32.797744 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] May 8 00:35:32.797792 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] May 8 00:35:32.797844 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] May 8 00:35:32.797892 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] May 8 00:35:32.797942 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] May 8 00:35:32.797993 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] May 8 00:35:32.798043 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] May 8 00:35:32.798091 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] May 8 00:35:32.798137 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] May 8 00:35:32.798188 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] May 8 00:35:32.798236 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] May 8 00:35:32.798283 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] May 8 00:35:32.798337 kernel: pci_bus 
0000:15: resource 0 [io 0xe000-0xefff] May 8 00:35:32.798397 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] May 8 00:35:32.798447 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] May 8 00:35:32.798502 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] May 8 00:35:32.798550 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] May 8 00:35:32.798601 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] May 8 00:35:32.798651 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] May 8 00:35:32.798703 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] May 8 00:35:32.798751 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] May 8 00:35:32.798802 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] May 8 00:35:32.798850 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] May 8 00:35:32.798902 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] May 8 00:35:32.798950 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] May 8 00:35:32.799005 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] May 8 00:35:32.799053 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] May 8 00:35:32.799100 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] May 8 00:35:32.799150 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] May 8 00:35:32.799198 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] May 8 00:35:32.799248 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] May 8 00:35:32.799299 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] May 8 00:35:32.799347 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] May 8 00:35:32.799409 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] May 8 00:35:32.799458 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] May 8 00:35:32.799509 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] May 8 00:35:32.799557 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] May 8 00:35:32.799615 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] May 8 00:35:32.799663 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] May 8 00:35:32.799714 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] May 8 00:35:32.799762 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] May 8 00:35:32.799813 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] May 8 00:35:32.799861 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] May 8 00:35:32.799919 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 8 00:35:32.799929 kernel: PCI: CLS 32 bytes, default 64 May 8 00:35:32.799936 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 8 00:35:32.799943 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 8 00:35:32.799950 kernel: clocksource: Switched to clocksource tsc May 8 00:35:32.799956 kernel: Initialise system trusted keyrings May 8 00:35:32.799962 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 8 00:35:32.799969 kernel: Key type asymmetric registered May 8 00:35:32.799977 kernel: Asymmetric key parser 'x509' registered May 8 00:35:32.799983 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 251) May 8 00:35:32.799990 kernel: io scheduler mq-deadline registered May 8 00:35:32.799996 kernel: io scheduler kyber registered May 8 00:35:32.800003 kernel: io scheduler bfq registered May 8 00:35:32.800057 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 May 8 00:35:32.800110 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.800163 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 May 8 00:35:32.800216 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.800271 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 May 8 00:35:32.800323 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.800376 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 May 8 00:35:32.802559 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.802622 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 May 8 00:35:32.802678 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.802739 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 May 8 00:35:32.802793 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.802846 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 May 8 00:35:32.802898 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.802951 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 May 8 00:35:32.803006 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.803060 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 May 8 00:35:32.803114 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.803166 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 May 8 00:35:32.803218 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.803271 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 May 8 00:35:32.803323 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.803378 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 May 8 00:35:32.804072 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.804132 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 May 8 00:35:32.804186 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.804239 kernel: pcieport 0000:00:16.5: PME: Signaling 
with IRQ 37 May 8 00:35:32.804294 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.804348 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 May 8 00:35:32.804418 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.804472 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 May 8 00:35:32.804527 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.804589 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 May 8 00:35:32.804667 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.804721 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 May 8 00:35:32.804772 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.804823 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 May 8 00:35:32.804874 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.804926 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 May 8 00:35:32.804976 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.805031 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 May 8 00:35:32.805082 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.805134 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 May 8 00:35:32.805186 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.805239 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 May 8 00:35:32.805293 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.805346 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 May 8 00:35:32.805411 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.805474 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 May 8 00:35:32.805530 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.808814 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 May 8 00:35:32.808890 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.808950 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 May 8 00:35:32.809004 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.809059 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 May 8 00:35:32.809113 
kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.809167 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 May 8 00:35:32.809223 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.809276 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 May 8 00:35:32.809328 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.809439 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 May 8 00:35:32.809501 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.809558 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 May 8 00:35:32.809619 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.809630 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 8 00:35:32.809637 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 00:35:32.809643 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 8 00:35:32.809650 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 May 8 00:35:32.809656 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 8 00:35:32.809666 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 8 00:35:32.809720 kernel: rtc_cmos 00:01: registered as rtc0 May 8 00:35:32.809768 kernel: rtc_cmos 00:01: setting system clock to 2025-05-08T00:35:32 UTC (1746664532) May 8 00:35:32.809777 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 8 00:35:32.809822 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram May 8 00:35:32.809830 kernel: intel_pstate: CPU model not supported May 8 00:35:32.809837 kernel: NET: Registered PF_INET6 protocol family May 8 00:35:32.809843 kernel: Segment Routing with IPv6 May 8 00:35:32.809852 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:35:32.809860 kernel: NET: Registered PF_PACKET protocol family May 8 00:35:32.809866 kernel: Key type dns_resolver registered May 8 00:35:32.809872 kernel: IPI shorthand broadcast: enabled May 8 00:35:32.809879 kernel: sched_clock: Marking stable (948003795, 241470179)->(1256647397, -67173423) May 8 00:35:32.809885 kernel: registered taskstats version 1 May 8 00:35:32.809892 kernel: Loading compiled-in X.509 certificates May 8 00:35:32.809898 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 75e4e434c57439d3f2eaf7797bbbcdd698dafd0e' May 8 00:35:32.809904 kernel: Key type .fscrypt registered May 8 00:35:32.809912 kernel: Key type fscrypt-provisioning registered May 8 00:35:32.809918 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 8 00:35:32.809924 kernel: ima: Allocated hash algorithm: sha1 May 8 00:35:32.809931 kernel: ima: No architecture policies found May 8 00:35:32.809937 kernel: clk: Disabling unused clocks May 8 00:35:32.809943 kernel: Freeing unused kernel image (initmem) memory: 42856K May 8 00:35:32.809949 kernel: Write protecting the kernel read-only data: 36864k May 8 00:35:32.809956 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 8 00:35:32.809962 kernel: Run /init as init process May 8 00:35:32.809970 kernel: with arguments: May 8 00:35:32.809976 kernel: /init May 8 00:35:32.809982 kernel: with environment: May 8 00:35:32.809989 kernel: HOME=/ May 8 00:35:32.809995 kernel: TERM=linux May 8 00:35:32.810001 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:35:32.810009 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:35:32.810017 systemd[1]: Detected virtualization vmware. May 8 00:35:32.810024 systemd[1]: Detected architecture x86-64. May 8 00:35:32.810031 systemd[1]: Running in initrd. May 8 00:35:32.810037 systemd[1]: No hostname configured, using default hostname. May 8 00:35:32.810043 systemd[1]: Hostname set to . May 8 00:35:32.810050 systemd[1]: Initializing machine ID from random generator. May 8 00:35:32.810057 systemd[1]: Queued start job for default target initrd.target. May 8 00:35:32.810063 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:35:32.810070 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:35:32.810079 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 8 00:35:32.810085 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:35:32.810092 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 8 00:35:32.810098 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 8 00:35:32.810106 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 8 00:35:32.810113 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 8 00:35:32.810120 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:35:32.810127 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:35:32.810134 systemd[1]: Reached target paths.target - Path Units. May 8 00:35:32.810141 systemd[1]: Reached target slices.target - Slice Units. May 8 00:35:32.810147 systemd[1]: Reached target swap.target - Swaps. May 8 00:35:32.810154 systemd[1]: Reached target timers.target - Timer Units. May 8 00:35:32.810160 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:35:32.810167 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:35:32.810173 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 8 00:35:32.810182 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
May 8 00:35:32.810188 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:35:32.810195 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:35:32.810202 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:35:32.810208 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:35:32.810215 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 8 00:35:32.810221 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:35:32.810228 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 8 00:35:32.810234 systemd[1]: Starting systemd-fsck-usr.service... May 8 00:35:32.810242 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:35:32.810248 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:35:32.810255 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:35:32.810261 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 8 00:35:32.810281 systemd-journald[215]: Collecting audit messages is disabled. May 8 00:35:32.810299 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:35:32.810306 systemd[1]: Finished systemd-fsck-usr.service. May 8 00:35:32.810313 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:35:32.810321 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:35:32.810328 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:35:32.810335 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 00:35:32.810341 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:35:32.810348 kernel: Bridge firewalling registered May 8 00:35:32.810355 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:35:32.810362 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:35:32.810368 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:35:32.810375 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:35:32.811094 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:35:32.811106 systemd-journald[215]: Journal started May 8 00:35:32.811122 systemd-journald[215]: Runtime Journal (/run/log/journal/ecb45c8c43c44955a441a24d321f04de) is 4.8M, max 38.6M, 33.8M free. May 8 00:35:32.765715 systemd-modules-load[216]: Inserted module 'overlay' May 8 00:35:32.791548 systemd-modules-load[216]: Inserted module 'br_netfilter' May 8 00:35:32.816396 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 8 00:35:32.817706 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:35:32.818403 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:35:32.820487 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
May 8 00:35:32.822838 dracut-cmdline[237]: dracut-dracut-053 May 8 00:35:32.824585 dracut-cmdline[237]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90 May 8 00:35:32.828147 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:35:32.829230 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:35:32.850968 systemd-resolved[263]: Positive Trust Anchors: May 8 00:35:32.850982 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:35:32.851006 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:35:32.853047 systemd-resolved[263]: Defaulting to hostname 'linux'. May 8 00:35:32.853889 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:35:32.854053 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:35:32.875406 kernel: SCSI subsystem initialized May 8 00:35:32.881394 kernel: Loading iSCSI transport class v2.0-870. May 8 00:35:32.888396 kernel: iscsi: registered transport (tcp) May 8 00:35:32.901723 kernel: iscsi: registered transport (qla4xxx) May 8 00:35:32.901765 kernel: QLogic iSCSI HBA Driver May 8 00:35:32.922084 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 8 00:35:32.925486 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 8 00:35:32.940841 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 00:35:32.940885 kernel: device-mapper: uevent: version 1.0.3 May 8 00:35:32.941940 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 8 00:35:32.972407 kernel: raid6: avx2x4 gen() 51392 MB/s May 8 00:35:32.989412 kernel: raid6: avx2x2 gen() 49488 MB/s May 8 00:35:33.006690 kernel: raid6: avx2x1 gen() 43464 MB/s May 8 00:35:33.006741 kernel: raid6: using algorithm avx2x4 gen() 51392 MB/s May 8 00:35:33.024625 kernel: raid6: .... xor() 21239 MB/s, rmw enabled May 8 00:35:33.024658 kernel: raid6: using avx2x2 recovery algorithm May 8 00:35:33.038399 kernel: xor: automatically using best checksumming function avx May 8 00:35:33.141414 kernel: Btrfs loaded, zoned=no, fsverity=no May 8 00:35:33.147311 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 8 00:35:33.150479 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:35:33.158907 systemd-udevd[432]: Using default interface naming scheme 'v255'. 
May 8 00:35:33.161390 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:35:33.170718 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 8 00:35:33.177604 dracut-pre-trigger[438]: rd.md=0: removing MD RAID activation May 8 00:35:33.194265 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:35:33.197469 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:35:33.268083 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:35:33.273500 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 8 00:35:33.287600 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 00:35:33.288469 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:35:33.289007 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:35:33.289509 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:35:33.295577 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 00:35:33.303394 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 00:35:33.334406 kernel: VMware PVSCSI driver - version 1.0.7.0-k May 8 00:35:33.338760 kernel: vmw_pvscsi: using 64bit dma May 8 00:35:33.338794 kernel: vmw_pvscsi: max_id: 16 May 8 00:35:33.338807 kernel: vmw_pvscsi: setting ring_pages to 8 May 8 00:35:33.345931 kernel: vmw_pvscsi: enabling reqCallThreshold May 8 00:35:33.345974 kernel: vmw_pvscsi: driver-based request coalescing enabled May 8 00:35:33.345988 kernel: vmw_pvscsi: using MSI-X May 8 00:35:33.345999 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 May 8 00:35:33.348099 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 May 8 00:35:33.352855 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 May 8 00:35:33.352888 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI May 8 00:35:33.358402 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 May 8 00:35:33.368743 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps May 8 00:35:33.373396 kernel: cryptd: max_cpu_qlen set to 1000 May 8 00:35:33.379403 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 May 8 00:35:33.382401 kernel: libata version 3.00 loaded. May 8 00:35:33.382436 kernel: ata_piix 0000:00:07.1: version 2.13 May 8 00:35:33.389530 kernel: scsi host1: ata_piix May 8 00:35:33.389615 kernel: scsi host2: ata_piix May 8 00:35:33.389683 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 May 8 00:35:33.389693 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 May 8 00:35:33.384314 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:35:33.384393 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:35:33.384787 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:35:33.384878 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:35:33.384952 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:35:33.385053 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:35:33.392475 kernel: AVX2 version of gcm_enc/dec engaged. 
May 8 00:35:33.392497 kernel: AES CTR mode by8 optimization enabled May 8 00:35:33.393820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:35:33.407922 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:35:33.412473 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:35:33.422883 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:35:33.560401 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 May 8 00:35:33.565446 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 May 8 00:35:33.575781 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) May 8 00:35:33.614108 kernel: sd 0:0:0:0: [sda] Write Protect is off May 8 00:35:33.614443 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 May 8 00:35:33.614516 kernel: sd 0:0:0:0: [sda] Cache data unavailable May 8 00:35:33.614580 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through May 8 00:35:33.614644 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:35:33.614653 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 8 00:35:33.638424 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray May 8 00:35:33.653923 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 8 00:35:33.653937 kernel: BTRFS: device fsid 28014d97-e6d7-4db4-b1d9-76a980e09972 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (494) May 8 00:35:33.653950 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (488) May 8 00:35:33.653957 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 8 00:35:33.644994 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. May 8 00:35:33.648329 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. May 8 00:35:33.653248 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. May 8 00:35:33.654044 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. May 8 00:35:33.657210 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 8 00:35:33.676611 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 00:35:33.718410 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:35:33.721819 kernel: GPT:disk_guids don't match. May 8 00:35:33.721852 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 00:35:33.721861 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:35:34.728401 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:35:34.728951 disk-uuid[589]: The operation has completed successfully. May 8 00:35:34.803533 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 00:35:34.803602 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 00:35:34.807491 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 8 00:35:34.809209 sh[610]: Success May 8 00:35:34.817397 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 8 00:35:34.889887 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 00:35:34.895228 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 00:35:34.895577 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
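The "GPT:disk_guids don't match" warnings above are expected on a Flatcar first boot: disk-uuid.service randomizes the GPT disk GUID, and the kernel complains until it re-reads the partition table. For reference, a small sketch of where that GUID lives; it assumes 512-byte sectors and reads only the primary header:

    import uuid

    def gpt_disk_guid(path: str) -> uuid.UUID:
        """Read the disk GUID from the primary GPT header (LBA 1, 512-byte sectors)."""
        with open(path, "rb") as dev:
            dev.seek(512)                         # primary GPT header starts at LBA 1
            header = dev.read(92)
        if header[:8] != b"EFI PART":
            raise ValueError("no GPT signature")
        return uuid.UUID(bytes_le=header[56:72])  # disk GUID field, mixed-endian

    # gpt_disk_guid("/dev/sda") needs root; it works the same on a raw disk image.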
May 8 00:35:34.940531 kernel: BTRFS info (device dm-0): first mount of filesystem 28014d97-e6d7-4db4-b1d9-76a980e09972 May 8 00:35:34.940584 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 8 00:35:34.940602 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 00:35:34.940613 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 00:35:34.941597 kernel: BTRFS info (device dm-0): using free space tree May 8 00:35:34.950411 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 8 00:35:34.952626 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 00:35:34.961489 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... May 8 00:35:34.963030 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 00:35:34.993076 kernel: BTRFS info (device sda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:35:34.993124 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:35:34.993135 kernel: BTRFS info (device sda6): using free space tree May 8 00:35:35.013407 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:35:35.022650 systemd[1]: mnt-oem.mount: Deactivated successfully. May 8 00:35:35.024405 kernel: BTRFS info (device sda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:35:35.032198 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 8 00:35:35.036499 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 00:35:35.054016 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 8 00:35:35.058476 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 8 00:35:35.119780 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:35:35.129591 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:35:35.144295 ignition[671]: Ignition 2.19.0 May 8 00:35:35.144306 ignition[671]: Stage: fetch-offline May 8 00:35:35.144328 ignition[671]: no configs at "/usr/lib/ignition/base.d" May 8 00:35:35.144334 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:35:35.144408 ignition[671]: parsed url from cmdline: "" May 8 00:35:35.144410 ignition[671]: no config URL provided May 8 00:35:35.144415 ignition[671]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:35:35.144420 ignition[671]: no config at "/usr/lib/ignition/user.ign" May 8 00:35:35.144795 ignition[671]: config successfully fetched May 8 00:35:35.144815 ignition[671]: parsing config with SHA512: df2d745cd5d15d8bfd659031ade3adac226e33ccb6c98ee0d74f0c7a652f81ea3cbd6c5954a139840cc82af1da3ae046d4b2a2cd75d36b240b0d59581172f66e May 8 00:35:35.145719 systemd-networkd[799]: lo: Link UP May 8 00:35:35.145723 systemd-networkd[799]: lo: Gained carrier May 8 00:35:35.146720 systemd-networkd[799]: Enumeration completed May 8 00:35:35.146908 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:35:35.147173 systemd[1]: Reached target network.target - Network. May 8 00:35:35.147264 systemd-networkd[799]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. 
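Ignition's "parsing config with SHA512: df2d..." line above logs the digest of the raw config it fetched, which makes a boot auditable against the config file that was actually applied. Reproducing the value is a one-liner over the identical bytes; the path below is the one named earlier in the log:

    import hashlib, pathlib

    def config_digest(path: str) -> str:
        """SHA-512 of a file's raw bytes, matching Ignition's logged digest."""
        return hashlib.sha512(pathlib.Path(path).read_bytes()).hexdigest()

    # config_digest("/usr/lib/ignition/user.ign") would print the df2d745c...
    # value from this log if run against the same file.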
May 8 00:35:35.149681 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 8 00:35:35.149800 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 8 00:35:35.149349 ignition[671]: fetch-offline: fetch-offline passed May 8 00:35:35.149077 unknown[671]: fetched base config from "system" May 8 00:35:35.149399 ignition[671]: Ignition finished successfully May 8 00:35:35.149081 unknown[671]: fetched user config from "vmware" May 8 00:35:35.150769 systemd-networkd[799]: ens192: Link UP May 8 00:35:35.150771 systemd-networkd[799]: ens192: Gained carrier May 8 00:35:35.151124 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:35:35.151504 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 8 00:35:35.156591 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 8 00:35:35.164687 ignition[807]: Ignition 2.19.0 May 8 00:35:35.164694 ignition[807]: Stage: kargs May 8 00:35:35.164801 ignition[807]: no configs at "/usr/lib/ignition/base.d" May 8 00:35:35.164807 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:35:35.165322 ignition[807]: kargs: kargs passed May 8 00:35:35.165349 ignition[807]: Ignition finished successfully May 8 00:35:35.166549 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 8 00:35:35.169473 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 8 00:35:35.178016 ignition[815]: Ignition 2.19.0 May 8 00:35:35.178023 ignition[815]: Stage: disks May 8 00:35:35.178117 ignition[815]: no configs at "/usr/lib/ignition/base.d" May 8 00:35:35.178123 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:35:35.178651 ignition[815]: disks: disks passed May 8 00:35:35.178682 ignition[815]: Ignition finished successfully May 8 00:35:35.179470 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 8 00:35:35.179694 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 00:35:35.179826 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:35:35.180019 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:35:35.180212 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:35:35.180398 systemd[1]: Reached target basic.target - Basic System. May 8 00:35:35.184473 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 00:35:35.195127 systemd-fsck[823]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 8 00:35:35.196325 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 00:35:35.200462 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 00:35:35.260401 kernel: EXT4-fs (sda9): mounted filesystem 36960c89-ba45-4808-a41c-bf61ce9470a3 r/w with ordered data mode. Quota mode: none. May 8 00:35:35.260562 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 00:35:35.260980 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 8 00:35:35.265478 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:35:35.266448 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 00:35:35.267617 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
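The kargs and disks stages above report "passed" with no visible work: Ignition runs a fixed stage sequence regardless, and a stage with nothing declared for it is a no-op. A toy sketch of that control flow (the stage list mirrors this log; the config shape is simplified):

    STAGES = ["fetch-offline", "kargs", "disks", "mount", "files", "umount"]

    def run_stages(config: dict) -> None:
        """Walk the fixed stage order; empty stages still run and report success."""
        for stage in STAGES:
            for op in config.get(stage, []):
                print(f"{stage}: [started] {op}")
            print(f"{stage}: {stage} passed")

    run_stages({"files": ["writing file /opt/helm-v3.13.2-linux-amd64.tar.gz"]})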
May 8 00:35:35.267651 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 00:35:35.267666 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:35:35.271478 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 00:35:35.272417 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 8 00:35:35.276524 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (831) May 8 00:35:35.276567 kernel: BTRFS info (device sda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:35:35.277734 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:35:35.277754 kernel: BTRFS info (device sda6): using free space tree May 8 00:35:35.281446 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:35:35.282307 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 00:35:35.333844 initrd-setup-root[855]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:35:35.336921 initrd-setup-root[862]: cut: /sysroot/etc/group: No such file or directory May 8 00:35:35.339318 initrd-setup-root[869]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:35:35.342026 initrd-setup-root[876]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:35:35.426009 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 00:35:35.430438 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 00:35:35.432917 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 00:35:35.437423 kernel: BTRFS info (device sda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:35:35.452553 ignition[943]: INFO : Ignition 2.19.0 May 8 00:35:35.452553 ignition[943]: INFO : Stage: mount May 8 00:35:35.452917 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:35:35.452917 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:35:35.453250 ignition[943]: INFO : mount: mount passed May 8 00:35:35.453761 ignition[943]: INFO : Ignition finished successfully May 8 00:35:35.454014 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 00:35:35.457486 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 00:35:35.477049 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 00:35:35.936638 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 8 00:35:35.941504 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:35:35.954405 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (955) May 8 00:35:35.954441 kernel: BTRFS info (device sda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:35:35.954450 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:35:35.956398 kernel: BTRFS info (device sda6): using free space tree May 8 00:35:35.959573 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:35:35.960500 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
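The four "cut: ... No such file or directory" lines below come from initrd-setup-root seeding /sysroot/etc/{passwd,group,shadow,gshadow}: it first extracts fields from any existing files, so on a pristine first boot every extraction fails harmlessly. An equivalent of `cut -d: -f1` that tolerates the missing file, as a sketch:

    import pathlib

    def first_fields(path: str) -> list[str]:
        """Like `cut -d: -f1 FILE`, but an absent file yields an empty list."""
        p = pathlib.Path(path)
        if not p.exists():
            return []            # first boot: nothing to carry over yet
        return [line.split(":", 1)[0] for line in p.read_text().splitlines() if line]

    print(first_fields("/sysroot/etc/passwd"))  # [] on a fresh image, as in this log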
May 8 00:35:35.983067 ignition[972]: INFO : Ignition 2.19.0 May 8 00:35:35.983596 ignition[972]: INFO : Stage: files May 8 00:35:35.983596 ignition[972]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:35:35.983596 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:35:35.983958 ignition[972]: DEBUG : files: compiled without relabeling support, skipping May 8 00:35:35.991427 ignition[972]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:35:35.991427 ignition[972]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:35:36.010741 ignition[972]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:35:36.010962 ignition[972]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:35:36.011113 ignition[972]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:35:36.011089 unknown[972]: wrote ssh authorized keys file for user: core May 8 00:35:36.016245 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 8 00:35:36.016872 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 8 00:35:36.055272 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 8 00:35:36.196409 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 8 00:35:36.196409 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 8 00:35:36.196890 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:35:36.196890 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 00:35:36.196890 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 00:35:36.196890 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:35:36.196890 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:35:36.196890 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:35:36.197987 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:35:36.197987 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:35:36.197987 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:35:36.197987 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:35:36.197987 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:35:36.197987 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:35:36.197987 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 8 00:35:36.677300 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 8 00:35:36.932036 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:35:36.932036 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 8 00:35:36.932521 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 8 00:35:36.932521 ignition[972]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 8 00:35:36.938151 ignition[972]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:35:36.938424 ignition[972]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:35:36.938424 ignition[972]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 8 00:35:36.938424 ignition[972]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 8 00:35:36.938424 ignition[972]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:35:36.939068 ignition[972]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:35:36.939068 ignition[972]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 8 00:35:36.939068 ignition[972]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 8 00:35:37.067552 systemd-networkd[799]: ens192: Gained IPv6LL May 8 00:35:37.074944 ignition[972]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:35:37.077402 ignition[972]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:35:37.077402 ignition[972]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 8 00:35:37.077402 ignition[972]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 8 00:35:37.078325 ignition[972]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 8 00:35:37.078325 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:35:37.078325 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:35:37.078325 ignition[972]: INFO : files: files passed May 8 00:35:37.078325 ignition[972]: INFO : Ignition finished successfully May 8 00:35:37.078528 systemd[1]: 
Finished ignition-files.service - Ignition (files). May 8 00:35:37.082484 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 00:35:37.083488 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 00:35:37.084802 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:35:37.084868 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 8 00:35:37.092196 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:35:37.092196 initrd-setup-root-after-ignition[1002]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 00:35:37.092864 initrd-setup-root-after-ignition[1006]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:35:37.093598 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:35:37.093995 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 00:35:37.097491 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 00:35:37.109817 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:35:37.109875 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 00:35:37.110278 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 00:35:37.110413 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:35:37.110613 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:35:37.111042 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:35:37.120280 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:35:37.124491 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 00:35:37.129683 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 00:35:37.129971 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:35:37.130126 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:35:37.130253 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:35:37.130330 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:35:37.130671 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:35:37.130911 systemd[1]: Stopped target basic.target - Basic System. May 8 00:35:37.131096 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:35:37.131298 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:35:37.131499 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:35:37.131690 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:35:37.132033 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:35:37.132251 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:35:37.132457 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:35:37.132646 systemd[1]: Stopped target swap.target - Swaps. May 8 00:35:37.132825 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
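Both downloads in the files stage above are logged as "GET ...: attempt #1", i.e. a retry loop that happened to succeed immediately. A sketch of that fetch-with-backoff pattern, under the assumption of capped exponential delays (Ignition's actual retry policy and hash verification are richer than this):

    import time, urllib.request

    def fetch(url: str, attempts: int = 5) -> bytes:
        """GET with retries; waits between failures with capped exponential backoff."""
        for attempt in range(1, attempts + 1):
            try:
                print(f"GET {url}: attempt #{attempt}")
                with urllib.request.urlopen(url, timeout=30) as resp:
                    return resp.read()
            except OSError:
                if attempt == attempts:
                    raise
                time.sleep(min(2 ** attempt, 30))   # 2s, 4s, 8s, ... capped at 30s
        raise AssertionError("unreachable")

    # fetch("https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz") mirrors op(3) above.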
May 8 00:35:37.132886 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:35:37.133159 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 00:35:37.133373 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:35:37.133563 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 00:35:37.133607 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:35:37.133770 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:35:37.133828 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:35:37.134086 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:35:37.134152 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:35:37.134397 systemd[1]: Stopped target paths.target - Path Units. May 8 00:35:37.134559 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:35:37.136425 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:35:37.136583 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:35:37.136780 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:35:37.136966 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:35:37.137029 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:35:37.137232 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:35:37.137277 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:35:37.137569 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:35:37.137652 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:35:37.137887 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:35:37.137964 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:35:37.145553 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:35:37.148530 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:35:37.148649 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:35:37.148744 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:35:37.148928 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:35:37.149009 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:35:37.150732 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:35:37.150793 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 00:35:37.155098 ignition[1026]: INFO : Ignition 2.19.0 May 8 00:35:37.156235 ignition[1026]: INFO : Stage: umount May 8 00:35:37.156235 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:35:37.156235 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:35:37.156235 ignition[1026]: INFO : umount: umount passed May 8 00:35:37.156235 ignition[1026]: INFO : Ignition finished successfully May 8 00:35:37.157564 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:35:37.157635 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:35:37.157879 systemd[1]: Stopped target network.target - Network. May 8 00:35:37.157980 systemd[1]: ignition-disks.service: Deactivated successfully. 
May 8 00:35:37.158007 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 00:35:37.158155 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:35:37.158177 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:35:37.158323 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:35:37.158345 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:35:37.158497 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:35:37.158518 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:35:37.158887 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:35:37.159106 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:35:37.164853 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:35:37.164928 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:35:37.165649 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:35:37.165729 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:35:37.166160 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:35:37.166194 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:35:37.169501 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:35:37.170120 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:35:37.170147 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:35:37.171119 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. May 8 00:35:37.171145 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 8 00:35:37.171299 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:35:37.171322 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:35:37.171489 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:35:37.171511 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 00:35:37.171680 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 00:35:37.171701 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:35:37.171932 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:35:37.177718 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:35:37.177800 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:35:37.185611 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:35:37.185906 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:35:37.186001 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:35:37.186449 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:35:37.186481 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 00:35:37.186612 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:35:37.186634 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:35:37.186848 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:35:37.186870 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
May 8 00:35:37.187136 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:35:37.187159 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:35:37.187472 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:35:37.187495 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:35:37.191538 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:35:37.191827 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:35:37.191857 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:35:37.191982 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 8 00:35:37.192004 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:35:37.192127 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:35:37.192154 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:35:37.192269 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:35:37.192290 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:35:37.194744 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:35:37.194810 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:35:37.442106 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:35:37.442184 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:35:37.442525 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:35:37.442677 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:35:37.442712 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:35:37.446588 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:35:37.458639 systemd[1]: Switching root. 
May 8 00:35:37.496128 systemd-journald[215]: Journal stopped
6816.00 BogoMIPS (lpj=3408000) May 8 00:35:32.751636 kernel: Disabled fast string operations May 8 00:35:32.751643 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 8 00:35:32.751649 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 8 00:35:32.751655 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 8 00:35:32.751661 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 8 00:35:32.751666 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 8 00:35:32.751672 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 8 00:35:32.751678 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 8 00:35:32.751683 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 8 00:35:32.751689 kernel: RETBleed: Mitigation: Enhanced IBRS May 8 00:35:32.751696 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 8 00:35:32.751702 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 8 00:35:32.751708 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 8 00:35:32.751714 kernel: SRBDS: Unknown: Dependent on hypervisor status May 8 00:35:32.751720 kernel: GDS: Unknown: Dependent on hypervisor status May 8 00:35:32.751725 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 8 00:35:32.751731 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 8 00:35:32.751737 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 8 00:35:32.751743 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 8 00:35:32.751749 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 8 00:35:32.751755 kernel: Freeing SMP alternatives memory: 32K May 8 00:35:32.751761 kernel: pid_max: default: 131072 minimum: 1024 May 8 00:35:32.751767 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 8 00:35:32.751773 kernel: landlock: Up and running. May 8 00:35:32.751779 kernel: SELinux: Initializing. May 8 00:35:32.751784 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 8 00:35:32.751790 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 8 00:35:32.751796 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 8 00:35:32.751803 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 8 00:35:32.751809 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 8 00:35:32.751815 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 8 00:35:32.751820 kernel: Performance Events: Skylake events, core PMU driver. 
May 8 00:35:32.751826 kernel: core: CPUID marked event: 'cpu cycles' unavailable May 8 00:35:32.751832 kernel: core: CPUID marked event: 'instructions' unavailable May 8 00:35:32.751838 kernel: core: CPUID marked event: 'bus cycles' unavailable May 8 00:35:32.751843 kernel: core: CPUID marked event: 'cache references' unavailable May 8 00:35:32.751850 kernel: core: CPUID marked event: 'cache misses' unavailable May 8 00:35:32.751856 kernel: core: CPUID marked event: 'branch instructions' unavailable May 8 00:35:32.751862 kernel: core: CPUID marked event: 'branch misses' unavailable May 8 00:35:32.751874 kernel: ... version: 1 May 8 00:35:32.751880 kernel: ... bit width: 48 May 8 00:35:32.751886 kernel: ... generic registers: 4 May 8 00:35:32.751891 kernel: ... value mask: 0000ffffffffffff May 8 00:35:32.751897 kernel: ... max period: 000000007fffffff May 8 00:35:32.751903 kernel: ... fixed-purpose events: 0 May 8 00:35:32.751910 kernel: ... event mask: 000000000000000f May 8 00:35:32.751916 kernel: signal: max sigframe size: 1776 May 8 00:35:32.751922 kernel: rcu: Hierarchical SRCU implementation. May 8 00:35:32.751928 kernel: rcu: Max phase no-delay instances is 400. May 8 00:35:32.751933 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 8 00:35:32.751939 kernel: smp: Bringing up secondary CPUs ... May 8 00:35:32.751945 kernel: smpboot: x86: Booting SMP configuration: May 8 00:35:32.751951 kernel: .... node #0, CPUs: #1 May 8 00:35:32.751956 kernel: Disabled fast string operations May 8 00:35:32.751962 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 May 8 00:35:32.751969 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 May 8 00:35:32.751974 kernel: smp: Brought up 1 node, 2 CPUs May 8 00:35:32.751980 kernel: smpboot: Max logical packages: 128 May 8 00:35:32.751986 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) May 8 00:35:32.751992 kernel: devtmpfs: initialized May 8 00:35:32.751998 kernel: x86/mm: Memory block size: 128MB May 8 00:35:32.752003 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) May 8 00:35:32.752009 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:35:32.752016 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 8 00:35:32.752022 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:35:32.752028 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 00:35:32.752034 kernel: audit: initializing netlink subsys (disabled) May 8 00:35:32.752040 kernel: audit: type=2000 audit(1746664531.068:1): state=initialized audit_enabled=0 res=1 May 8 00:35:32.752046 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:35:32.752051 kernel: thermal_sys: Registered thermal governor 'user_space' May 8 00:35:32.752057 kernel: cpuidle: using governor menu May 8 00:35:32.752063 kernel: Simple Boot Flag at 0x36 set to 0x80 May 8 00:35:32.752069 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 00:35:32.752075 kernel: dca service started, version 1.12.1 May 8 00:35:32.752100 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) May 8 00:35:32.752106 kernel: PCI: Using configuration type 1 for base access May 8 00:35:32.752112 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 8 00:35:32.752118 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:35:32.752123 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 8 00:35:32.752146 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:35:32.752152 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 8 00:35:32.752157 kernel: ACPI: Added _OSI(Module Device) May 8 00:35:32.752164 kernel: ACPI: Added _OSI(Processor Device) May 8 00:35:32.752170 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:35:32.752176 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:35:32.752181 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 00:35:32.752187 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored May 8 00:35:32.752193 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 8 00:35:32.752199 kernel: ACPI: Interpreter enabled May 8 00:35:32.752204 kernel: ACPI: PM: (supports S0 S1 S5) May 8 00:35:32.752210 kernel: ACPI: Using IOAPIC for interrupt routing May 8 00:35:32.752217 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 8 00:35:32.752223 kernel: PCI: Using E820 reservations for host bridge windows May 8 00:35:32.752229 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F May 8 00:35:32.752234 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) May 8 00:35:32.752315 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:35:32.752371 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] May 8 00:35:32.752440 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] May 8 00:35:32.752452 kernel: PCI host bridge to bus 0000:00 May 8 00:35:32.752504 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 8 00:35:32.752549 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] May 8 00:35:32.752593 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 8 00:35:32.752637 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 8 00:35:32.752684 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] May 8 00:35:32.752728 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] May 8 00:35:32.752791 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 May 8 00:35:32.752847 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 May 8 00:35:32.752906 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 May 8 00:35:32.752961 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a May 8 00:35:32.753012 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] May 8 00:35:32.753061 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 8 00:35:32.753113 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 8 00:35:32.753163 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 8 00:35:32.753213 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 8 00:35:32.753266 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 May 8 00:35:32.753317 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI May 8 00:35:32.753366 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB May 8 00:35:32.753444 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 May 8 00:35:32.753500 kernel: pci 0000:00:07.7: reg 0x10: [io 
0x1080-0x10bf] May 8 00:35:32.753551 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] May 8 00:35:32.753613 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 May 8 00:35:32.753666 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] May 8 00:35:32.753716 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] May 8 00:35:32.753768 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] May 8 00:35:32.753817 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] May 8 00:35:32.753869 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 8 00:35:32.753923 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 May 8 00:35:32.753978 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.754029 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold May 8 00:35:32.754083 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.754134 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold May 8 00:35:32.754189 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.754240 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold May 8 00:35:32.754296 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.754346 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold May 8 00:35:32.754423 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.754477 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold May 8 00:35:32.754534 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.754586 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold May 8 00:35:32.754641 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.754692 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold May 8 00:35:32.754782 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.754832 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold May 8 00:35:32.754888 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.754957 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold May 8 00:35:32.755012 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.755063 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold May 8 00:35:32.755117 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.755169 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold May 8 00:35:32.755228 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.755280 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold May 8 00:35:32.755334 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.755420 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold May 8 00:35:32.755481 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.755532 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold May 8 00:35:32.755590 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.755641 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold May 8 00:35:32.755697 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.755748 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold May 8 00:35:32.755804 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.755855 kernel: pci 0000:00:17.0: PME# supported from D0 
D3hot D3cold May 8 00:35:32.755913 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.755964 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold May 8 00:35:32.756018 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.756070 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold May 8 00:35:32.756126 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.756178 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold May 8 00:35:32.756234 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.756286 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold May 8 00:35:32.756341 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.756430 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold May 8 00:35:32.756488 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.756540 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold May 8 00:35:32.756593 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.756658 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold May 8 00:35:32.756713 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.756766 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold May 8 00:35:32.756999 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.757053 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold May 8 00:35:32.757109 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.757164 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold May 8 00:35:32.757221 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.757273 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold May 8 00:35:32.757335 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.757658 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold May 8 00:35:32.757725 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.759472 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold May 8 00:35:32.759536 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.759594 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold May 8 00:35:32.759658 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 May 8 00:35:32.759715 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold May 8 00:35:32.759771 kernel: pci_bus 0000:01: extended config space not accessible May 8 00:35:32.759827 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 8 00:35:32.759880 kernel: pci_bus 0000:02: extended config space not accessible May 8 00:35:32.759890 kernel: acpiphp: Slot [32] registered May 8 00:35:32.759896 kernel: acpiphp: Slot [33] registered May 8 00:35:32.759902 kernel: acpiphp: Slot [34] registered May 8 00:35:32.759909 kernel: acpiphp: Slot [35] registered May 8 00:35:32.759914 kernel: acpiphp: Slot [36] registered May 8 00:35:32.759920 kernel: acpiphp: Slot [37] registered May 8 00:35:32.759926 kernel: acpiphp: Slot [38] registered May 8 00:35:32.759934 kernel: acpiphp: Slot [39] registered May 8 00:35:32.759940 kernel: acpiphp: Slot [40] registered May 8 00:35:32.759946 kernel: acpiphp: Slot [41] registered May 8 00:35:32.759952 kernel: acpiphp: Slot [42] registered May 8 00:35:32.759957 kernel: acpiphp: Slot [43] registered May 8 00:35:32.759965 kernel: acpiphp: Slot [44] registered May 8 
00:35:32.759973 kernel: acpiphp: Slot [45] registered May 8 00:35:32.759979 kernel: acpiphp: Slot [46] registered May 8 00:35:32.759988 kernel: acpiphp: Slot [47] registered May 8 00:35:32.760000 kernel: acpiphp: Slot [48] registered May 8 00:35:32.760009 kernel: acpiphp: Slot [49] registered May 8 00:35:32.760015 kernel: acpiphp: Slot [50] registered May 8 00:35:32.760021 kernel: acpiphp: Slot [51] registered May 8 00:35:32.760027 kernel: acpiphp: Slot [52] registered May 8 00:35:32.760033 kernel: acpiphp: Slot [53] registered May 8 00:35:32.760038 kernel: acpiphp: Slot [54] registered May 8 00:35:32.760044 kernel: acpiphp: Slot [55] registered May 8 00:35:32.760050 kernel: acpiphp: Slot [56] registered May 8 00:35:32.760056 kernel: acpiphp: Slot [57] registered May 8 00:35:32.760063 kernel: acpiphp: Slot [58] registered May 8 00:35:32.760069 kernel: acpiphp: Slot [59] registered May 8 00:35:32.760075 kernel: acpiphp: Slot [60] registered May 8 00:35:32.760081 kernel: acpiphp: Slot [61] registered May 8 00:35:32.760087 kernel: acpiphp: Slot [62] registered May 8 00:35:32.760093 kernel: acpiphp: Slot [63] registered May 8 00:35:32.760148 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) May 8 00:35:32.760201 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 8 00:35:32.760254 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 8 00:35:32.760305 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 8 00:35:32.760357 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) May 8 00:35:32.761473 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) May 8 00:35:32.761533 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) May 8 00:35:32.761588 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) May 8 00:35:32.761645 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) May 8 00:35:32.761705 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 May 8 00:35:32.761764 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] May 8 00:35:32.761817 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] May 8 00:35:32.761873 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 8 00:35:32.761927 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 8 00:35:32.761979 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 8 00:35:32.762034 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 8 00:35:32.762085 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 8 00:35:32.762140 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 8 00:35:32.762197 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 8 00:35:32.762255 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 8 00:35:32.762316 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 8 00:35:32.762368 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 8 00:35:32.763458 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 8 00:35:32.763517 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 8 00:35:32.763570 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 8 00:35:32.763632 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 8 00:35:32.763700 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 8 00:35:32.763753 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 8 00:35:32.763803 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 8 00:35:32.763854 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 8 00:35:32.763905 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 8 00:35:32.763956 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 8 00:35:32.764012 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 8 00:35:32.764062 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 8 00:35:32.764112 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 8 00:35:32.764164 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 8 00:35:32.764216 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 8 00:35:32.764269 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 8 00:35:32.764327 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 8 00:35:32.764427 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 8 00:35:32.764491 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 8 00:35:32.764556 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 May 8 00:35:32.764618 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] May 8 00:35:32.764672 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] May 8 00:35:32.764728 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] May 8 00:35:32.764780 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] May 8 00:35:32.764833 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 8 00:35:32.764886 kernel: pci 0000:0b:00.0: supports D1 D2 May 8 00:35:32.764937 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 8 00:35:32.764989 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 8 00:35:32.765042 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 8 00:35:32.765099 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 8 00:35:32.765158 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 8 00:35:32.765213 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 8 00:35:32.765264 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 8 00:35:32.765314 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 8 00:35:32.767487 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 8 00:35:32.767551 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 8 00:35:32.767605 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 8 00:35:32.767657 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 8 00:35:32.767713 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 8 00:35:32.767766 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 8 00:35:32.767817 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 8 00:35:32.767874 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 8 00:35:32.767929 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 8 00:35:32.767979 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 8 00:35:32.768037 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 8 00:35:32.768099 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 8 00:35:32.768150 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 8 00:35:32.768200 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 8 00:35:32.768253 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 8 00:35:32.768304 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 8 00:35:32.768354 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 8 00:35:32.768414 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 8 00:35:32.768464 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 8 00:35:32.768515 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 8 00:35:32.768570 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 8 00:35:32.768621 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 8 00:35:32.768675 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 8 00:35:32.768727 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 8 00:35:32.768780 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 8 00:35:32.768831 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 8 00:35:32.768884 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 8 00:35:32.768957 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 8 00:35:32.769012 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 8 00:35:32.769064 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 8 00:35:32.769114 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 8 00:35:32.769165 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 8 00:35:32.769226 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 8 00:35:32.769297 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 8 00:35:32.769356 kernel: pci 0000:00:17.3: bridge window [mem 
0xe6e00000-0xe6efffff 64bit pref] May 8 00:35:32.773460 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 8 00:35:32.773544 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 8 00:35:32.773600 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 8 00:35:32.773656 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 8 00:35:32.773710 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 8 00:35:32.773761 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 8 00:35:32.773815 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 8 00:35:32.773867 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 8 00:35:32.773923 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 8 00:35:32.773977 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 8 00:35:32.774028 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 8 00:35:32.774079 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 8 00:35:32.774131 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 8 00:35:32.774183 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 8 00:35:32.774246 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 8 00:35:32.774299 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 8 00:35:32.774357 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 8 00:35:32.774419 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 8 00:35:32.774471 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 8 00:35:32.774522 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 8 00:35:32.774576 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 8 00:35:32.774628 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 8 00:35:32.774679 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 8 00:35:32.774735 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 8 00:35:32.774789 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 8 00:35:32.774846 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 8 00:35:32.774900 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 8 00:35:32.774951 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 8 00:35:32.775002 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 8 00:35:32.775055 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 8 00:35:32.775106 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 8 00:35:32.775160 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 8 00:35:32.775218 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 8 00:35:32.775270 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 8 00:35:32.775321 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 8 00:35:32.775374 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 8 00:35:32.778039 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 8 00:35:32.778098 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 8 00:35:32.778108 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 May 8 00:35:32.778115 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 May 8 00:35:32.778123 kernel: ACPI: PCI: Interrupt 
link LNKB disabled May 8 00:35:32.778129 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 8 00:35:32.778136 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 May 8 00:35:32.778142 kernel: iommu: Default domain type: Translated May 8 00:35:32.778148 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 8 00:35:32.778154 kernel: PCI: Using ACPI for IRQ routing May 8 00:35:32.778160 kernel: PCI: pci_cache_line_size set to 64 bytes May 8 00:35:32.778166 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] May 8 00:35:32.778172 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] May 8 00:35:32.778230 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device May 8 00:35:32.778284 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible May 8 00:35:32.778335 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 8 00:35:32.778345 kernel: vgaarb: loaded May 8 00:35:32.778351 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 May 8 00:35:32.778357 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter May 8 00:35:32.778363 kernel: clocksource: Switched to clocksource tsc-early May 8 00:35:32.778369 kernel: VFS: Disk quotas dquot_6.6.0 May 8 00:35:32.778378 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 00:35:32.778757 kernel: pnp: PnP ACPI init May 8 00:35:32.778822 kernel: system 00:00: [io 0x1000-0x103f] has been reserved May 8 00:35:32.778873 kernel: system 00:00: [io 0x1040-0x104f] has been reserved May 8 00:35:32.778920 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved May 8 00:35:32.778972 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved May 8 00:35:32.779022 kernel: pnp 00:06: [dma 2] May 8 00:35:32.779082 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved May 8 00:35:32.779143 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved May 8 00:35:32.779190 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved May 8 00:35:32.779199 kernel: pnp: PnP ACPI: found 8 devices May 8 00:35:32.779205 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 8 00:35:32.779212 kernel: NET: Registered PF_INET protocol family May 8 00:35:32.779218 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 00:35:32.779224 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 8 00:35:32.779233 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 00:35:32.779239 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 8 00:35:32.779245 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 8 00:35:32.779251 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 8 00:35:32.779257 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 8 00:35:32.779263 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 8 00:35:32.779269 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 00:35:32.779275 kernel: NET: Registered PF_XDP protocol family May 8 00:35:32.779333 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 8 00:35:32.779732 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 8 00:35:32.779812 kernel: pci 
0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 8 00:35:32.779886 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 8 00:35:32.779953 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 8 00:35:32.780008 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 May 8 00:35:32.780066 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 May 8 00:35:32.780122 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 May 8 00:35:32.780176 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 May 8 00:35:32.780230 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 May 8 00:35:32.780288 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 May 8 00:35:32.780345 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 May 8 00:35:32.780423 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 May 8 00:35:32.780479 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 May 8 00:35:32.780531 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 May 8 00:35:32.780600 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 May 8 00:35:32.780657 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 May 8 00:35:32.780711 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 May 8 00:35:32.780766 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 May 8 00:35:32.780823 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 May 8 00:35:32.780885 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 May 8 00:35:32.780956 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 May 8 00:35:32.781018 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 May 8 00:35:32.781071 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:35:32.781125 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:35:32.781176 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 8 00:35:32.781242 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.781302 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 8 00:35:32.781353 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.781417 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 8 00:35:32.781470 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.781536 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 8 00:35:32.781599 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.781650 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 8 00:35:32.781701 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.781753 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 8 00:35:32.781816 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.781867 kernel: pci 
0000:00:16.4: BAR 13: no space for [io size 0x1000] May 8 00:35:32.781937 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.782001 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 8 00:35:32.782055 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.782109 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 8 00:35:32.782162 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.782213 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 8 00:35:32.782263 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.782328 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 8 00:35:32.782421 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.782480 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 8 00:35:32.782537 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.782603 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 8 00:35:32.782660 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.782717 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 8 00:35:32.782768 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.782819 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 8 00:35:32.782869 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.782920 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 8 00:35:32.782974 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.783024 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 8 00:35:32.783075 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.783126 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 8 00:35:32.783184 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.783242 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 8 00:35:32.783304 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.783357 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 8 00:35:32.783427 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.783480 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 8 00:35:32.783530 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.783587 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 8 00:35:32.783639 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.783690 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 8 00:35:32.783744 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.783796 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 8 00:35:32.783857 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.784116 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 8 00:35:32.784258 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.784474 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 8 00:35:32.784534 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.784590 kernel: pci 0000:00:18.2: BAR 13: no space for 
[io size 0x1000] May 8 00:35:32.784643 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.784694 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 8 00:35:32.784745 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.784796 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 8 00:35:32.784850 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.784916 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 8 00:35:32.784968 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.785022 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 8 00:35:32.785073 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.785124 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 8 00:35:32.785174 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.785225 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 8 00:35:32.785276 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.785328 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 8 00:35:32.785452 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.785517 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 8 00:35:32.785576 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.785636 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 8 00:35:32.785687 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.785738 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 8 00:35:32.785789 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.785840 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 8 00:35:32.785890 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.785941 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 8 00:35:32.785995 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.786045 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 8 00:35:32.786096 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.786148 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 8 00:35:32.786205 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.786261 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 8 00:35:32.786318 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 8 00:35:32.786371 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 8 00:35:32.786436 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] May 8 00:35:32.786495 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 8 00:35:32.786546 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 8 00:35:32.786596 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 8 00:35:32.786667 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] May 8 00:35:32.786721 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 8 00:35:32.786772 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 8 00:35:32.786834 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 8 00:35:32.786886 kernel: pci 0000:00:15.0: bridge 
window [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:35:32.786945 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 8 00:35:32.786997 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 8 00:35:32.787058 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 8 00:35:32.787110 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 8 00:35:32.787164 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 8 00:35:32.787215 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 8 00:35:32.787266 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 8 00:35:32.787317 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 8 00:35:32.787374 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 8 00:35:32.787505 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 8 00:35:32.787561 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 8 00:35:32.787613 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 8 00:35:32.787663 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 8 00:35:32.787713 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 8 00:35:32.787767 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 8 00:35:32.787820 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 8 00:35:32.787870 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 8 00:35:32.787925 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 8 00:35:32.787991 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 8 00:35:32.788042 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 8 00:35:32.788092 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 8 00:35:32.788142 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 8 00:35:32.788193 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 8 00:35:32.788249 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] May 8 00:35:32.788304 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 8 00:35:32.788355 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 8 00:35:32.788418 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 8 00:35:32.788473 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:35:32.788532 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 8 00:35:32.788584 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 8 00:35:32.788639 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 8 00:35:32.788690 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 8 00:35:32.788743 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 8 00:35:32.788813 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 8 00:35:32.788871 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 8 00:35:32.788937 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 8 00:35:32.788990 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 8 00:35:32.789058 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 8 00:35:32.789131 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 8 00:35:32.789209 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 8 00:35:32.789277 kernel: pci 0000:00:16.4: 
bridge window [mem 0xfc400000-0xfc4fffff] May 8 00:35:32.789329 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 8 00:35:32.789387 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 8 00:35:32.789456 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 8 00:35:32.789524 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 8 00:35:32.789587 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 8 00:35:32.789647 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 8 00:35:32.789701 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 8 00:35:32.789753 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 8 00:35:32.789804 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 8 00:35:32.789855 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 8 00:35:32.789909 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 8 00:35:32.789975 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 8 00:35:32.790055 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 8 00:35:32.790117 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 8 00:35:32.790176 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 8 00:35:32.790228 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 8 00:35:32.790278 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 8 00:35:32.790329 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 8 00:35:32.790381 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 8 00:35:32.790442 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 8 00:35:32.790494 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 8 00:35:32.790552 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 8 00:35:32.790619 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 8 00:35:32.790675 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 8 00:35:32.790727 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 8 00:35:32.790778 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 8 00:35:32.790830 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 8 00:35:32.790881 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 8 00:35:32.790933 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 8 00:35:32.790984 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 8 00:35:32.791035 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 8 00:35:32.791093 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 8 00:35:32.791149 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 8 00:35:32.791213 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 8 00:35:32.791266 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 8 00:35:32.791318 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 8 00:35:32.791369 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 8 00:35:32.791442 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 8 00:35:32.791495 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 8 00:35:32.791574 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 8 00:35:32.791635 kernel: pci 
0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 8 00:35:32.791700 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 8 00:35:32.791764 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 8 00:35:32.791817 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 8 00:35:32.791868 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 8 00:35:32.791920 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 8 00:35:32.791981 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 8 00:35:32.792054 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 8 00:35:32.792114 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 8 00:35:32.792173 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 8 00:35:32.792485 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 8 00:35:32.792542 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 8 00:35:32.792594 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 8 00:35:32.792656 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 8 00:35:32.792722 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 8 00:35:32.792776 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 8 00:35:32.792840 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 8 00:35:32.792906 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 8 00:35:32.792959 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 8 00:35:32.793014 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 8 00:35:32.793066 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 8 00:35:32.793116 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 8 00:35:32.793171 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 8 00:35:32.793223 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] May 8 00:35:32.793274 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] May 8 00:35:32.793319 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] May 8 00:35:32.793374 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] May 8 00:35:32.793446 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] May 8 00:35:32.793502 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] May 8 00:35:32.793559 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] May 8 00:35:32.793607 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] May 8 00:35:32.793654 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] May 8 00:35:32.793700 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] May 8 00:35:32.793747 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] May 8 00:35:32.793794 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] May 8 00:35:32.793855 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] May 8 00:35:32.793909 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] May 8 00:35:32.793960 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] May 8 00:35:32.794019 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:35:32.794071 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] May 8 00:35:32.794119 kernel: pci_bus 0000:04: resource 1 [mem 
0xfd100000-0xfd1fffff] May 8 00:35:32.794166 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] May 8 00:35:32.794223 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] May 8 00:35:32.794271 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] May 8 00:35:32.794319 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] May 8 00:35:32.795441 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] May 8 00:35:32.795506 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] May 8 00:35:32.795561 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] May 8 00:35:32.795618 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] May 8 00:35:32.795671 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] May 8 00:35:32.795719 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] May 8 00:35:32.795769 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] May 8 00:35:32.795816 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] May 8 00:35:32.795870 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] May 8 00:35:32.795929 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] May 8 00:35:32.795984 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] May 8 00:35:32.796032 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] May 8 00:35:32.796079 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:35:32.796130 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] May 8 00:35:32.796178 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] May 8 00:35:32.796227 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] May 8 00:35:32.796282 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] May 8 00:35:32.796331 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] May 8 00:35:32.796387 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] May 8 00:35:32.797532 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] May 8 00:35:32.797588 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] May 8 00:35:32.797641 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] May 8 00:35:32.797693 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] May 8 00:35:32.797744 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] May 8 00:35:32.797792 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] May 8 00:35:32.797844 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] May 8 00:35:32.797892 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] May 8 00:35:32.797942 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] May 8 00:35:32.797993 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] May 8 00:35:32.798043 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] May 8 00:35:32.798091 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] May 8 00:35:32.798137 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] May 8 00:35:32.798188 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] May 8 00:35:32.798236 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] May 8 00:35:32.798283 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] May 8 00:35:32.798337 kernel: pci_bus 
0000:15: resource 0 [io 0xe000-0xefff] May 8 00:35:32.798397 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] May 8 00:35:32.798447 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] May 8 00:35:32.798502 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] May 8 00:35:32.798550 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] May 8 00:35:32.798601 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] May 8 00:35:32.798651 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] May 8 00:35:32.798703 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] May 8 00:35:32.798751 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] May 8 00:35:32.798802 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] May 8 00:35:32.798850 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] May 8 00:35:32.798902 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] May 8 00:35:32.798950 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] May 8 00:35:32.799005 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] May 8 00:35:32.799053 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] May 8 00:35:32.799100 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] May 8 00:35:32.799150 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] May 8 00:35:32.799198 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] May 8 00:35:32.799248 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] May 8 00:35:32.799299 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] May 8 00:35:32.799347 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] May 8 00:35:32.799409 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] May 8 00:35:32.799458 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] May 8 00:35:32.799509 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] May 8 00:35:32.799557 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] May 8 00:35:32.799615 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] May 8 00:35:32.799663 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] May 8 00:35:32.799714 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] May 8 00:35:32.799762 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] May 8 00:35:32.799813 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] May 8 00:35:32.799861 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] May 8 00:35:32.799919 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 8 00:35:32.799929 kernel: PCI: CLS 32 bytes, default 64 May 8 00:35:32.799936 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 8 00:35:32.799943 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 8 00:35:32.799950 kernel: clocksource: Switched to clocksource tsc May 8 00:35:32.799956 kernel: Initialise system trusted keyrings May 8 00:35:32.799962 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 8 00:35:32.799969 kernel: Key type asymmetric registered May 8 00:35:32.799977 kernel: Asymmetric key parser 'x509' registered May 8 00:35:32.799983 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 251) May 8 00:35:32.799990 kernel: io scheduler mq-deadline registered May 8 00:35:32.799996 kernel: io scheduler kyber registered May 8 00:35:32.800003 kernel: io scheduler bfq registered May 8 00:35:32.800057 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 May 8 00:35:32.800110 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.800163 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 May 8 00:35:32.800216 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.800271 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 May 8 00:35:32.800323 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.800376 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 May 8 00:35:32.802559 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.802622 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 May 8 00:35:32.802678 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.802739 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 May 8 00:35:32.802793 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.802846 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 May 8 00:35:32.802898 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.802951 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 May 8 00:35:32.803006 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.803060 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 May 8 00:35:32.803114 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.803166 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 May 8 00:35:32.803218 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.803271 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 May 8 00:35:32.803323 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.803378 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 May 8 00:35:32.804072 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.804132 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 May 8 00:35:32.804186 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.804239 kernel: pcieport 0000:00:16.5: PME: Signaling 
with IRQ 37 May 8 00:35:32.804294 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.804348 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 May 8 00:35:32.804418 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.804472 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 May 8 00:35:32.804527 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.804589 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 May 8 00:35:32.804667 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.804721 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 May 8 00:35:32.804772 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.804823 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 May 8 00:35:32.804874 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.804926 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 May 8 00:35:32.804976 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.805031 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 May 8 00:35:32.805082 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.805134 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 May 8 00:35:32.805186 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.805239 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 May 8 00:35:32.805293 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.805346 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 May 8 00:35:32.805411 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.805474 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 May 8 00:35:32.805530 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.808814 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 May 8 00:35:32.808890 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.808950 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 May 8 00:35:32.809004 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.809059 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 May 8 00:35:32.809113 
kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.809167 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 May 8 00:35:32.809223 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.809276 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 May 8 00:35:32.809328 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.809439 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 May 8 00:35:32.809501 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.809558 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 May 8 00:35:32.809619 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:35:32.809630 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 8 00:35:32.809637 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 00:35:32.809643 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 8 00:35:32.809650 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 May 8 00:35:32.809656 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 8 00:35:32.809666 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 8 00:35:32.809720 kernel: rtc_cmos 00:01: registered as rtc0 May 8 00:35:32.809768 kernel: rtc_cmos 00:01: setting system clock to 2025-05-08T00:35:32 UTC (1746664532) May 8 00:35:32.809777 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 8 00:35:32.809822 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram May 8 00:35:32.809830 kernel: intel_pstate: CPU model not supported May 8 00:35:32.809837 kernel: NET: Registered PF_INET6 protocol family May 8 00:35:32.809843 kernel: Segment Routing with IPv6 May 8 00:35:32.809852 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:35:32.809860 kernel: NET: Registered PF_PACKET protocol family May 8 00:35:32.809866 kernel: Key type dns_resolver registered May 8 00:35:32.809872 kernel: IPI shorthand broadcast: enabled May 8 00:35:32.809879 kernel: sched_clock: Marking stable (948003795, 241470179)->(1256647397, -67173423) May 8 00:35:32.809885 kernel: registered taskstats version 1 May 8 00:35:32.809892 kernel: Loading compiled-in X.509 certificates May 8 00:35:32.809898 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 75e4e434c57439d3f2eaf7797bbbcdd698dafd0e' May 8 00:35:32.809904 kernel: Key type .fscrypt registered May 8 00:35:32.809912 kernel: Key type fscrypt-provisioning registered May 8 00:35:32.809918 kernel: ima: No TPM chip found, activating TPM-bypass! 
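The rtc_cmos line above pairs the RTC's human-readable time with its Unix epoch form. A quick Python check (an aside for the reader, not part of the boot flow) confirms that 2025-05-08T00:35:32 UTC is indeed epoch 1746664532:

    from datetime import datetime, timezone

    # Recompute the epoch value the kernel printed next to the RTC time.
    rtc = datetime(2025, 5, 8, 0, 35, 32, tzinfo=timezone.utc)
    assert int(rtc.timestamp()) == 1746664532  # matches "(1746664532)" above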
May 8 00:35:32.809924 kernel: ima: Allocated hash algorithm: sha1 May 8 00:35:32.809931 kernel: ima: No architecture policies found May 8 00:35:32.809937 kernel: clk: Disabling unused clocks May 8 00:35:32.809943 kernel: Freeing unused kernel image (initmem) memory: 42856K May 8 00:35:32.809949 kernel: Write protecting the kernel read-only data: 36864k May 8 00:35:32.809956 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 8 00:35:32.809962 kernel: Run /init as init process May 8 00:35:32.809970 kernel: with arguments: May 8 00:35:32.809976 kernel: /init May 8 00:35:32.809982 kernel: with environment: May 8 00:35:32.809989 kernel: HOME=/ May 8 00:35:32.809995 kernel: TERM=linux May 8 00:35:32.810001 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:35:32.810009 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:35:32.810017 systemd[1]: Detected virtualization vmware. May 8 00:35:32.810024 systemd[1]: Detected architecture x86-64. May 8 00:35:32.810031 systemd[1]: Running in initrd. May 8 00:35:32.810037 systemd[1]: No hostname configured, using default hostname. May 8 00:35:32.810043 systemd[1]: Hostname set to <localhost>. May 8 00:35:32.810050 systemd[1]: Initializing machine ID from random generator. May 8 00:35:32.810057 systemd[1]: Queued start job for default target initrd.target. May 8 00:35:32.810063 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:35:32.810070 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:35:32.810079 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 8 00:35:32.810085 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:35:32.810092 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 8 00:35:32.810098 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 8 00:35:32.810106 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 8 00:35:32.810113 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 8 00:35:32.810120 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:35:32.810127 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:35:32.810134 systemd[1]: Reached target paths.target - Path Units. May 8 00:35:32.810141 systemd[1]: Reached target slices.target - Slice Units. May 8 00:35:32.810147 systemd[1]: Reached target swap.target - Swaps. May 8 00:35:32.810154 systemd[1]: Reached target timers.target - Timer Units. May 8 00:35:32.810160 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:35:32.810167 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:35:32.810173 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 8 00:35:32.810182 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
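The "Expecting device" units above illustrate systemd's path escaping: /dev/disk/by-label/EFI-SYSTEM becomes dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, with '/' mapped to '-' and literal '-' escaped as \x2d so the mapping stays reversible. A simplified Python sketch of the rule, covering the common cases of systemd-escape --path (edge cases such as a leading dot are ignored):

    def systemd_escape_path(path: str) -> str:
        # '/' separators become '-'; any character outside [A-Za-z0-9:_.]
        # is escaped as \xNN, so a literal '-' becomes \x2d.
        def esc(segment: str) -> str:
            return "".join(
                ch if ch.isalnum() or ch in ":_." else "\\x%02x" % ord(ch)
                for ch in segment
            )
        return "-".join(esc(s) for s in path.strip("/").split("/"))

    # Prints dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the log above.
    print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")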
May 8 00:35:32.810188 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:35:32.810195 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:35:32.810202 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:35:32.810208 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:35:32.810215 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 8 00:35:32.810221 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:35:32.810228 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 8 00:35:32.810234 systemd[1]: Starting systemd-fsck-usr.service... May 8 00:35:32.810242 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:35:32.810248 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:35:32.810255 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:35:32.810261 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 8 00:35:32.810281 systemd-journald[215]: Collecting audit messages is disabled. May 8 00:35:32.810299 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:35:32.810306 systemd[1]: Finished systemd-fsck-usr.service. May 8 00:35:32.810313 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:35:32.810321 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:35:32.810328 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:35:32.810335 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 00:35:32.810341 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:35:32.810348 kernel: Bridge firewalling registered May 8 00:35:32.810355 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:35:32.810362 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:35:32.810368 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:35:32.810375 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:35:32.811094 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:35:32.811106 systemd-journald[215]: Journal started May 8 00:35:32.811122 systemd-journald[215]: Runtime Journal (/run/log/journal/ecb45c8c43c44955a441a24d321f04de) is 4.8M, max 38.6M, 33.8M free. May 8 00:35:32.765715 systemd-modules-load[216]: Inserted module 'overlay' May 8 00:35:32.791548 systemd-modules-load[216]: Inserted module 'br_netfilter' May 8 00:35:32.816396 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 8 00:35:32.817706 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:35:32.818403 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:35:32.820487 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
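The bridge warning above ("filtering via arp/ip/ip6tables is no longer available by default") tells administrators to load br_netfilter themselves if they need it. A minimal sketch of how a provisioning script might do that, assuming root privileges and the stock modprobe and systemd modules-load.d mechanisms:

    import subprocess
    from pathlib import Path

    def ensure_br_netfilter() -> None:
        # Load the module into the running kernel now.
        subprocess.run(["modprobe", "br_netfilter"], check=True)
        # Have systemd-modules-load pull it in on every subsequent boot.
        Path("/etc/modules-load.d/br_netfilter.conf").write_text("br_netfilter\n")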
May 8 00:35:32.822838 dracut-cmdline[237]: dracut-dracut-053 May 8 00:35:32.824585 dracut-cmdline[237]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90 May 8 00:35:32.828147 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:35:32.829230 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:35:32.850968 systemd-resolved[263]: Positive Trust Anchors: May 8 00:35:32.850982 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:35:32.851006 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:35:32.853047 systemd-resolved[263]: Defaulting to hostname 'linux'. May 8 00:35:32.853889 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:35:32.854053 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:35:32.875406 kernel: SCSI subsystem initialized May 8 00:35:32.881394 kernel: Loading iSCSI transport class v2.0-870. May 8 00:35:32.888396 kernel: iscsi: registered transport (tcp) May 8 00:35:32.901723 kernel: iscsi: registered transport (qla4xxx) May 8 00:35:32.901765 kernel: QLogic iSCSI HBA Driver May 8 00:35:32.922084 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 8 00:35:32.925486 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 8 00:35:32.940841 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 00:35:32.940885 kernel: device-mapper: uevent: version 1.0.3 May 8 00:35:32.941940 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 8 00:35:32.972407 kernel: raid6: avx2x4 gen() 51392 MB/s May 8 00:35:32.989412 kernel: raid6: avx2x2 gen() 49488 MB/s May 8 00:35:33.006690 kernel: raid6: avx2x1 gen() 43464 MB/s May 8 00:35:33.006741 kernel: raid6: using algorithm avx2x4 gen() 51392 MB/s May 8 00:35:33.024625 kernel: raid6: .... xor() 21239 MB/s, rmw enabled May 8 00:35:33.024658 kernel: raid6: using avx2x2 recovery algorithm May 8 00:35:33.038399 kernel: xor: automatically using best checksumming function avx May 8 00:35:33.141414 kernel: Btrfs loaded, zoned=no, fsverity=no May 8 00:35:33.147311 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 8 00:35:33.150479 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:35:33.158907 systemd-udevd[432]: Using default interface naming scheme 'v255'. 
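The dracut-cmdline hook above echoes the kernel command line it will act on. Note that keys can repeat (rootflags=rw and mount.usrflags=ro each appear twice, and there are two console= entries), so a faithful parser keeps every occurrence. A small sketch assuming the whitespace-separated key=value format shown in the log:

    def parse_cmdline(cmdline: str) -> dict:
        # Bare flags such as flatcar.autologin get a value of None;
        # repeated keys accumulate in order of appearance.
        args: dict = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            args.setdefault(key, []).append(value if sep else None)
        return args

    parse_cmdline("root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.autologin")
    # {'root': ['LABEL=ROOT'], 'console': ['ttyS0,115200n8', 'tty0'],
    #  'flatcar.autologin': [None]}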
May 8 00:35:33.161390 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:35:33.170718 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 8 00:35:33.177604 dracut-pre-trigger[438]: rd.md=0: removing MD RAID activation May 8 00:35:33.194265 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:35:33.197469 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:35:33.268083 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:35:33.273500 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 8 00:35:33.287600 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 00:35:33.288469 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:35:33.289007 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:35:33.289509 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:35:33.295577 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 00:35:33.303394 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 00:35:33.334406 kernel: VMware PVSCSI driver - version 1.0.7.0-k May 8 00:35:33.338760 kernel: vmw_pvscsi: using 64bit dma May 8 00:35:33.338794 kernel: vmw_pvscsi: max_id: 16 May 8 00:35:33.338807 kernel: vmw_pvscsi: setting ring_pages to 8 May 8 00:35:33.345931 kernel: vmw_pvscsi: enabling reqCallThreshold May 8 00:35:33.345974 kernel: vmw_pvscsi: driver-based request coalescing enabled May 8 00:35:33.345988 kernel: vmw_pvscsi: using MSI-X May 8 00:35:33.345999 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 May 8 00:35:33.348099 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 May 8 00:35:33.352855 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 May 8 00:35:33.352888 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI May 8 00:35:33.358402 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 May 8 00:35:33.368743 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps May 8 00:35:33.373396 kernel: cryptd: max_cpu_qlen set to 1000 May 8 00:35:33.379403 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 May 8 00:35:33.382401 kernel: libata version 3.00 loaded. May 8 00:35:33.382436 kernel: ata_piix 0000:00:07.1: version 2.13 May 8 00:35:33.389530 kernel: scsi host1: ata_piix May 8 00:35:33.389615 kernel: scsi host2: ata_piix May 8 00:35:33.389683 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 May 8 00:35:33.389693 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 May 8 00:35:33.384314 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:35:33.384393 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:35:33.384787 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:35:33.384878 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:35:33.384952 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:35:33.385053 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:35:33.392475 kernel: AVX2 version of gcm_enc/dec engaged. 
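After vmxnet3 renames eth0 to ens192, the link details the driver logged (link up, 10000 Mbps) can be read back from sysfs while the interface is up. A sketch using the standard /sys/class/net attributes:

    from pathlib import Path

    def nic_status(name: str = "ens192") -> dict:
        base = Path("/sys/class/net") / name
        return {
            "operstate": (base / "operstate").read_text().strip(),  # e.g. "up"
            "speed_mbps": int((base / "speed").read_text()),        # e.g. 10000
            "mac": (base / "address").read_text().strip(),
        }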
May 8 00:35:33.392497 kernel: AES CTR mode by8 optimization enabled May 8 00:35:33.393820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:35:33.407922 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:35:33.412473 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:35:33.422883 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:35:33.560401 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 May 8 00:35:33.565446 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 May 8 00:35:33.575781 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) May 8 00:35:33.614108 kernel: sd 0:0:0:0: [sda] Write Protect is off May 8 00:35:33.614443 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 May 8 00:35:33.614516 kernel: sd 0:0:0:0: [sda] Cache data unavailable May 8 00:35:33.614580 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through May 8 00:35:33.614644 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:35:33.614653 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 8 00:35:33.638424 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray May 8 00:35:33.653923 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 8 00:35:33.653937 kernel: BTRFS: device fsid 28014d97-e6d7-4db4-b1d9-76a980e09972 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (494) May 8 00:35:33.653950 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (488) May 8 00:35:33.653957 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 8 00:35:33.644994 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. May 8 00:35:33.648329 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. May 8 00:35:33.653248 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. May 8 00:35:33.654044 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. May 8 00:35:33.657210 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 8 00:35:33.676611 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 00:35:33.718410 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:35:33.721819 kernel: GPT:disk_guids don't match. May 8 00:35:33.721852 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 00:35:33.721861 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:35:34.728401 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:35:34.728951 disk-uuid[589]: The operation has completed successfully. May 8 00:35:34.803533 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 00:35:34.803602 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 00:35:34.807491 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 8 00:35:34.809209 sh[610]: Success May 8 00:35:34.817397 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 8 00:35:34.889887 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 00:35:34.895228 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 00:35:34.895577 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
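The sda capacity line reports the same size twice, once in decimal gigabytes and once in binary gibibytes. The arithmetic, reproduced from the logged block count:

    blocks, block_size = 17_805_312, 512
    size_bytes = blocks * block_size        # 9_116_319_744
    print(round(size_bytes / 10**9, 2))     # 9.12  -> "9.12 GB"
    print(round(size_bytes / 2**30, 2))     # 8.49  -> "8.49 GiB"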
May 8 00:35:34.940531 kernel: BTRFS info (device dm-0): first mount of filesystem 28014d97-e6d7-4db4-b1d9-76a980e09972 May 8 00:35:34.940584 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 8 00:35:34.940602 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 00:35:34.940613 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 00:35:34.941597 kernel: BTRFS info (device dm-0): using free space tree May 8 00:35:34.950411 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 8 00:35:34.952626 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 00:35:34.961489 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... May 8 00:35:34.963030 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 00:35:34.993076 kernel: BTRFS info (device sda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:35:34.993124 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:35:34.993135 kernel: BTRFS info (device sda6): using free space tree May 8 00:35:35.013407 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:35:35.022650 systemd[1]: mnt-oem.mount: Deactivated successfully. May 8 00:35:35.024405 kernel: BTRFS info (device sda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:35:35.032198 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 8 00:35:35.036499 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 00:35:35.054016 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 8 00:35:35.058476 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 8 00:35:35.119780 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:35:35.129591 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:35:35.144295 ignition[671]: Ignition 2.19.0 May 8 00:35:35.144306 ignition[671]: Stage: fetch-offline May 8 00:35:35.144328 ignition[671]: no configs at "/usr/lib/ignition/base.d" May 8 00:35:35.144334 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:35:35.144408 ignition[671]: parsed url from cmdline: "" May 8 00:35:35.144410 ignition[671]: no config URL provided May 8 00:35:35.144415 ignition[671]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:35:35.144420 ignition[671]: no config at "/usr/lib/ignition/user.ign" May 8 00:35:35.144795 ignition[671]: config successfully fetched May 8 00:35:35.144815 ignition[671]: parsing config with SHA512: df2d745cd5d15d8bfd659031ade3adac226e33ccb6c98ee0d74f0c7a652f81ea3cbd6c5954a139840cc82af1da3ae046d4b2a2cd75d36b240b0d59581172f66e May 8 00:35:35.145719 systemd-networkd[799]: lo: Link UP May 8 00:35:35.145723 systemd-networkd[799]: lo: Gained carrier May 8 00:35:35.146720 systemd-networkd[799]: Enumeration completed May 8 00:35:35.146908 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:35:35.147173 systemd[1]: Reached target network.target - Network. May 8 00:35:35.147264 systemd-networkd[799]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. 
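Ignition logs the SHA512 of the config it just parsed, which makes the provisioning input auditable after the fact. A sketch of recomputing such a digest for comparison; this illustrates the idea rather than Ignition's own code path:

    import hashlib

    def sha512_hex(path: str) -> str:
        h = hashlib.sha512()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Compare against the digest in the journal above, e.g.:
    # sha512_hex("/usr/lib/ignition/user.ign") == "df2d745c..."  (full value in the log)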
May 8 00:35:35.149681 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 8 00:35:35.149800 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 8 00:35:35.149349 ignition[671]: fetch-offline: fetch-offline passed May 8 00:35:35.149077 unknown[671]: fetched base config from "system" May 8 00:35:35.149399 ignition[671]: Ignition finished successfully May 8 00:35:35.149081 unknown[671]: fetched user config from "vmware" May 8 00:35:35.150769 systemd-networkd[799]: ens192: Link UP May 8 00:35:35.150771 systemd-networkd[799]: ens192: Gained carrier May 8 00:35:35.151124 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:35:35.151504 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 8 00:35:35.156591 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 8 00:35:35.164687 ignition[807]: Ignition 2.19.0 May 8 00:35:35.164694 ignition[807]: Stage: kargs May 8 00:35:35.164801 ignition[807]: no configs at "/usr/lib/ignition/base.d" May 8 00:35:35.164807 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:35:35.165322 ignition[807]: kargs: kargs passed May 8 00:35:35.165349 ignition[807]: Ignition finished successfully May 8 00:35:35.166549 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 8 00:35:35.169473 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 8 00:35:35.178016 ignition[815]: Ignition 2.19.0 May 8 00:35:35.178023 ignition[815]: Stage: disks May 8 00:35:35.178117 ignition[815]: no configs at "/usr/lib/ignition/base.d" May 8 00:35:35.178123 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:35:35.178651 ignition[815]: disks: disks passed May 8 00:35:35.178682 ignition[815]: Ignition finished successfully May 8 00:35:35.179470 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 8 00:35:35.179694 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 00:35:35.179826 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:35:35.180019 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:35:35.180212 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:35:35.180398 systemd[1]: Reached target basic.target - Basic System. May 8 00:35:35.184473 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 00:35:35.195127 systemd-fsck[823]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 8 00:35:35.196325 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 00:35:35.200462 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 00:35:35.260401 kernel: EXT4-fs (sda9): mounted filesystem 36960c89-ba45-4808-a41c-bf61ce9470a3 r/w with ordered data mode. Quota mode: none. May 8 00:35:35.260562 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 00:35:35.260980 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 8 00:35:35.265478 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:35:35.266448 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 00:35:35.267617 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
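The systemd-fsck summary gives usage as used/total pairs; in percentage terms the root filesystem is nearly empty at this point in first boot:

    inodes_used, inodes_total = 14, 1_628_000
    blocks_used, blocks_total = 120_691, 1_617_920
    print(f"inodes: {inodes_used / inodes_total:.4%}")   # inodes: 0.0009%
    print(f"blocks: {blocks_used / blocks_total:.2%}")   # blocks: 7.46%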
May 8 00:35:35.267651 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 00:35:35.267666 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:35:35.271478 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 00:35:35.272417 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 8 00:35:35.276524 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (831) May 8 00:35:35.276567 kernel: BTRFS info (device sda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:35:35.277734 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:35:35.277754 kernel: BTRFS info (device sda6): using free space tree May 8 00:35:35.281446 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:35:35.282307 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 00:35:35.333844 initrd-setup-root[855]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:35:35.336921 initrd-setup-root[862]: cut: /sysroot/etc/group: No such file or directory May 8 00:35:35.339318 initrd-setup-root[869]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:35:35.342026 initrd-setup-root[876]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:35:35.426009 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 00:35:35.430438 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 00:35:35.432917 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 00:35:35.437423 kernel: BTRFS info (device sda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:35:35.452553 ignition[943]: INFO : Ignition 2.19.0 May 8 00:35:35.452553 ignition[943]: INFO : Stage: mount May 8 00:35:35.452917 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:35:35.452917 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:35:35.453250 ignition[943]: INFO : mount: mount passed May 8 00:35:35.453761 ignition[943]: INFO : Ignition finished successfully May 8 00:35:35.454014 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 00:35:35.457486 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 00:35:35.477049 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 00:35:35.936638 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 8 00:35:35.941504 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:35:35.954405 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (955) May 8 00:35:35.954441 kernel: BTRFS info (device sda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:35:35.954450 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:35:35.956398 kernel: BTRFS info (device sda6): using free space tree May 8 00:35:35.959573 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:35:35.960500 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
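The Ignition entries above follow the journal's "timestamp unit[pid]: message" shape, which makes stage progress easy to extract from a capture like this one. A sketch of a matching parser (kernel lines lack the [pid] suffix and are deliberately not matched):

    import re

    # Matches e.g. "May 8 00:35:35.452553 ignition[943]: INFO : Stage: mount"
    ENTRY = re.compile(
        r"(?P<month>\w+) +(?P<day>\d+) +(?P<time>\d{2}:\d{2}:\d{2}\.\d+) +"
        r"(?P<unit>[\w.-]+)\[(?P<pid>\d+)\]: +(?P<msg>.*)"
    )

    def parse_entry(line: str):
        m = ENTRY.match(line)
        return m.groupdict() if m else None

    parse_entry("May 8 00:35:35.452553 ignition[943]: INFO : Stage: mount")
    # {'month': 'May', 'day': '8', 'time': '00:35:35.452553',
    #  'unit': 'ignition', 'pid': '943', 'msg': 'INFO : Stage: mount'}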
May 8 00:35:35.983067 ignition[972]: INFO : Ignition 2.19.0 May 8 00:35:35.983596 ignition[972]: INFO : Stage: files May 8 00:35:35.983596 ignition[972]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:35:35.983596 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:35:35.983958 ignition[972]: DEBUG : files: compiled without relabeling support, skipping May 8 00:35:35.991427 ignition[972]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:35:35.991427 ignition[972]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:35:36.010741 ignition[972]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:35:36.010962 ignition[972]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:35:36.011113 ignition[972]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:35:36.011089 unknown[972]: wrote ssh authorized keys file for user: core May 8 00:35:36.016245 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 8 00:35:36.016872 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 8 00:35:36.055272 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 8 00:35:36.196409 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 8 00:35:36.196409 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 8 00:35:36.196890 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:35:36.196890 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 00:35:36.196890 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 00:35:36.196890 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:35:36.196890 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:35:36.196890 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:35:36.197987 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:35:36.197987 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:35:36.197987 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:35:36.197987 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:35:36.197987 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:35:36.197987 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:35:36.197987 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 8 00:35:36.677300 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 8 00:35:36.932036 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:35:36.932036 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 8 00:35:36.932521 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 8 00:35:36.932521 ignition[972]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 8 00:35:36.938151 ignition[972]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:35:36.938424 ignition[972]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:35:36.938424 ignition[972]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 8 00:35:36.938424 ignition[972]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 8 00:35:36.938424 ignition[972]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:35:36.939068 ignition[972]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:35:36.939068 ignition[972]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 8 00:35:36.939068 ignition[972]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 8 00:35:37.067552 systemd-networkd[799]: ens192: Gained IPv6LL May 8 00:35:37.074944 ignition[972]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:35:37.077402 ignition[972]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:35:37.077402 ignition[972]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 8 00:35:37.077402 ignition[972]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 8 00:35:37.078325 ignition[972]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 8 00:35:37.078325 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:35:37.078325 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:35:37.078325 ignition[972]: INFO : files: files passed May 8 00:35:37.078325 ignition[972]: INFO : Ignition finished successfully May 8 00:35:37.078528 systemd[1]: 
Finished ignition-files.service - Ignition (files). May 8 00:35:37.082484 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 00:35:37.083488 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 00:35:37.084802 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:35:37.084868 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 8 00:35:37.092196 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:35:37.092196 initrd-setup-root-after-ignition[1002]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 00:35:37.092864 initrd-setup-root-after-ignition[1006]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:35:37.093598 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:35:37.093995 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 00:35:37.097491 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 00:35:37.109817 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:35:37.109875 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 00:35:37.110278 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 00:35:37.110413 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:35:37.110613 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:35:37.111042 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:35:37.120280 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:35:37.124491 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 00:35:37.129683 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 00:35:37.129971 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:35:37.130126 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:35:37.130253 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:35:37.130330 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:35:37.130671 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:35:37.130911 systemd[1]: Stopped target basic.target - Basic System. May 8 00:35:37.131096 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:35:37.131298 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:35:37.131499 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:35:37.131690 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:35:37.132033 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:35:37.132251 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:35:37.132457 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:35:37.132646 systemd[1]: Stopped target swap.target - Swaps. May 8 00:35:37.132825 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
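The files stage ended by writing /sysroot/etc/.ignition-result.json (logged just above), so after the switch to the real root a service can check whether provisioning completed. A minimal reader; the journal does not show the file's schema, so the contents are treated as opaque JSON here:

    import json

    def ignition_result(path: str = "/etc/.ignition-result.json") -> dict:
        # Written by Ignition at the end of the files stage; inspect the
        # keys before relying on them, since the log above shows only the path.
        with open(path) as f:
            return json.load(f)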
May 8 00:35:37.132886 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:35:37.133159 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 00:35:37.133373 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:35:37.133563 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 00:35:37.133607 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:35:37.133770 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:35:37.133828 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:35:37.134086 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:35:37.134152 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:35:37.134397 systemd[1]: Stopped target paths.target - Path Units. May 8 00:35:37.134559 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:35:37.136425 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:35:37.136583 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:35:37.136780 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:35:37.136966 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:35:37.137029 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:35:37.137232 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:35:37.137277 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:35:37.137569 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:35:37.137652 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:35:37.137887 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:35:37.137964 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:35:37.145553 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:35:37.148530 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:35:37.148649 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:35:37.148744 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:35:37.148928 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:35:37.149009 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:35:37.150732 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:35:37.150793 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 00:35:37.155098 ignition[1026]: INFO : Ignition 2.19.0 May 8 00:35:37.156235 ignition[1026]: INFO : Stage: umount May 8 00:35:37.156235 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:35:37.156235 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:35:37.156235 ignition[1026]: INFO : umount: umount passed May 8 00:35:37.156235 ignition[1026]: INFO : Ignition finished successfully May 8 00:35:37.157564 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:35:37.157635 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:35:37.157879 systemd[1]: Stopped target network.target - Network. May 8 00:35:37.157980 systemd[1]: ignition-disks.service: Deactivated successfully. 
May 8 00:35:37.158007 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 00:35:37.158155 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:35:37.158177 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:35:37.158323 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:35:37.158345 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:35:37.158497 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:35:37.158518 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:35:37.158887 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:35:37.159106 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:35:37.164853 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:35:37.164928 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:35:37.165649 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:35:37.165729 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:35:37.166160 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:35:37.166194 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:35:37.169501 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:35:37.170120 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:35:37.170147 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:35:37.171119 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. May 8 00:35:37.171145 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 8 00:35:37.171299 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:35:37.171322 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:35:37.171489 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:35:37.171511 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 00:35:37.171680 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 00:35:37.171701 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:35:37.171932 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:35:37.177718 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:35:37.177800 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:35:37.185611 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:35:37.185906 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:35:37.186001 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:35:37.186449 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:35:37.186481 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 00:35:37.186612 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:35:37.186634 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:35:37.186848 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:35:37.186870 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
May 8 00:35:37.187136 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:35:37.187159 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:35:37.187472 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:35:37.187495 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:35:37.191538 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:35:37.191827 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:35:37.191857 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:35:37.191982 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 8 00:35:37.192004 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:35:37.192127 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:35:37.192154 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:35:37.192269 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:35:37.192290 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:35:37.194744 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:35:37.194810 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:35:37.442106 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:35:37.442184 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:35:37.442525 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:35:37.442677 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:35:37.442712 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:35:37.446588 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:35:37.458639 systemd[1]: Switching root. May 8 00:35:37.496128 systemd-journald[215]: Journal stopped May 8 00:35:38.723239 systemd-journald[215]: Received SIGTERM from PID 1 (systemd). May 8 00:35:38.723270 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:35:38.723282 kernel: SELinux: policy capability open_perms=1 May 8 00:35:38.723291 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:35:38.723300 kernel: SELinux: policy capability always_check_network=0 May 8 00:35:38.723309 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:35:38.723323 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:35:38.723330 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:35:38.723337 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:35:38.723346 systemd[1]: Successfully loaded SELinux policy in 40.202ms. May 8 00:35:38.723357 kernel: audit: type=1403 audit(1746664538.068:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:35:38.723366 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.577ms. 
May 8 00:35:38.723375 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:35:38.745514 systemd[1]: Detected virtualization vmware. May 8 00:35:38.745530 systemd[1]: Detected architecture x86-64. May 8 00:35:38.745537 systemd[1]: Detected first boot. May 8 00:35:38.745544 systemd[1]: Initializing machine ID from random generator. May 8 00:35:38.745555 zram_generator::config[1069]: No configuration found. May 8 00:35:38.745562 systemd[1]: Populated /etc with preset unit settings. May 8 00:35:38.745571 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 8 00:35:38.745578 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" May 8 00:35:38.745585 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 00:35:38.745592 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 8 00:35:38.745598 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:35:38.745606 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 00:35:38.745613 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 00:35:38.745620 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 00:35:38.745626 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 00:35:38.745640 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 00:35:38.745648 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 00:35:38.745654 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 00:35:38.745662 systemd[1]: Created slice user.slice - User and Session Slice. May 8 00:35:38.745669 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:35:38.745675 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:35:38.745682 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 00:35:38.745689 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 00:35:38.745695 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 00:35:38.745702 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:35:38.745708 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 8 00:35:38.745717 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:35:38.745724 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 8 00:35:38.745732 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 00:35:38.745740 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 8 00:35:38.745747 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
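The "Ignoring unknown escape sequences" warning above comes from systemd's own command-line lexer: it interprets backslash sequences such as \K and \d in the ExecStart line before the shell ever runs, and warns about the ones it does not recognize. The usual fix is to double the backslashes in the unit file (or move the pipeline into a separate script). A sketch of the corrected line, assuming the pipeline is wrapped in sh -c as the quoting in the log suggests:

    # /etc/systemd/system/coreos-metadata.service, line 11 (illustrative fix)
    # Doubled backslashes survive systemd's unescaping, so grep receives \K and \d:
    ExecStart=/usr/bin/sh -c 'echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \\K[\\d.]+")" > ${OUTPUT}'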
May 8 00:35:38.745754 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:35:38.745760 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:35:38.745767 systemd[1]: Reached target slices.target - Slice Units. May 8 00:35:38.745777 systemd[1]: Reached target swap.target - Swaps. May 8 00:35:38.745784 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 00:35:38.745791 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 00:35:38.745798 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:35:38.745805 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:35:38.745813 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:35:38.745820 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 00:35:38.745827 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 00:35:38.745834 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 00:35:38.745840 systemd[1]: Mounting media.mount - External Media Directory... May 8 00:35:38.745847 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:35:38.745854 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 00:35:38.745861 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 00:35:38.745870 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 00:35:38.745877 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:35:38.745884 systemd[1]: Reached target machines.target - Containers. May 8 00:35:38.745892 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 00:35:38.745898 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... May 8 00:35:38.745905 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:35:38.745912 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 00:35:38.745919 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:35:38.745927 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:35:38.745934 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:35:38.745941 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 00:35:38.745948 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:35:38.745955 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:35:38.745962 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:35:38.745969 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 00:35:38.745976 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 00:35:38.745983 systemd[1]: Stopped systemd-fsck-usr.service. May 8 00:35:38.745991 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:35:38.745997 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
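The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop jobs starting above are all instances of one template unit: everything after the "@" becomes %i and is handed to modprobe, which is how a single unit file loads six different modules. The upstream template is essentially:

    # /usr/lib/systemd/system/modprobe@.service (abridged)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    ConditionCapability=CAP_SYS_MODULE

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # "-" prefix: a module that cannot be loaded is not treated as a failure
    ExecStart=-/sbin/modprobe -abq %i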
May 8 00:35:38.746004 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 00:35:38.746011 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 00:35:38.746018 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:35:38.746025 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:35:38.746032 systemd[1]: Stopped verity-setup.service. May 8 00:35:38.746039 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:35:38.746048 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 00:35:38.746055 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 00:35:38.746062 systemd[1]: Mounted media.mount - External Media Directory. May 8 00:35:38.746069 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 00:35:38.746075 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 00:35:38.746082 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 00:35:38.746089 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:35:38.746096 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:35:38.746103 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 00:35:38.746111 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:35:38.746118 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:35:38.746125 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:35:38.746132 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:35:38.746139 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 00:35:38.746146 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:35:38.746153 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 00:35:38.746160 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 00:35:38.746167 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 00:35:38.746174 kernel: fuse: init (API version 7.39) May 8 00:35:38.746181 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:35:38.746188 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 00:35:38.746196 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:35:38.746203 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:35:38.746209 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 8 00:35:38.746216 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 00:35:38.746242 systemd-journald[1159]: Collecting audit messages is disabled. May 8 00:35:38.746260 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 00:35:38.746267 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 8 00:35:38.746274 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 00:35:38.746282 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:35:38.746289 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 00:35:38.746296 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:35:38.746304 systemd-journald[1159]: Journal started May 8 00:35:38.746322 systemd-journald[1159]: Runtime Journal (/run/log/journal/d0ae730ac4634ea785dccc3df7efdc73) is 4.8M, max 38.6M, 33.8M free. May 8 00:35:38.775582 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 00:35:38.775625 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:35:38.775642 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 00:35:38.775652 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 00:35:38.775662 kernel: loop: module loaded May 8 00:35:38.775671 kernel: ACPI: bus type drm_connector registered May 8 00:35:38.504036 systemd[1]: Queued start job for default target multi-user.target. May 8 00:35:38.528456 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 8 00:35:38.528674 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 00:35:38.776195 jq[1136]: true May 8 00:35:38.782692 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 00:35:38.782852 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:35:38.778242 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:35:38.779106 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:35:38.779756 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:35:38.780085 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:35:38.780954 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 00:35:38.790650 jq[1168]: true May 8 00:35:38.791825 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. May 8 00:35:38.791838 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. May 8 00:35:38.797517 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 00:35:38.804514 kernel: loop0: detected capacity change from 0 to 142488 May 8 00:35:38.803196 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:35:38.814594 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 00:35:38.814738 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:35:38.826156 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 00:35:38.826505 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 00:35:38.826997 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 00:35:38.829874 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 8 00:35:38.830687 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
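The journal sizes reported here (runtime journal at 4.8M, capped at 38.6M) are journald defaults computed from the size of the backing filesystem; they can be pinned explicitly when the defaults are too generous or too tight. Illustrative values, using real journald.conf options:

    # /etc/systemd/journald.conf (illustrative values)
    [Journal]
    RuntimeMaxUse=64M    # volatile journal under /run
    SystemMaxUse=512M    # persistent journal under /var/log/journal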
May 8 00:35:38.837108 systemd-journald[1159]: Time spent on flushing to /var/log/journal/d0ae730ac4634ea785dccc3df7efdc73 is 93.962ms for 1846 entries. May 8 00:35:38.837108 systemd-journald[1159]: System Journal (/var/log/journal/d0ae730ac4634ea785dccc3df7efdc73) is 8.0M, max 584.8M, 576.8M free. May 8 00:35:38.939257 systemd-journald[1159]: Received client request to flush runtime journal. May 8 00:35:38.939289 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:35:38.939301 kernel: loop1: detected capacity change from 0 to 2976 May 8 00:35:38.939311 kernel: loop2: detected capacity change from 0 to 140768 May 8 00:35:38.901468 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:35:38.876019 ignition[1174]: Ignition 2.19.0 May 8 00:35:38.902624 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 8 00:35:38.876848 ignition[1174]: deleting config from guestinfo properties May 8 00:35:38.910810 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 00:35:38.915468 ignition[1174]: Successfully deleted config May 8 00:35:38.914572 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:35:38.917125 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). May 8 00:35:38.930672 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:35:38.935488 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 00:35:38.937686 systemd-tmpfiles[1229]: ACLs are not supported, ignoring. May 8 00:35:38.937988 systemd-tmpfiles[1229]: ACLs are not supported, ignoring. May 8 00:35:38.950782 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 00:35:38.951092 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:35:38.960410 udevadm[1234]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 8 00:35:38.997668 kernel: loop3: detected capacity change from 0 to 205544 May 8 00:35:39.107554 kernel: loop4: detected capacity change from 0 to 142488 May 8 00:35:39.151401 kernel: loop5: detected capacity change from 0 to 2976 May 8 00:35:39.160447 kernel: loop6: detected capacity change from 0 to 140768 May 8 00:35:39.189125 kernel: loop7: detected capacity change from 0 to 205544 May 8 00:35:39.255889 (sd-merge)[1241]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. May 8 00:35:39.256239 (sd-merge)[1241]: Merged extensions into '/usr'. May 8 00:35:39.260946 systemd[1]: Reloading requested from client PID 1185 ('systemd-sysext') (unit systemd-sysext.service)... May 8 00:35:39.260961 systemd[1]: Reloading... May 8 00:35:39.320765 zram_generator::config[1264]: No configuration found. May 8 00:35:39.393265 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 8 00:35:39.410775 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:35:39.444728 systemd[1]: Reloading finished in 183 ms. 
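The sd-merge lines above are systemd-sysext at work: each .raw image found under /etc/extensions (such as the kubernetes.raw link Ignition created) is overlay-mounted onto /usr. An image is only accepted if it carries an extension-release file matching its own name, with ID/SYSEXT_LEVEL fields compatible with the host. A minimal layout for a hypothetical extension called "example":

    example/
        usr/
            bin/                          # payload overlaid onto /usr/bin
            lib/extension-release.d/
                extension-release.example
                    # containing, e.g.:
                    #   ID=flatcar
                    #   SYSEXT_LEVEL=1.0

    # packed into an image with, for instance:
    #   mksquashfs example example.raw -all-root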
May 8 00:35:39.463447 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 00:35:39.472230 systemd[1]: Starting ensure-sysext.service... May 8 00:35:39.475307 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:35:39.488434 systemd[1]: Reloading requested from client PID 1322 ('systemctl') (unit ensure-sysext.service)... May 8 00:35:39.488444 systemd[1]: Reloading... May 8 00:35:39.526401 zram_generator::config[1351]: No configuration found. May 8 00:35:39.563207 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:35:39.563431 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 00:35:39.563987 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:35:39.564200 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. May 8 00:35:39.564239 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. May 8 00:35:39.582246 systemd-tmpfiles[1323]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:35:39.582252 systemd-tmpfiles[1323]: Skipping /boot May 8 00:35:39.584086 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 8 00:35:39.586957 systemd-tmpfiles[1323]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:35:39.586963 systemd-tmpfiles[1323]: Skipping /boot May 8 00:35:39.599834 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:35:39.653047 systemd[1]: Reloading finished in 164 ms. May 8 00:35:39.665337 ldconfig[1181]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:35:39.666894 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 00:35:39.671609 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:35:39.675192 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:35:39.688475 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 00:35:39.691015 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 00:35:39.696491 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:35:39.698474 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:35:39.699693 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 00:35:39.700763 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 00:35:39.707019 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 00:35:39.710939 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:35:39.716678 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:35:39.719651 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
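The docker.socket complaint repeated above is cosmetic: systemd rewrites the legacy path itself at load time. The permanent fix is a one-line change on the line the warning points at:

    # /usr/lib/systemd/system/docker.socket, line 6
    [Socket]
    # before: ListenStream=/var/run/docker.sock   (below legacy /var/run/)
    ListenStream=/run/docker.sock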
May 8 00:35:39.730691 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:35:39.730892 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:35:39.731024 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:35:39.732966 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 00:35:39.736535 systemd-udevd[1415]: Using default interface naming scheme 'v255'. May 8 00:35:39.736788 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:35:39.736920 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:35:39.737711 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:35:39.737888 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:35:39.737969 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:35:39.738060 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:35:39.739763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:35:39.740894 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:35:39.744050 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:35:39.751958 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:35:39.757615 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:35:39.761521 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:35:39.761751 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:35:39.761859 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:35:39.762276 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:35:39.762781 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:35:39.763470 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:35:39.774480 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:35:39.778719 systemd[1]: Finished ensure-sysext.service. May 8 00:35:39.779036 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 00:35:39.795481 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 00:35:39.795635 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:35:39.795842 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:35:39.795946 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
May 8 00:35:39.798217 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 00:35:39.809487 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 00:35:39.809880 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:35:39.810430 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:35:39.811473 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:35:39.817878 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:35:39.817995 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:35:39.818244 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:35:39.822681 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 8 00:35:39.823166 augenrules[1468]: No rules May 8 00:35:39.824585 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:35:39.836275 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 00:35:39.843801 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 00:35:39.885406 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 8 00:35:39.899404 kernel: ACPI: button: Power Button [PWRF] May 8 00:35:39.931835 systemd-resolved[1414]: Positive Trust Anchors: May 8 00:35:39.931847 systemd-resolved[1414]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:35:39.931869 systemd-resolved[1414]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:35:39.940135 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 00:35:39.940302 systemd[1]: Reached target time-set.target - System Time Set. May 8 00:35:39.941366 systemd-resolved[1414]: Defaulting to hostname 'linux'. May 8 00:35:39.943550 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1444) May 8 00:35:39.943847 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:35:39.944125 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:35:39.950936 systemd-networkd[1452]: lo: Link UP May 8 00:35:39.951320 systemd-networkd[1452]: lo: Gained carrier May 8 00:35:39.952104 systemd-networkd[1452]: Enumeration completed May 8 00:35:39.952156 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:35:39.952320 systemd-networkd[1452]: ens192: Configuring with /etc/systemd/network/00-vmware.network. May 8 00:35:39.952321 systemd[1]: Reached target network.target - Network. 
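ens192 is configured from the 00-vmware.network file Ignition wrote during the files stage. The log never prints its contents; a typical DHCP profile for this interface, shown purely as an illustration, would be:

    # /etc/systemd/network/00-vmware.network (contents not shown in the log)
    [Match]
    Name=ens192

    [Network]
    DHCP=yes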
May 8 00:35:39.956573 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 8 00:35:39.956739 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 8 00:35:39.957068 systemd-networkd[1452]: ens192: Link UP May 8 00:35:39.957225 systemd-networkd[1452]: ens192: Gained carrier May 8 00:35:39.957608 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 00:35:39.963486 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection. May 8 00:35:39.977680 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 8 00:35:39.984534 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 00:35:39.995248 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 00:35:40.024440 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! May 8 00:35:40.032470 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 May 8 00:35:40.047939 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc May 8 00:35:40.050579 kernel: Guest personality initialized and is active May 8 00:35:40.051758 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 8 00:35:40.051779 kernel: Initialized host personality May 8 00:35:40.055610 (udev-worker)[1454]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. May 8 00:35:40.061409 kernel: mousedev: PS/2 mouse device common for all mice May 8 00:35:40.064627 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:35:40.077147 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 00:35:40.082780 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 00:35:40.092472 lvm[1506]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:35:40.122320 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 00:35:40.122553 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:35:40.127576 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 00:35:40.130033 lvm[1510]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:35:40.132762 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:35:40.133216 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:35:40.133414 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 00:35:40.133569 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 00:35:40.133797 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 00:35:40.133938 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 00:35:40.134043 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 00:35:40.134151 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:35:40.134173 systemd[1]: Reached target paths.target - Path Units. May 8 00:35:40.134254 systemd[1]: Reached target timers.target - Timer Units. 
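The lvmetad warnings above are benign: the metadata-caching daemon simply is not running, so LVM falls back to scanning block devices directly. On lvm2 builds that still ship lvmetad, the warning can be silenced by disabling the cache outright; newer releases dropped the daemon and the option entirely.

    # /etc/lvm/lvm.conf (only meaningful on lvm2 builds that still know lvmetad)
    global {
        use_lvmetad = 0
    }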
May 8 00:35:40.146061 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 00:35:40.147154 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 00:35:40.154904 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 00:35:40.155524 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:35:40.155776 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 00:35:40.156241 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:35:40.156360 systemd[1]: Reached target basic.target - Basic System. May 8 00:35:40.156518 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 00:35:40.156546 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 00:35:40.157495 systemd[1]: Starting containerd.service - containerd container runtime... May 8 00:35:40.159494 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 00:35:40.162257 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:35:40.163649 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 00:35:40.163771 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:35:40.166312 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:35:40.169481 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 00:35:40.172903 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 00:35:40.173612 jq[1517]: false May 8 00:35:40.174542 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 00:35:40.177666 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 00:35:40.178030 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:35:40.178517 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:35:40.181989 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:35:40.188521 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 00:35:40.191282 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... May 8 00:35:40.194026 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:35:40.194292 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
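prepare-helm.service, written by Ignition earlier, now starts; its description ("Unpack helm to /opt/bin") implies a oneshot fetch-and-extract job. Its real contents never appear in the log, so the following is a hypothetical reconstruction, with the helm version and URL as placeholders only:

    # /etc/systemd/system/prepare-helm.service (hypothetical; not from the log)
    [Unit]
    Description=Unpack helm to /opt/bin
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # placeholder version/URL:
    ExecStart=/usr/bin/sh -c 'curl -sSL https://get.helm.sh/helm-v3.16.0-linux-amd64.tar.gz | tar -xz -C /opt/bin --strip-components=1 linux-amd64/helm'

    [Install]
    WantedBy=multi-user.target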
May 8 00:35:40.206055 jq[1526]: true May 8 00:35:40.210154 extend-filesystems[1518]: Found loop4 May 8 00:35:40.210606 extend-filesystems[1518]: Found loop5 May 8 00:35:40.210606 extend-filesystems[1518]: Found loop6 May 8 00:35:40.210606 extend-filesystems[1518]: Found loop7 May 8 00:35:40.210606 extend-filesystems[1518]: Found sda May 8 00:35:40.210606 extend-filesystems[1518]: Found sda1 May 8 00:35:40.210606 extend-filesystems[1518]: Found sda2 May 8 00:35:40.210606 extend-filesystems[1518]: Found sda3 May 8 00:35:40.210606 extend-filesystems[1518]: Found usr May 8 00:35:40.210606 extend-filesystems[1518]: Found sda4 May 8 00:35:40.210606 extend-filesystems[1518]: Found sda6 May 8 00:35:40.210606 extend-filesystems[1518]: Found sda7 May 8 00:35:40.213750 extend-filesystems[1518]: Found sda9 May 8 00:35:40.213750 extend-filesystems[1518]: Checking size of /dev/sda9 May 8 00:35:40.216372 update_engine[1524]: I20250508 00:35:40.215591 1524 main.cc:92] Flatcar Update Engine starting May 8 00:35:40.214004 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:35:40.214421 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 00:35:40.221594 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. May 8 00:35:40.225466 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... May 8 00:35:40.233409 jq[1536]: true May 8 00:35:40.234794 dbus-daemon[1516]: [system] SELinux support is enabled May 8 00:35:40.238908 update_engine[1524]: I20250508 00:35:40.238621 1524 update_check_scheduler.cc:74] Next update check in 7m58s May 8 00:35:40.239236 (ntainerd)[1549]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:35:40.239861 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:35:40.241264 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:35:40.242294 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:35:40.244056 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:35:40.244766 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:35:40.247597 extend-filesystems[1518]: Old size kept for /dev/sda9 May 8 00:35:40.247597 extend-filesystems[1518]: Found sr0 May 8 00:35:40.245065 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:35:40.245076 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 00:35:40.249160 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:35:40.249264 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 00:35:40.249844 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. May 8 00:35:40.252401 systemd[1]: Started update-engine.service - Update Engine. May 8 00:35:40.259642 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
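update-engine has scheduled its first check ("Next update check in 7m58s") and locksmithd, the cluster reboot manager, will coordinate the reboot that applying an update requires. On Flatcar both read /etc/flatcar/update.conf, where the reboot strategy is selected:

    # /etc/flatcar/update.conf
    GROUP=stable
    # one of: reboot, etcd-lock, off
    REBOOT_STRATEGY=reboot

The strategy="reboot" in locksmithd's start-up line just below matches this default.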
May 8 00:35:40.261282 tar[1529]: linux-amd64/helm May 8 00:35:40.269106 unknown[1540]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath May 8 00:35:40.279994 unknown[1540]: Core dump limit set to -1 May 8 00:35:40.294497 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1451) May 8 00:35:40.299590 systemd-logind[1523]: Watching system buttons on /dev/input/event1 (Power Button) May 8 00:35:40.299610 systemd-logind[1523]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 8 00:35:40.300567 kernel: NET: Registered PF_VSOCK protocol family May 8 00:35:40.301077 systemd-logind[1523]: New seat seat0. May 8 00:35:40.311251 systemd[1]: Started systemd-logind.service - User Login Management. May 8 00:35:40.346397 bash[1578]: Updated "/home/core/.ssh/authorized_keys" May 8 00:35:40.341618 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:35:40.342654 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 8 00:35:40.396957 locksmithd[1561]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:35:40.414731 sshd_keygen[1567]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:35:40.439675 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:35:40.443605 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:35:40.449654 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:35:40.451514 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:35:40.460700 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 00:35:40.470460 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:35:40.478620 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:35:40.480413 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 8 00:35:40.480742 systemd[1]: Reached target getty.target - Login Prompts. May 8 00:35:40.564427 containerd[1549]: time="2025-05-08T00:35:40.563750879Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 8 00:35:40.604503 containerd[1549]: time="2025-05-08T00:35:40.604361564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:35:40.605320 containerd[1549]: time="2025-05-08T00:35:40.605301531Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:35:40.605364 containerd[1549]: time="2025-05-08T00:35:40.605355761Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:35:40.605424 containerd[1549]: time="2025-05-08T00:35:40.605416619Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:35:40.605555 containerd[1549]: time="2025-05-08T00:35:40.605544823Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 00:35:40.605599 containerd[1549]: time="2025-05-08T00:35:40.605591268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1
May 8 00:35:40.606186 containerd[1549]: time="2025-05-08T00:35:40.605654972Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:35:40.606186 containerd[1549]: time="2025-05-08T00:35:40.605665088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 8 00:35:40.606186 containerd[1549]: time="2025-05-08T00:35:40.605771687Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:35:40.606186 containerd[1549]: time="2025-05-08T00:35:40.605781602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 8 00:35:40.606186 containerd[1549]: time="2025-05-08T00:35:40.605788915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:35:40.606186 containerd[1549]: time="2025-05-08T00:35:40.605797670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 8 00:35:40.606186 containerd[1549]: time="2025-05-08T00:35:40.605852825Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 8 00:35:40.606186 containerd[1549]: time="2025-05-08T00:35:40.605973866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 8 00:35:40.606186 containerd[1549]: time="2025-05-08T00:35:40.606028237Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:35:40.606186 containerd[1549]: time="2025-05-08T00:35:40.606036289Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 8 00:35:40.606186 containerd[1549]: time="2025-05-08T00:35:40.606075741Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 8 00:35:40.606367 containerd[1549]: time="2025-05-08T00:35:40.606101322Z" level=info msg="metadata content store policy set" policy=shared
May 8 00:35:40.607354 containerd[1549]: time="2025-05-08T00:35:40.607343203Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 8 00:35:40.607425 containerd[1549]: time="2025-05-08T00:35:40.607411500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 8 00:35:40.607481 containerd[1549]: time="2025-05-08T00:35:40.607471810Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 8 00:35:40.607519 containerd[1549]: time="2025-05-08T00:35:40.607512582Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 8 00:35:40.607565 containerd[1549]: time="2025-05-08T00:35:40.607557753Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 8 00:35:40.607655 containerd[1549]: time="2025-05-08T00:35:40.607646450Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 8 00:35:40.607816 containerd[1549]: time="2025-05-08T00:35:40.607807543Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 8 00:35:40.607902 containerd[1549]: time="2025-05-08T00:35:40.607893489Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 8 00:35:40.607936 containerd[1549]: time="2025-05-08T00:35:40.607929584Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 8 00:35:40.607976 containerd[1549]: time="2025-05-08T00:35:40.607965785Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 8 00:35:40.608008 containerd[1549]: time="2025-05-08T00:35:40.608002440Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 8 00:35:40.608045 containerd[1549]: time="2025-05-08T00:35:40.608038214Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 8 00:35:40.608075 containerd[1549]: time="2025-05-08T00:35:40.608069269Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 8 00:35:40.608105 containerd[1549]: time="2025-05-08T00:35:40.608099325Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 8 00:35:40.608135 containerd[1549]: time="2025-05-08T00:35:40.608129017Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 8 00:35:40.608164 containerd[1549]: time="2025-05-08T00:35:40.608158437Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 8 00:35:40.608199 containerd[1549]: time="2025-05-08T00:35:40.608190713Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 8 00:35:40.608235 containerd[1549]: time="2025-05-08T00:35:40.608227811Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 8 00:35:40.608270 containerd[1549]: time="2025-05-08T00:35:40.608263678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 8 00:35:40.608302 containerd[1549]: time="2025-05-08T00:35:40.608295896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 8 00:35:40.608332 containerd[1549]: time="2025-05-08T00:35:40.608326245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 8 00:35:40.608368 containerd[1549]: time="2025-05-08T00:35:40.608360956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 8 00:35:40.608442 containerd[1549]: time="2025-05-08T00:35:40.608434521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 8 00:35:40.608476 containerd[1549]: time="2025-05-08T00:35:40.608470456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 8 00:35:40.608506 containerd[1549]: time="2025-05-08T00:35:40.608500472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 8 00:35:40.608537 containerd[1549]: time="2025-05-08T00:35:40.608530602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 8 00:35:40.608566 containerd[1549]: time="2025-05-08T00:35:40.608560498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 8 00:35:40.608604 containerd[1549]: time="2025-05-08T00:35:40.608597371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 8 00:35:40.608716 containerd[1549]: time="2025-05-08T00:35:40.608641480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 8 00:35:40.608716 containerd[1549]: time="2025-05-08T00:35:40.608651842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 8 00:35:40.608716 containerd[1549]: time="2025-05-08T00:35:40.608659746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 8 00:35:40.608716 containerd[1549]: time="2025-05-08T00:35:40.608668190Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 8 00:35:40.608716 containerd[1549]: time="2025-05-08T00:35:40.608680068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 8 00:35:40.608716 containerd[1549]: time="2025-05-08T00:35:40.608686572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 8 00:35:40.608716 containerd[1549]: time="2025-05-08T00:35:40.608692451Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 8 00:35:40.608872 containerd[1549]: time="2025-05-08T00:35:40.608819656Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 8 00:35:40.608872 containerd[1549]: time="2025-05-08T00:35:40.608833335Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 8 00:35:40.608872 containerd[1549]: time="2025-05-08T00:35:40.608839909Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 8 00:35:40.608872 containerd[1549]: time="2025-05-08T00:35:40.608846623Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 8 00:35:40.608872 containerd[1549]: time="2025-05-08T00:35:40.608852152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 8 00:35:40.608872 containerd[1549]: time="2025-05-08T00:35:40.608862759Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 8 00:35:40.609396 containerd[1549]: time="2025-05-08T00:35:40.608995317Z" level=info msg="NRI interface is disabled by configuration."
May 8 00:35:40.609396 containerd[1549]: time="2025-05-08T00:35:40.609006618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 8 00:35:40.609436 containerd[1549]: time="2025-05-08T00:35:40.609190424Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 8 00:35:40.609436 containerd[1549]: time="2025-05-08T00:35:40.609226832Z" level=info msg="Connect containerd service"
May 8 00:35:40.609436 containerd[1549]: time="2025-05-08T00:35:40.609246199Z" level=info msg="using legacy CRI server"
May 8 00:35:40.609436 containerd[1549]: time="2025-05-08T00:35:40.609250541Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 8 00:35:40.609436 containerd[1549]: time="2025-05-08T00:35:40.609302903Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 8 00:35:40.609881 containerd[1549]: time="2025-05-08T00:35:40.609869177Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 00:35:40.610017 containerd[1549]: time="2025-05-08T00:35:40.609997322Z" level=info msg="Start subscribing containerd event"
May 8 00:35:40.610184 containerd[1549]: time="2025-05-08T00:35:40.610149565Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 8 00:35:40.610217 containerd[1549]: time="2025-05-08T00:35:40.610209562Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 8 00:35:40.610239 containerd[1549]: time="2025-05-08T00:35:40.610174891Z" level=info msg="Start recovering state"
May 8 00:35:40.610271 containerd[1549]: time="2025-05-08T00:35:40.610260307Z" level=info msg="Start event monitor"
May 8 00:35:40.610292 containerd[1549]: time="2025-05-08T00:35:40.610273507Z" level=info msg="Start snapshots syncer"
May 8 00:35:40.610292 containerd[1549]: time="2025-05-08T00:35:40.610279331Z" level=info msg="Start cni network conf syncer for default"
May 8 00:35:40.610292 containerd[1549]: time="2025-05-08T00:35:40.610285360Z" level=info msg="Start streaming server"
May 8 00:35:40.610376 systemd[1]: Started containerd.service - containerd container runtime.
May 8 00:35:40.611628 containerd[1549]: time="2025-05-08T00:35:40.611139452Z" level=info msg="containerd successfully booted in 0.050202s"
May 8 00:35:40.725748 tar[1529]: linux-amd64/LICENSE
May 8 00:35:40.725872 tar[1529]: linux-amd64/README.md
May 8 00:35:40.738811 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 8 00:35:41.611478 systemd-networkd[1452]: ens192: Gained IPv6LL
May 8 00:35:41.611867 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection.
May 8 00:35:41.612643 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 8 00:35:41.613717 systemd[1]: Reached target network-online.target - Network is Online.
May 8 00:35:41.619037 systemd[1]: Starting coreos-metadata.service - VMware metadata agent...
May 8 00:35:41.636524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:35:41.637772 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 8 00:35:41.683514 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 8 00:35:41.684115 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 8 00:35:41.684333 systemd[1]: Finished coreos-metadata.service - VMware metadata agent.
May 8 00:35:41.685966 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 8 00:35:43.099075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:35:43.099644 systemd[1]: Reached target multi-user.target - Multi-User System.
May 8 00:35:43.100307 systemd[1]: Startup finished in 1.033s (kernel) + 5.429s (initrd) + 5.070s (userspace) = 11.533s.
May 8 00:35:43.104654 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:35:43.260674 login[1625]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 8 00:35:43.261879 login[1628]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 8 00:35:43.269123 systemd-logind[1523]: New session 1 of user core.
May 8 00:35:43.270406 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 8 00:35:43.285616 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
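The level=error entry above ("no network config found in /etc/cni/net.d") is expected on a first boot: the CRI plugin starts before any pod network addon has installed a CNI configuration. For orientation only, a minimal bridge-style conflist of the kind an addon later drops into /etc/cni/net.d might look like the sketch below; the file name, network name, and subnet are illustrative assumptions, not values from this host.

    /etc/cni/net.d/10-bridge.conflist (illustrative sketch):
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.244.0.0/24" }]]
          }
        }
      ]
    }

Once a valid conflist exists, the "Start cni network conf syncer for default" loop logged above picks it up without a containerd restart.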
May 8 00:35:43.287222 systemd-logind[1523]: New session 2 of user core.
May 8 00:35:43.296211 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 8 00:35:43.297907 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 8 00:35:43.309773 (systemd)[1701]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 8 00:35:43.452530 systemd[1701]: Queued start job for default target default.target.
May 8 00:35:43.462626 systemd[1701]: Created slice app.slice - User Application Slice.
May 8 00:35:43.462652 systemd[1701]: Reached target paths.target - Paths.
May 8 00:35:43.462662 systemd[1701]: Reached target timers.target - Timers.
May 8 00:35:43.463539 systemd[1701]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 8 00:35:43.477493 systemd[1701]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 8 00:35:43.478068 systemd[1701]: Reached target sockets.target - Sockets.
May 8 00:35:43.478088 systemd[1701]: Reached target basic.target - Basic System.
May 8 00:35:43.478134 systemd[1701]: Reached target default.target - Main User Target.
May 8 00:35:43.478162 systemd[1701]: Startup finished in 164ms.
May 8 00:35:43.478531 systemd[1]: Started user@500.service - User Manager for UID 500.
May 8 00:35:43.482552 systemd[1]: Started session-1.scope - Session 1 of User core.
May 8 00:35:43.483527 systemd[1]: Started session-2.scope - Session 2 of User core.
May 8 00:35:44.494583 kubelet[1694]: E0508 00:35:44.494544 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:35:44.496411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:35:44.496497 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:35:54.746903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 8 00:35:54.758635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:35:54.820118 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:35:54.822979 (kubelet)[1745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:35:54.854650 kubelet[1745]: E0508 00:35:54.854606 1745 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:35:54.857834 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:35:54.857958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:36:05.108237 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 8 00:36:05.121722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:36:05.453007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
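The kubelet start attempts above all fail the same way: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is written during `kubeadm init`/`kubeadm join`, so this restart loop is expected until the node is joined. For orientation, a minimal KubeletConfiguration of the kind that ends up there might look like the sketch below; the values are common defaults chosen for illustration, not read from this host.

    /var/lib/kubelet/config.yaml (illustrative sketch):
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                  # matches the SystemdCgroup:true runc option in the containerd config above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt

systemd keeps rescheduling the unit (restart counter 1, 2, ...) until the file appears and the kubelet can parse it.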
May 8 00:36:05.456262 (kubelet)[1760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:36:05.507645 kubelet[1760]: E0508 00:36:05.507591 1760 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:36:05.509046 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:36:05.509139 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:36:10.434596 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 8 00:36:10.436030 systemd[1]: Started sshd@0-139.178.70.103:22-139.178.68.195:53924.service - OpenSSH per-connection server daemon (139.178.68.195:53924).
May 8 00:36:10.472785 sshd[1767]: Accepted publickey for core from 139.178.68.195 port 53924 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg
May 8 00:36:10.473698 sshd[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:36:10.476517 systemd-logind[1523]: New session 3 of user core.
May 8 00:36:10.483464 systemd[1]: Started session-3.scope - Session 3 of User core.
May 8 00:36:10.538638 systemd[1]: Started sshd@1-139.178.70.103:22-139.178.68.195:53938.service - OpenSSH per-connection server daemon (139.178.68.195:53938).
May 8 00:36:10.573758 sshd[1772]: Accepted publickey for core from 139.178.68.195 port 53938 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg
May 8 00:36:10.574972 sshd[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:36:10.579464 systemd-logind[1523]: New session 4 of user core.
May 8 00:36:10.592546 systemd[1]: Started session-4.scope - Session 4 of User core.
May 8 00:36:10.640697 sshd[1772]: pam_unix(sshd:session): session closed for user core
May 8 00:36:10.652160 systemd[1]: sshd@1-139.178.70.103:22-139.178.68.195:53938.service: Deactivated successfully.
May 8 00:36:10.652974 systemd[1]: session-4.scope: Deactivated successfully.
May 8 00:36:10.653724 systemd-logind[1523]: Session 4 logged out. Waiting for processes to exit.
May 8 00:36:10.654427 systemd[1]: Started sshd@2-139.178.70.103:22-139.178.68.195:53942.service - OpenSSH per-connection server daemon (139.178.68.195:53942).
May 8 00:36:10.655642 systemd-logind[1523]: Removed session 4.
May 8 00:36:10.685678 sshd[1779]: Accepted publickey for core from 139.178.68.195 port 53942 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg
May 8 00:36:10.686485 sshd[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:36:10.688919 systemd-logind[1523]: New session 5 of user core.
May 8 00:36:10.696456 systemd[1]: Started session-5.scope - Session 5 of User core.
May 8 00:36:10.742690 sshd[1779]: pam_unix(sshd:session): session closed for user core
May 8 00:36:10.752375 systemd[1]: sshd@2-139.178.70.103:22-139.178.68.195:53942.service: Deactivated successfully.
May 8 00:36:10.753492 systemd[1]: session-5.scope: Deactivated successfully.
May 8 00:36:10.754073 systemd-logind[1523]: Session 5 logged out. Waiting for processes to exit.
May 8 00:36:10.755561 systemd[1]: Started sshd@3-139.178.70.103:22-139.178.68.195:53950.service - OpenSSH per-connection server daemon (139.178.68.195:53950).
May 8 00:36:10.756294 systemd-logind[1523]: Removed session 5.
May 8 00:36:10.786942 sshd[1786]: Accepted publickey for core from 139.178.68.195 port 53950 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg
May 8 00:36:10.787764 sshd[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:36:10.790132 systemd-logind[1523]: New session 6 of user core.
May 8 00:36:10.797490 systemd[1]: Started session-6.scope - Session 6 of User core.
May 8 00:36:10.846074 sshd[1786]: pam_unix(sshd:session): session closed for user core
May 8 00:36:10.853963 systemd[1]: sshd@3-139.178.70.103:22-139.178.68.195:53950.service: Deactivated successfully.
May 8 00:36:10.854945 systemd[1]: session-6.scope: Deactivated successfully.
May 8 00:36:10.855804 systemd-logind[1523]: Session 6 logged out. Waiting for processes to exit.
May 8 00:36:10.861771 systemd[1]: Started sshd@4-139.178.70.103:22-139.178.68.195:53962.service - OpenSSH per-connection server daemon (139.178.68.195:53962).
May 8 00:36:10.863623 systemd-logind[1523]: Removed session 6.
May 8 00:36:10.890407 sshd[1793]: Accepted publickey for core from 139.178.68.195 port 53962 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg
May 8 00:36:10.891232 sshd[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:36:10.894527 systemd-logind[1523]: New session 7 of user core.
May 8 00:36:10.898489 systemd[1]: Started session-7.scope - Session 7 of User core.
May 8 00:36:10.955213 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 8 00:36:10.955666 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:36:10.969822 sudo[1796]: pam_unix(sudo:session): session closed for user root
May 8 00:36:10.970911 sshd[1793]: pam_unix(sshd:session): session closed for user core
May 8 00:36:10.980072 systemd[1]: sshd@4-139.178.70.103:22-139.178.68.195:53962.service: Deactivated successfully.
May 8 00:36:10.981049 systemd[1]: session-7.scope: Deactivated successfully.
May 8 00:36:10.981918 systemd-logind[1523]: Session 7 logged out. Waiting for processes to exit.
May 8 00:36:10.985614 systemd[1]: Started sshd@5-139.178.70.103:22-139.178.68.195:53976.service - OpenSSH per-connection server daemon (139.178.68.195:53976).
May 8 00:36:10.986570 systemd-logind[1523]: Removed session 7.
May 8 00:36:11.014774 sshd[1801]: Accepted publickey for core from 139.178.68.195 port 53976 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg
May 8 00:36:11.015638 sshd[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:36:11.018420 systemd-logind[1523]: New session 8 of user core.
May 8 00:36:11.024512 systemd[1]: Started session-8.scope - Session 8 of User core.
May 8 00:36:11.073566 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 8 00:36:11.073760 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:36:11.076014 sudo[1805]: pam_unix(sudo:session): session closed for user root
May 8 00:36:11.079028 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 8 00:36:11.079318 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:36:11.088548 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
May 8 00:36:11.089600 auditctl[1808]: No rules
May 8 00:36:11.089787 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:36:11.089900 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
May 8 00:36:11.091329 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 8 00:36:11.107283 augenrules[1826]: No rules
May 8 00:36:11.107634 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 8 00:36:11.108177 sudo[1804]: pam_unix(sudo:session): session closed for user root
May 8 00:36:11.109765 sshd[1801]: pam_unix(sshd:session): session closed for user core
May 8 00:36:11.114848 systemd[1]: sshd@5-139.178.70.103:22-139.178.68.195:53976.service: Deactivated successfully.
May 8 00:36:11.115896 systemd[1]: session-8.scope: Deactivated successfully.
May 8 00:36:11.116751 systemd-logind[1523]: Session 8 logged out. Waiting for processes to exit.
May 8 00:36:11.120566 systemd[1]: Started sshd@6-139.178.70.103:22-139.178.68.195:53992.service - OpenSSH per-connection server daemon (139.178.68.195:53992).
May 8 00:36:11.121514 systemd-logind[1523]: Removed session 8.
May 8 00:36:11.148698 sshd[1834]: Accepted publickey for core from 139.178.68.195 port 53992 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg
May 8 00:36:11.149495 sshd[1834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:36:11.151785 systemd-logind[1523]: New session 9 of user core.
May 8 00:36:11.158570 systemd[1]: Started session-9.scope - Session 9 of User core.
May 8 00:36:11.208804 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 8 00:36:11.209019 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:36:11.488597 (dockerd)[1852]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 8 00:36:11.488955 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 8 00:36:11.765844 dockerd[1852]: time="2025-05-08T00:36:11.765502089Z" level=info msg="Starting up"
May 8 00:36:11.950888 dockerd[1852]: time="2025-05-08T00:36:11.950851726Z" level=info msg="Loading containers: start."
May 8 00:37:32.512803 systemd-resolved[1414]: Clock change detected. Flushing caches.
May 8 00:37:32.513196 systemd-timesyncd[1460]: Contacted time server 64.142.54.12:123 (2.flatcar.pool.ntp.org).
May 8 00:37:32.513237 systemd-timesyncd[1460]: Initial clock synchronization to Thu 2025-05-08 00:37:32.512703 UTC.
May 8 00:37:32.666934 kernel: Initializing XFRM netlink socket
May 8 00:37:32.756425 systemd-networkd[1452]: docker0: Link UP
May 8 00:37:32.770983 dockerd[1852]: time="2025-05-08T00:37:32.770828495Z" level=info msg="Loading containers: done."
May 8 00:37:32.780691 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck402904151-merged.mount: Deactivated successfully.
May 8 00:37:32.781591 dockerd[1852]: time="2025-05-08T00:37:32.781567658Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 8 00:37:32.781653 dockerd[1852]: time="2025-05-08T00:37:32.781637291Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 8 00:37:32.781711 dockerd[1852]: time="2025-05-08T00:37:32.781696958Z" level=info msg="Daemon has completed initialization"
May 8 00:37:32.797690 dockerd[1852]: time="2025-05-08T00:37:32.797638160Z" level=info msg="API listen on /run/docker.sock"
May 8 00:37:32.797751 systemd[1]: Started docker.service - Docker Application Container Engine.
May 8 00:37:33.761526 containerd[1549]: time="2025-05-08T00:37:33.761486543Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 8 00:37:34.535732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1414111371.mount: Deactivated successfully.
May 8 00:37:35.483476 containerd[1549]: time="2025-05-08T00:37:35.483443388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:35.484361 containerd[1549]: time="2025-05-08T00:37:35.484328119Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987"
May 8 00:37:35.484802 containerd[1549]: time="2025-05-08T00:37:35.484786984Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:35.486216 containerd[1549]: time="2025-05-08T00:37:35.486193298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:35.486893 containerd[1549]: time="2025-05-08T00:37:35.486800330Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 1.725284585s"
May 8 00:37:35.486893 containerd[1549]: time="2025-05-08T00:37:35.486819834Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\""
May 8 00:37:35.488127 containerd[1549]: time="2025-05-08T00:37:35.488095664Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 8 00:37:36.139998 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 8 00:37:36.148350 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:37:36.271306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:37:36.273997 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:37:36.330542 kubelet[2057]: E0508 00:37:36.330481 2057 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:37:36.331835 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:37:36.331931 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:37:37.072733 containerd[1549]: time="2025-05-08T00:37:37.071981972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:37.077591 containerd[1549]: time="2025-05-08T00:37:37.077560569Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776"
May 8 00:37:37.081366 containerd[1549]: time="2025-05-08T00:37:37.081335073Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:37.090999 containerd[1549]: time="2025-05-08T00:37:37.090957463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:37.091835 containerd[1549]: time="2025-05-08T00:37:37.091722080Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.603606289s"
May 8 00:37:37.091835 containerd[1549]: time="2025-05-08T00:37:37.091746232Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\""
May 8 00:37:37.092360 containerd[1549]: time="2025-05-08T00:37:37.092235742Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 8 00:37:38.310121 containerd[1549]: time="2025-05-08T00:37:38.310091746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:38.310942 containerd[1549]: time="2025-05-08T00:37:38.310893838Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386"
May 8 00:37:38.311156 containerd[1549]: time="2025-05-08T00:37:38.311141072Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:38.312929 containerd[1549]: time="2025-05-08T00:37:38.312866842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:38.313576 containerd[1549]: time="2025-05-08T00:37:38.313504092Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.221210763s"
May 8 00:37:38.313576 containerd[1549]: time="2025-05-08T00:37:38.313523385Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\""
May 8 00:37:38.314342 containerd[1549]: time="2025-05-08T00:37:38.314206146Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 8 00:37:39.157666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2244127655.mount: Deactivated successfully.
May 8 00:37:39.606805 containerd[1549]: time="2025-05-08T00:37:39.606218444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:39.611562 containerd[1549]: time="2025-05-08T00:37:39.611532156Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625"
May 8 00:37:39.617795 containerd[1549]: time="2025-05-08T00:37:39.617755745Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:39.633507 containerd[1549]: time="2025-05-08T00:37:39.633454583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:39.634225 containerd[1549]: time="2025-05-08T00:37:39.634060691Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.319837201s"
May 8 00:37:39.634225 containerd[1549]: time="2025-05-08T00:37:39.634087487Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\""
May 8 00:37:39.634408 containerd[1549]: time="2025-05-08T00:37:39.634392044Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 8 00:37:40.283152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1846040501.mount: Deactivated successfully.
May 8 00:37:41.101153 containerd[1549]: time="2025-05-08T00:37:41.100728926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:41.101592 containerd[1549]: time="2025-05-08T00:37:41.101547461Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
May 8 00:37:41.102194 containerd[1549]: time="2025-05-08T00:37:41.101871213Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:41.104345 containerd[1549]: time="2025-05-08T00:37:41.104310459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:41.105313 containerd[1549]: time="2025-05-08T00:37:41.105203650Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.470791247s"
May 8 00:37:41.105313 containerd[1549]: time="2025-05-08T00:37:41.105232180Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 8 00:37:41.105655 containerd[1549]: time="2025-05-08T00:37:41.105627138Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 8 00:37:41.517862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2766507762.mount: Deactivated successfully.
May 8 00:37:41.521000 containerd[1549]: time="2025-05-08T00:37:41.520956002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:41.521467 containerd[1549]: time="2025-05-08T00:37:41.521438312Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 8 00:37:41.521615 containerd[1549]: time="2025-05-08T00:37:41.521596162Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:41.523502 containerd[1549]: time="2025-05-08T00:37:41.523482826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:41.524122 containerd[1549]: time="2025-05-08T00:37:41.524097105Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 418.382204ms"
May 8 00:37:41.524244 containerd[1549]: time="2025-05-08T00:37:41.524180033Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 8 00:37:41.524691 containerd[1549]: time="2025-05-08T00:37:41.524508751Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 8 00:37:42.137490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3937441517.mount: Deactivated successfully.
May 8 00:37:43.773833 containerd[1549]: time="2025-05-08T00:37:43.773757182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:43.773833 containerd[1549]: time="2025-05-08T00:37:43.773808825Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
May 8 00:37:43.774905 containerd[1549]: time="2025-05-08T00:37:43.774882120Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:43.776647 containerd[1549]: time="2025-05-08T00:37:43.776591753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:37:43.777396 containerd[1549]: time="2025-05-08T00:37:43.777306674Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.252781089s"
May 8 00:37:43.777396 containerd[1549]: time="2025-05-08T00:37:43.777327038Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
May 8 00:37:45.443205 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:37:45.453131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:37:45.474400 systemd[1]: Reloading requested from client PID 2209 ('systemctl') (unit session-9.scope)...
May 8 00:37:45.474496 systemd[1]: Reloading...
May 8 00:37:45.543934 zram_generator::config[2248]: No configuration found.
May 8 00:37:45.603162 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
May 8 00:37:45.618416 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:37:45.662985 systemd[1]: Reloading finished in 188 ms.
May 8 00:37:45.697340 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 8 00:37:45.697398 systemd[1]: kubelet.service: Failed with result 'signal'.
May 8 00:37:45.697729 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:37:45.703119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:37:46.050003 update_engine[1524]: I20250508 00:37:46.049957 1524 update_attempter.cc:509] Updating boot flags...
May 8 00:37:46.095338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:37:46.100587 (kubelet)[2319]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 8 00:37:46.122968 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2328)
May 8 00:37:46.161006 kubelet[2319]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:37:46.161006 kubelet[2319]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 8 00:37:46.161006 kubelet[2319]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
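The recurring "(kubelet)[...]" notices about KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS come from $-expansions in the kubelet unit's ExecStart= that reference optional EnvironmentFile= entries. A kubeadm-style drop-in has roughly this shape; the paths below are the common upstream defaults, assumed here rather than read from this host:

    /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (illustrative sketch):
    [Service]
    # the leading "-" marks each file optional, which is why an unset
    # variable is only a notice at start-up, not an error
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

The deprecated-flag warnings that follow refer to exactly these flag-style arguments; upstream's direction is to move them into the --config file instead.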
May 8 00:37:46.162134 kubelet[2319]: I0508 00:37:46.161863 2319 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 8 00:37:46.383933 kubelet[2319]: I0508 00:37:46.383856 2319 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 8 00:37:46.383933 kubelet[2319]: I0508 00:37:46.383876 2319 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 8 00:37:46.384227 kubelet[2319]: I0508 00:37:46.384040 2319 server.go:929] "Client rotation is on, will bootstrap in background"
May 8 00:37:46.404847 kubelet[2319]: E0508 00:37:46.404825 2319 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError"
May 8 00:37:46.404925 kubelet[2319]: I0508 00:37:46.404919 2319 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 8 00:37:46.416446 kubelet[2319]: E0508 00:37:46.416423 2319 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 8 00:37:46.416446 kubelet[2319]: I0508 00:37:46.416441 2319 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 8 00:37:46.422434 kubelet[2319]: I0508 00:37:46.422308 2319 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 8 00:37:46.423218 kubelet[2319]: I0508 00:37:46.423179 2319 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 8 00:37:46.423279 kubelet[2319]: I0508 00:37:46.423261 2319 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 8 00:37:46.423373 kubelet[2319]: I0508 00:37:46.423277 2319 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 8 00:37:46.423434 kubelet[2319]: I0508 00:37:46.423374 2319 topology_manager.go:138] "Creating topology manager with none policy"
May 8 00:37:46.423434 kubelet[2319]: I0508 00:37:46.423380 2319 container_manager_linux.go:300] "Creating device plugin manager"
May 8 00:37:46.423468 kubelet[2319]: I0508 00:37:46.423442 2319 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:37:46.424846 kubelet[2319]: I0508 00:37:46.424833 2319 kubelet.go:408] "Attempting to sync node with API server"
May 8 00:37:46.424881 kubelet[2319]: I0508 00:37:46.424848 2319 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 8 00:37:46.425999 kubelet[2319]: I0508 00:37:46.425874 2319 kubelet.go:314] "Adding apiserver pod source"
May 8 00:37:46.425999 kubelet[2319]: I0508 00:37:46.425887 2319 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 8 00:37:46.431010 kubelet[2319]: W0508 00:37:46.430873 2319 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused
May 8 00:37:46.431010 kubelet[2319]: E0508 00:37:46.430903 2319 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError"
May 8 00:37:46.432830 kubelet[2319]: W0508 00:37:46.432682 2319 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused
May 8 00:37:46.432830 kubelet[2319]: E0508 00:37:46.432705 2319 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError"
May 8 00:37:46.432830 kubelet[2319]: I0508 00:37:46.432748 2319 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 8 00:37:46.434132 kubelet[2319]: I0508 00:37:46.434081 2319 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 8 00:37:46.434676 kubelet[2319]: W0508 00:37:46.434563 2319 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 8 00:37:46.434873 kubelet[2319]: I0508 00:37:46.434861 2319 server.go:1269] "Started kubelet"
May 8 00:37:46.435820 kubelet[2319]: I0508 00:37:46.435473 2319 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 8 00:37:46.436987 kubelet[2319]: I0508 00:37:46.436979 2319 server.go:460] "Adding debug handlers to kubelet server"
May 8 00:37:46.438123 kubelet[2319]: I0508 00:37:46.438115 2319 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 8 00:37:46.440226 kubelet[2319]: I0508 00:37:46.440050 2319 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 8 00:37:46.440226 kubelet[2319]: I0508 00:37:46.440159 2319 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 8 00:37:46.442357 kubelet[2319]: I0508 00:37:46.442324 2319 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 8 00:37:46.444070 kubelet[2319]: I0508 00:37:46.443828 2319 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 8 00:37:46.444070 kubelet[2319]: E0508 00:37:46.443939 2319 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 8 00:37:46.444321 kubelet[2319]: E0508 00:37:46.441007 2319 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.103:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d664288934e17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:37:46.434850327 +0000 UTC m=+0.332045598,LastTimestamp:2025-05-08 00:37:46.434850327 +0000 UTC m=+0.332045598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 8 00:37:46.445641 kubelet[2319]: E0508 00:37:46.444631 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="200ms"
May 8 00:37:46.445641 kubelet[2319]: I0508 00:37:46.445324 2319 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 8 00:37:46.445641 kubelet[2319]: I0508 00:37:46.445349 2319 reconciler.go:26] "Reconciler: start to sync state"
May 8 00:37:46.445937 kubelet[2319]: W0508 00:37:46.445909 2319 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused
May 8 00:37:46.446303 kubelet[2319]: E0508 00:37:46.446293 2319 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError"
May 8 00:37:46.446452 kubelet[2319]: I0508 00:37:46.446443 2319 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 8 00:37:46.448439 kubelet[2319]: E0508 00:37:46.448429 2319 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 8 00:37:46.448825 kubelet[2319]: I0508 00:37:46.448817 2319 factory.go:221] Registration of the containerd container factory successfully
May 8 00:37:46.448869 kubelet[2319]: I0508 00:37:46.448864 2319 factory.go:221] Registration of the systemd container factory successfully
May 8 00:37:46.450109 kubelet[2319]: I0508 00:37:46.450087 2319 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 8 00:37:46.450651 kubelet[2319]: I0508 00:37:46.450640 2319 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 8 00:37:46.450678 kubelet[2319]: I0508 00:37:46.450654 2319 status_manager.go:217] "Starting to sync pod status with apiserver"
May 8 00:37:46.450678 kubelet[2319]: I0508 00:37:46.450666 2319 kubelet.go:2321] "Starting kubelet main sync loop"
May 8 00:37:46.450715 kubelet[2319]: E0508 00:37:46.450695 2319 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 8 00:37:46.455590 kubelet[2319]: W0508 00:37:46.455554 2319 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused
May 8 00:37:46.455590 kubelet[2319]: E0508 00:37:46.455590 2319 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError"
May 8 00:37:46.469560 kubelet[2319]: I0508 00:37:46.469544 2319 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 8 00:37:46.469560 kubelet[2319]: I0508 00:37:46.469554 2319 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 8 00:37:46.469560 kubelet[2319]: I0508 00:37:46.469563 2319 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:37:46.472539 kubelet[2319]: I0508 00:37:46.472528 2319 policy_none.go:49] "None policy: Start"
May 8 00:37:46.473013 kubelet[2319]: I0508 00:37:46.472835 2319 memory_manager.go:170] "Starting memorymanager" policy="None"
May 8 00:37:46.473013 kubelet[2319]: I0508 00:37:46.472847 2319 state_mem.go:35] "Initializing new in-memory state store"
May 8 00:37:46.478955 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 8 00:37:46.488070 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 8 00:37:46.491746 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 8 00:37:46.498419 kubelet[2319]: I0508 00:37:46.498404 2319 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 8 00:37:46.498516 kubelet[2319]: I0508 00:37:46.498506 2319 eviction_manager.go:189] "Eviction manager: starting control loop"
May 8 00:37:46.498539 kubelet[2319]: I0508 00:37:46.498515 2319 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 8 00:37:46.499190 kubelet[2319]: I0508 00:37:46.498781 2319 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 8 00:37:46.500597 kubelet[2319]: E0508 00:37:46.500584 2319 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 8 00:37:46.561813 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice.
May 8 00:37:46.585847 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice.
May 8 00:37:46.589677 systemd[1]: Created slice kubepods-burstable-pod0a6a17d93d74faf34cb814d0f8f15050.slice - libcontainer container kubepods-burstable-pod0a6a17d93d74faf34cb814d0f8f15050.slice.
May 8 00:37:46.599835 kubelet[2319]: I0508 00:37:46.599812 2319 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 8 00:37:46.600104 kubelet[2319]: E0508 00:37:46.600078 2319 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost"
May 8 00:37:46.645645 kubelet[2319]: E0508 00:37:46.645610 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="400ms"
May 8 00:37:46.745730 kubelet[2319]: I0508 00:37:46.745695 2319 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:37:46.745730 kubelet[2319]: I0508 00:37:46.745732 2319 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:37:46.745849 kubelet[2319]: I0508 00:37:46.745760 2319 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:37:46.745849 kubelet[2319]: I0508 00:37:46.745772 2319 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a6a17d93d74faf34cb814d0f8f15050-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a6a17d93d74faf34cb814d0f8f15050\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:37:46.745849 kubelet[2319]: I0508 00:37:46.745781 2319 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a6a17d93d74faf34cb814d0f8f15050-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a6a17d93d74faf34cb814d0f8f15050\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:37:46.745849 kubelet[2319]: I0508 00:37:46.745791 2319 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:37:46.745849 kubelet[2319]: I0508 00:37:46.745799 2319 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:37:46.745962 kubelet[2319]: I0508 00:37:46.745809 2319 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 8 00:37:46.745962 kubelet[2319]: I0508 00:37:46.745818 2319 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a6a17d93d74faf34cb814d0f8f15050-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0a6a17d93d74faf34cb814d0f8f15050\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:37:46.801689 kubelet[2319]: I0508 00:37:46.801611 2319 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 8 00:37:46.801849 kubelet[2319]: E0508 00:37:46.801831 2319 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost"
May 8 00:37:46.883732 containerd[1549]: time="2025-05-08T00:37:46.883635316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}"
May 8 00:37:46.892592 containerd[1549]: time="2025-05-08T00:37:46.892544591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}"
May 8 00:37:46.892755 containerd[1549]: time="2025-05-08T00:37:46.892544657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0a6a17d93d74faf34cb814d0f8f15050,Namespace:kube-system,Attempt:0,}"
May 8 00:37:47.046679 kubelet[2319]: E0508 00:37:47.046579 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="800ms"
May 8 00:37:47.202796 kubelet[2319]: I0508 00:37:47.202777 2319 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 8 00:37:47.203039 kubelet[2319]: E0508 00:37:47.202987 2319 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost"
May 8 00:37:47.382611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999396736.mount: Deactivated successfully.
May 8 00:37:47.386772 containerd[1549]: time="2025-05-08T00:37:47.386363276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:37:47.387103 containerd[1549]: time="2025-05-08T00:37:47.387077320Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
May 8 00:37:47.387776 containerd[1549]: time="2025-05-08T00:37:47.387761521Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:37:47.388331 containerd[1549]: time="2025-05-08T00:37:47.388311754Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 8 00:37:47.388863 containerd[1549]: time="2025-05-08T00:37:47.388842649Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 8 00:37:47.389060 containerd[1549]: time="2025-05-08T00:37:47.389040056Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:37:47.392339 containerd[1549]: time="2025-05-08T00:37:47.392322340Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:37:47.393802 containerd[1549]: time="2025-05-08T00:37:47.393786215Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 510.105178ms"
May 8 00:37:47.398299 containerd[1549]: time="2025-05-08T00:37:47.398285659Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 505.580764ms"
May 8 00:37:47.398811 containerd[1549]: time="2025-05-08T00:37:47.398794117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:37:47.399029 containerd[1549]: time="2025-05-08T00:37:47.399013472Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 506.421ms"
May 8 00:37:47.407157 kubelet[2319]: W0508 00:37:47.407097 2319 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused
May 8 00:37:47.407157 kubelet[2319]: E0508 00:37:47.407137 2319 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError"
May 8 00:37:47.485546 kubelet[2319]: W0508 00:37:47.485491 2319 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused
May 8 00:37:47.485546 kubelet[2319]: E0508 00:37:47.485531 2319 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError"
May 8 00:37:47.691512 containerd[1549]: time="2025-05-08T00:37:47.691414020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:37:47.691692 containerd[1549]: time="2025-05-08T00:37:47.691639730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:37:47.691692 containerd[1549]: time="2025-05-08T00:37:47.691664050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:37:47.691858 containerd[1549]: time="2025-05-08T00:37:47.691812395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:37:47.692467 containerd[1549]: time="2025-05-08T00:37:47.692423630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:37:47.692522 containerd[1549]: time="2025-05-08T00:37:47.692480541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:37:47.692564 containerd[1549]: time="2025-05-08T00:37:47.692495500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:37:47.692604 containerd[1549]: time="2025-05-08T00:37:47.692553134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:37:47.696391 containerd[1549]: time="2025-05-08T00:37:47.696239231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:37:47.696391 containerd[1549]: time="2025-05-08T00:37:47.696295620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:37:47.696391 containerd[1549]: time="2025-05-08T00:37:47.696303200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:37:47.696391 containerd[1549]: time="2025-05-08T00:37:47.696344534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:37:47.713080 systemd[1]: Started cri-containerd-302fe69e148a79d209337b715ce6a9c314cadf61d4623cc7c533c5173d290aa4.scope - libcontainer container 302fe69e148a79d209337b715ce6a9c314cadf61d4623cc7c533c5173d290aa4.
May 8 00:37:47.718350 systemd[1]: Started cri-containerd-383081911cf909b0c7a9ac2cff9fdc6d7519f3c8787946a2f5119e7cb6038c65.scope - libcontainer container 383081911cf909b0c7a9ac2cff9fdc6d7519f3c8787946a2f5119e7cb6038c65.
May 8 00:37:47.720025 systemd[1]: Started cri-containerd-7f506756e2db933c4d8152cd5e466a4ad9655645fc7321a4eab272c19b6eb392.scope - libcontainer container 7f506756e2db933c4d8152cd5e466a4ad9655645fc7321a4eab272c19b6eb392.
May 8 00:37:47.762191 containerd[1549]: time="2025-05-08T00:37:47.761809272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"302fe69e148a79d209337b715ce6a9c314cadf61d4623cc7c533c5173d290aa4\""
May 8 00:37:47.765984 containerd[1549]: time="2025-05-08T00:37:47.765960191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0a6a17d93d74faf34cb814d0f8f15050,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f506756e2db933c4d8152cd5e466a4ad9655645fc7321a4eab272c19b6eb392\""
May 8 00:37:47.769628 containerd[1549]: time="2025-05-08T00:37:47.769565707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"383081911cf909b0c7a9ac2cff9fdc6d7519f3c8787946a2f5119e7cb6038c65\""
May 8 00:37:47.773831 containerd[1549]: time="2025-05-08T00:37:47.773714077Z" level=info msg="CreateContainer within sandbox \"383081911cf909b0c7a9ac2cff9fdc6d7519f3c8787946a2f5119e7cb6038c65\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 8 00:37:47.773985 containerd[1549]: time="2025-05-08T00:37:47.773974005Z" level=info msg="CreateContainer within sandbox \"7f506756e2db933c4d8152cd5e466a4ad9655645fc7321a4eab272c19b6eb392\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 8 00:37:47.774187 containerd[1549]: time="2025-05-08T00:37:47.774176147Z" level=info msg="CreateContainer within sandbox \"302fe69e148a79d209337b715ce6a9c314cadf61d4623cc7c533c5173d290aa4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 8 00:37:47.847837 kubelet[2319]: E0508 00:37:47.847801 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="1.6s"
May 8 00:37:47.959050 kubelet[2319]: W0508 00:37:47.958957 2319 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused
May 8 00:37:47.959050 kubelet[2319]: E0508 00:37:47.959013 2319 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError"
May 8 00:37:48.004225 kubelet[2319]: I0508 00:37:48.004199 2319 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 8 00:37:48.004485 kubelet[2319]: E0508 00:37:48.004450 2319 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost"
May 8 00:37:48.011162 containerd[1549]: time="2025-05-08T00:37:48.011138922Z" level=info msg="CreateContainer within sandbox \"383081911cf909b0c7a9ac2cff9fdc6d7519f3c8787946a2f5119e7cb6038c65\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d077b511541dc9e421813df603103d8199a0677f307f225538de58788d8453e0\""
May 8 00:37:48.011760 containerd[1549]: time="2025-05-08T00:37:48.011630605Z" level=info msg="CreateContainer within sandbox \"302fe69e148a79d209337b715ce6a9c314cadf61d4623cc7c533c5173d290aa4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b40fa75e13306268cbf24b8b73e70f80947ba46924711ee553cf547250466a97\""
May 8 00:37:48.012592 containerd[1549]: time="2025-05-08T00:37:48.011995754Z" level=info msg="StartContainer for \"b40fa75e13306268cbf24b8b73e70f80947ba46924711ee553cf547250466a97\""
May 8 00:37:48.013066 containerd[1549]: time="2025-05-08T00:37:48.013054853Z" level=info msg="StartContainer for \"d077b511541dc9e421813df603103d8199a0677f307f225538de58788d8453e0\""
May 8 00:37:48.015093 containerd[1549]: time="2025-05-08T00:37:48.015028885Z" level=info msg="CreateContainer within sandbox \"7f506756e2db933c4d8152cd5e466a4ad9655645fc7321a4eab272c19b6eb392\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bf16ea74fbb65075f19268cd4be7de053cf112def122ba96b31086dab44a3aad\""
May 8 00:37:48.015963 containerd[1549]: time="2025-05-08T00:37:48.015367244Z" level=info msg="StartContainer for \"bf16ea74fbb65075f19268cd4be7de053cf112def122ba96b31086dab44a3aad\""
May 8 00:37:48.018524 kubelet[2319]: W0508 00:37:48.018493 2319 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused
May 8 00:37:48.018605 kubelet[2319]: E0508 00:37:48.018596 2319 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError"
May 8 00:37:48.038031 systemd[1]: Started cri-containerd-b40fa75e13306268cbf24b8b73e70f80947ba46924711ee553cf547250466a97.scope - libcontainer container b40fa75e13306268cbf24b8b73e70f80947ba46924711ee553cf547250466a97.
May 8 00:37:48.038875 systemd[1]: Started cri-containerd-d077b511541dc9e421813df603103d8199a0677f307f225538de58788d8453e0.scope - libcontainer container d077b511541dc9e421813df603103d8199a0677f307f225538de58788d8453e0.
May 8 00:37:48.044160 systemd[1]: Started cri-containerd-bf16ea74fbb65075f19268cd4be7de053cf112def122ba96b31086dab44a3aad.scope - libcontainer container bf16ea74fbb65075f19268cd4be7de053cf112def122ba96b31086dab44a3aad.
May 8 00:37:48.079173 containerd[1549]: time="2025-05-08T00:37:48.079149373Z" level=info msg="StartContainer for \"b40fa75e13306268cbf24b8b73e70f80947ba46924711ee553cf547250466a97\" returns successfully"
May 8 00:37:48.099416 containerd[1549]: time="2025-05-08T00:37:48.099394275Z" level=info msg="StartContainer for \"d077b511541dc9e421813df603103d8199a0677f307f225538de58788d8453e0\" returns successfully"
May 8 00:37:48.102066 containerd[1549]: time="2025-05-08T00:37:48.102046166Z" level=info msg="StartContainer for \"bf16ea74fbb65075f19268cd4be7de053cf112def122ba96b31086dab44a3aad\" returns successfully"
May 8 00:37:49.451053 kubelet[2319]: E0508 00:37:49.451011 2319 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 8 00:37:49.605953 kubelet[2319]: I0508 00:37:49.605855 2319 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 8 00:37:49.613717 kubelet[2319]: I0508 00:37:49.613692 2319 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 8 00:37:49.613717 kubelet[2319]: E0508 00:37:49.613718 2319 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 8 00:37:50.432416 kubelet[2319]: I0508 00:37:50.432371 2319 apiserver.go:52] "Watching apiserver"
May 8 00:37:50.446458 kubelet[2319]: I0508 00:37:50.446342 2319 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 8 00:37:51.083689 systemd[1]: Reloading requested from client PID 2606 ('systemctl') (unit session-9.scope)...
May 8 00:37:51.083906 systemd[1]: Reloading...
May 8 00:37:51.141975 zram_generator::config[2651]: No configuration found.
May 8 00:37:51.200619 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
May 8 00:37:51.216237 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:37:51.268979 systemd[1]: Reloading finished in 184 ms.
May 8 00:37:51.296136 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:37:51.309622 systemd[1]: kubelet.service: Deactivated successfully.
May 8 00:37:51.309771 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:37:51.314085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:37:52.017892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:37:52.021569 (kubelet)[2711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 8 00:37:52.132289 kubelet[2711]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:37:52.132506 kubelet[2711]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 8 00:37:52.132506 kubelet[2711]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:37:52.149831 kubelet[2711]: I0508 00:37:52.149277 2711 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 8 00:37:52.160414 kubelet[2711]: I0508 00:37:52.160391 2711 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 8 00:37:52.160414 kubelet[2711]: I0508 00:37:52.160409 2711 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 8 00:37:52.160586 kubelet[2711]: I0508 00:37:52.160575 2711 server.go:929] "Client rotation is on, will bootstrap in background"
May 8 00:37:52.162450 kubelet[2711]: I0508 00:37:52.161682 2711 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 8 00:37:52.184003 kubelet[2711]: I0508 00:37:52.183868 2711 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 8 00:37:52.206857 kubelet[2711]: E0508 00:37:52.206609 2711 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 8 00:37:52.207178 kubelet[2711]: I0508 00:37:52.206967 2711 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 8 00:37:52.209858 kubelet[2711]: I0508 00:37:52.209829 2711 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 8 00:37:52.216253 kubelet[2711]: I0508 00:37:52.216192 2711 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 8 00:37:52.216604 kubelet[2711]: I0508 00:37:52.216473 2711 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 8 00:37:52.216898 kubelet[2711]: I0508 00:37:52.216670 2711 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 8 00:37:52.216898 kubelet[2711]: I0508 00:37:52.216869 2711 topology_manager.go:138] "Creating topology manager with none policy"
May 8 00:37:52.216898 kubelet[2711]: I0508 00:37:52.216881 2711 container_manager_linux.go:300] "Creating device plugin manager"
May 8 00:37:52.217369 kubelet[2711]: I0508 00:37:52.217131 2711 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:37:52.220864 kubelet[2711]: I0508 00:37:52.220693 2711 kubelet.go:408] "Attempting to sync node with API server"
May 8 00:37:52.220864 kubelet[2711]: I0508 00:37:52.220824 2711 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 8 00:37:52.223902 kubelet[2711]: I0508 00:37:52.223273 2711 kubelet.go:314] "Adding apiserver pod source"
May 8 00:37:52.223902 kubelet[2711]: I0508 00:37:52.223309 2711 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 8 00:37:52.226953 kubelet[2711]: I0508 00:37:52.225481 2711 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 8 00:37:52.226953 kubelet[2711]: I0508 00:37:52.225895 2711 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 8 00:37:52.228284 kubelet[2711]: I0508 00:37:52.227958 2711 server.go:1269] "Started kubelet"
May 8 00:37:52.228284 kubelet[2711]: I0508 00:37:52.228012 2711 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 8 00:37:52.229962 kubelet[2711]: I0508 00:37:52.229611 2711 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 8 00:37:52.239260 kubelet[2711]: I0508 00:37:52.239237 2711 server.go:460] "Adding debug handlers to kubelet server"
May 8 00:37:52.239836 kubelet[2711]: I0508 00:37:52.239710 2711 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 8 00:37:52.240116 kubelet[2711]: I0508 00:37:52.240108 2711 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 8 00:37:52.248756 kubelet[2711]: I0508 00:37:52.248731 2711 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 8 00:37:52.258933 kubelet[2711]: I0508 00:37:52.257141 2711 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 8 00:37:52.258933 kubelet[2711]: E0508 00:37:52.257319 2711 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 8 00:37:52.263462 kubelet[2711]: I0508 00:37:52.263442 2711 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 8 00:37:52.264042 kubelet[2711]: I0508 00:37:52.264035 2711 reconciler.go:26] "Reconciler: start to sync state"
May 8 00:37:52.271132 kubelet[2711]: E0508 00:37:52.271065 2711 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 8 00:37:52.272002 kubelet[2711]: I0508 00:37:52.271835 2711 factory.go:221] Registration of the containerd container factory successfully
May 8 00:37:52.272002 kubelet[2711]: I0508 00:37:52.271850 2711 factory.go:221] Registration of the systemd container factory successfully
May 8 00:37:52.272002 kubelet[2711]: I0508 00:37:52.271925 2711 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 8 00:37:52.274094 kubelet[2711]: I0508 00:37:52.274072 2711 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 8 00:37:52.274901 kubelet[2711]: I0508 00:37:52.274891 2711 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 8 00:37:52.275003 kubelet[2711]: I0508 00:37:52.274996 2711 status_manager.go:217] "Starting to sync pod status with apiserver"
May 8 00:37:52.275067 kubelet[2711]: I0508 00:37:52.275061 2711 kubelet.go:2321] "Starting kubelet main sync loop"
May 8 00:37:52.275128 kubelet[2711]: E0508 00:37:52.275118 2711 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 8 00:37:52.328474 kubelet[2711]: I0508 00:37:52.328449 2711 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 8 00:37:52.328474 kubelet[2711]: I0508 00:37:52.328463 2711 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 8 00:37:52.328474 kubelet[2711]: I0508 00:37:52.328481 2711 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:37:52.328643 kubelet[2711]: I0508 00:37:52.328633 2711 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 8 00:37:52.328673 kubelet[2711]: I0508 00:37:52.328641 2711 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 8 00:37:52.328673 kubelet[2711]: I0508 00:37:52.328656 2711 policy_none.go:49] "None policy: Start"
May 8 00:37:52.330946 kubelet[2711]: I0508 00:37:52.330548 2711 memory_manager.go:170] "Starting memorymanager" policy="None"
May 8 00:37:52.330946 kubelet[2711]: I0508 00:37:52.330572 2711 state_mem.go:35] "Initializing new in-memory state store"
May 8 00:37:52.334754 kubelet[2711]: I0508 00:37:52.334732 2711 state_mem.go:75] "Updated machine memory state"
May 8 00:37:52.338141 kubelet[2711]: I0508 00:37:52.338127 2711 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 8 00:37:52.338359 kubelet[2711]: I0508 00:37:52.338322 2711 eviction_manager.go:189] "Eviction manager: starting control loop"
May 8 00:37:52.339109 kubelet[2711]: I0508 00:37:52.339080 2711 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 8 00:37:52.339315 kubelet[2711]: I0508 00:37:52.339242 2711 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 8 00:37:52.386653 kubelet[2711]: E0508 00:37:52.386575 2711 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 8 00:37:52.387159 kubelet[2711]: E0508 00:37:52.387136 2711 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 8 00:37:52.440306 kubelet[2711]: I0508 00:37:52.440293 2711 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 8 00:37:52.446579 kubelet[2711]: I0508 00:37:52.445689 2711 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
May 8 00:37:52.446579 kubelet[2711]: I0508 00:37:52.445750 2711 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 8 00:37:52.587305 kubelet[2711]: I0508 00:37:52.587115 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:37:52.587305 kubelet[2711]: I0508 00:37:52.587143 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a6a17d93d74faf34cb814d0f8f15050-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0a6a17d93d74faf34cb814d0f8f15050\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:37:52.587305 kubelet[2711]: I0508 00:37:52.587159 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:37:52.587305 kubelet[2711]: I0508 00:37:52.587168 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:37:52.587305 kubelet[2711]: I0508 00:37:52.587181 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:37:52.587459 kubelet[2711]: I0508 00:37:52.587190 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:37:52.587459 kubelet[2711]: I0508 00:37:52.587198 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 8 00:37:52.587459 kubelet[2711]: I0508 00:37:52.587206 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a6a17d93d74faf34cb814d0f8f15050-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a6a17d93d74faf34cb814d0f8f15050\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:37:52.587459 kubelet[2711]: I0508 00:37:52.587215 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a6a17d93d74faf34cb814d0f8f15050-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a6a17d93d74faf34cb814d0f8f15050\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:37:53.225178 kubelet[2711]: I0508 00:37:53.225153 2711 apiserver.go:52] "Watching apiserver"
May 8 00:37:53.264023 kubelet[2711]: I0508 00:37:53.263996 2711 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 8 00:37:53.377623 kubelet[2711]: I0508 00:37:53.377502 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.377488491 podStartE2EDuration="3.377488491s" podCreationTimestamp="2025-05-08 00:37:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:37:53.339201055 +0000 UTC m=+1.302109145" watchObservedRunningTime="2025-05-08 00:37:53.377488491 +0000 UTC m=+1.340396580"
May 8 00:37:53.412907 kubelet[2711]: I0508 00:37:53.412730 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.412718522 podStartE2EDuration="1.412718522s" podCreationTimestamp="2025-05-08 00:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:37:53.37815389 +0000 UTC m=+1.341061988" watchObservedRunningTime="2025-05-08 00:37:53.412718522 +0000 UTC m=+1.375626614"
May 8 00:37:53.432937 kubelet[2711]: I0508 00:37:53.432822 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.432814144 podStartE2EDuration="3.432814144s" podCreationTimestamp="2025-05-08 00:37:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:37:53.412921753 +0000 UTC m=+1.375829841" watchObservedRunningTime="2025-05-08 00:37:53.432814144 +0000 UTC m=+1.395722241"
May 8 00:37:56.359279 sudo[1837]: pam_unix(sudo:session): session closed for user root
May 8 00:37:56.361638 sshd[1834]: pam_unix(sshd:session): session closed for user core
May 8 00:37:56.365068 systemd[1]: sshd@6-139.178.70.103:22-139.178.68.195:53992.service: Deactivated successfully.
May 8 00:37:56.366481 systemd[1]: session-9.scope: Deactivated successfully.
May 8 00:37:56.366671 systemd[1]: session-9.scope: Consumed 2.655s CPU time, 141.3M memory peak, 0B memory swap peak.
May 8 00:37:56.367908 systemd-logind[1523]: Session 9 logged out. Waiting for processes to exit.
May 8 00:37:56.368855 systemd-logind[1523]: Removed session 9.
May 8 00:37:56.651045 kubelet[2711]: I0508 00:37:56.650770 2711 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 8 00:37:56.651363 containerd[1549]: time="2025-05-08T00:37:56.651254028Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 8 00:37:56.651536 kubelet[2711]: I0508 00:37:56.651381 2711 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 8 00:37:56.840771 systemd[1]: Created slice kubepods-besteffort-pod7734051b_4dac_4979_83ab_6b7eab136ac1.slice - libcontainer container kubepods-besteffort-pod7734051b_4dac_4979_83ab_6b7eab136ac1.slice.
May 8 00:37:56.914885 kubelet[2711]: I0508 00:37:56.914762 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7734051b-4dac-4979-83ab-6b7eab136ac1-lib-modules\") pod \"kube-proxy-8x4xn\" (UID: \"7734051b-4dac-4979-83ab-6b7eab136ac1\") " pod="kube-system/kube-proxy-8x4xn"
May 8 00:37:56.914885 kubelet[2711]: I0508 00:37:56.914802 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7734051b-4dac-4979-83ab-6b7eab136ac1-kube-proxy\") pod \"kube-proxy-8x4xn\" (UID: \"7734051b-4dac-4979-83ab-6b7eab136ac1\") " pod="kube-system/kube-proxy-8x4xn"
May 8 00:37:56.914885 kubelet[2711]: I0508 00:37:56.914814 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7734051b-4dac-4979-83ab-6b7eab136ac1-xtables-lock\") pod \"kube-proxy-8x4xn\" (UID: \"7734051b-4dac-4979-83ab-6b7eab136ac1\") " pod="kube-system/kube-proxy-8x4xn"
May 8 00:37:56.914885 kubelet[2711]: I0508 00:37:56.914834 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzb4l\" (UniqueName: \"kubernetes.io/projected/7734051b-4dac-4979-83ab-6b7eab136ac1-kube-api-access-rzb4l\") pod \"kube-proxy-8x4xn\" (UID: \"7734051b-4dac-4979-83ab-6b7eab136ac1\") " pod="kube-system/kube-proxy-8x4xn"
May 8 00:37:57.020230 kubelet[2711]: E0508 00:37:57.020198 2711 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 8 00:37:57.020230 kubelet[2711]: E0508 00:37:57.020219 2711 projected.go:194] Error preparing data for projected volume kube-api-access-rzb4l for pod kube-system/kube-proxy-8x4xn: configmap "kube-root-ca.crt" not found
May 8 00:37:57.020344 kubelet[2711]: E0508 00:37:57.020285 2711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7734051b-4dac-4979-83ab-6b7eab136ac1-kube-api-access-rzb4l podName:7734051b-4dac-4979-83ab-6b7eab136ac1 nodeName:}" failed. No retries permitted until 2025-05-08 00:37:57.520265069 +0000 UTC m=+5.483173160 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rzb4l" (UniqueName: "kubernetes.io/projected/7734051b-4dac-4979-83ab-6b7eab136ac1-kube-api-access-rzb4l") pod "kube-proxy-8x4xn" (UID: "7734051b-4dac-4979-83ab-6b7eab136ac1") : configmap "kube-root-ca.crt" not found
May 8 00:37:57.747817 containerd[1549]: time="2025-05-08T00:37:57.747788783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8x4xn,Uid:7734051b-4dac-4979-83ab-6b7eab136ac1,Namespace:kube-system,Attempt:0,}"
May 8 00:37:57.840442 containerd[1549]: time="2025-05-08T00:37:57.840160821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:37:57.840442 containerd[1549]: time="2025-05-08T00:37:57.840244398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:37:57.840442 containerd[1549]: time="2025-05-08T00:37:57.840272987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:37:57.840683 containerd[1549]: time="2025-05-08T00:37:57.840398951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:37:57.855015 systemd[1]: Started cri-containerd-b516efaf2cea43ac8f36f7ad8bbe6ca735820eaa0439f516958cbc821c072b80.scope - libcontainer container b516efaf2cea43ac8f36f7ad8bbe6ca735820eaa0439f516958cbc821c072b80.
May 8 00:37:57.869328 containerd[1549]: time="2025-05-08T00:37:57.869300683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8x4xn,Uid:7734051b-4dac-4979-83ab-6b7eab136ac1,Namespace:kube-system,Attempt:0,} returns sandbox id \"b516efaf2cea43ac8f36f7ad8bbe6ca735820eaa0439f516958cbc821c072b80\""
May 8 00:37:57.871630 containerd[1549]: time="2025-05-08T00:37:57.871587939Z" level=info msg="CreateContainer within sandbox \"b516efaf2cea43ac8f36f7ad8bbe6ca735820eaa0439f516958cbc821c072b80\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 8 00:37:57.886034 systemd[1]: Created slice kubepods-besteffort-podbd334089_e493_4f42_976e_a29afc8b1fee.slice - libcontainer container kubepods-besteffort-podbd334089_e493_4f42_976e_a29afc8b1fee.slice.
May 8 00:37:57.920062 kubelet[2711]: I0508 00:37:57.919992 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bd334089-e493-4f42-976e-a29afc8b1fee-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-npvq5\" (UID: \"bd334089-e493-4f42-976e-a29afc8b1fee\") " pod="tigera-operator/tigera-operator-6f6897fdc5-npvq5"
May 8 00:37:57.920062 kubelet[2711]: I0508 00:37:57.920026 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cj7r\" (UniqueName: \"kubernetes.io/projected/bd334089-e493-4f42-976e-a29afc8b1fee-kube-api-access-6cj7r\") pod \"tigera-operator-6f6897fdc5-npvq5\" (UID: \"bd334089-e493-4f42-976e-a29afc8b1fee\") " pod="tigera-operator/tigera-operator-6f6897fdc5-npvq5"
May 8 00:37:57.967515 containerd[1549]: time="2025-05-08T00:37:57.967466244Z" level=info msg="CreateContainer within sandbox \"b516efaf2cea43ac8f36f7ad8bbe6ca735820eaa0439f516958cbc821c072b80\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9cefc89a51079809f8c71e9e6d5d07a8ee9735c3e5242568c3ec3bac43c3ff3c\""
May 8 00:37:57.968942 containerd[1549]: time="2025-05-08T00:37:57.968042917Z" level=info msg="StartContainer for \"9cefc89a51079809f8c71e9e6d5d07a8ee9735c3e5242568c3ec3bac43c3ff3c\""
May 8 00:37:57.992146 systemd[1]: Started cri-containerd-9cefc89a51079809f8c71e9e6d5d07a8ee9735c3e5242568c3ec3bac43c3ff3c.scope - libcontainer container 9cefc89a51079809f8c71e9e6d5d07a8ee9735c3e5242568c3ec3bac43c3ff3c.
May 8 00:37:58.014425 containerd[1549]: time="2025-05-08T00:37:58.014276275Z" level=info msg="StartContainer for \"9cefc89a51079809f8c71e9e6d5d07a8ee9735c3e5242568c3ec3bac43c3ff3c\" returns successfully"
May 8 00:37:58.187811 containerd[1549]: time="2025-05-08T00:37:58.187782905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-npvq5,Uid:bd334089-e493-4f42-976e-a29afc8b1fee,Namespace:tigera-operator,Attempt:0,}"
May 8 00:37:58.309169 containerd[1549]: time="2025-05-08T00:37:58.309052496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:37:58.309250 containerd[1549]: time="2025-05-08T00:37:58.309104752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:37:58.309250 containerd[1549]: time="2025-05-08T00:37:58.309116672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:37:58.309886 containerd[1549]: time="2025-05-08T00:37:58.309845997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:37:58.328942 systemd[1]: Started cri-containerd-15accaceae510ddf626d9245ce93ebf7ee3fa3fa29fad15294da7dc99e11c638.scope - libcontainer container 15accaceae510ddf626d9245ce93ebf7ee3fa3fa29fad15294da7dc99e11c638.
May 8 00:37:58.365926 containerd[1549]: time="2025-05-08T00:37:58.365896270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-npvq5,Uid:bd334089-e493-4f42-976e-a29afc8b1fee,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"15accaceae510ddf626d9245ce93ebf7ee3fa3fa29fad15294da7dc99e11c638\""
May 8 00:37:58.368780 containerd[1549]: time="2025-05-08T00:37:58.368751833Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 8 00:37:58.623845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2003326745.mount: Deactivated successfully.
May 8 00:37:58.842819 kubelet[2711]: I0508 00:37:58.842772 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8x4xn" podStartSLOduration=2.842756699 podStartE2EDuration="2.842756699s" podCreationTimestamp="2025-05-08 00:37:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:37:58.373025012 +0000 UTC m=+6.335933109" watchObservedRunningTime="2025-05-08 00:37:58.842756699 +0000 UTC m=+6.805664797"
May 8 00:38:00.776550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3310708690.mount: Deactivated successfully.
May 8 00:38:01.319062 containerd[1549]: time="2025-05-08T00:38:01.319017242Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:38:01.326036 containerd[1549]: time="2025-05-08T00:38:01.325996803Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
May 8 00:38:01.333091 containerd[1549]: time="2025-05-08T00:38:01.333037818Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:38:01.338187 containerd[1549]: time="2025-05-08T00:38:01.338156827Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:38:01.338812 containerd[1549]: time="2025-05-08T00:38:01.338522253Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.96965112s"
May 8 00:38:01.338812 containerd[1549]: time="2025-05-08T00:38:01.338541478Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
May 8 00:38:01.340064 containerd[1549]: time="2025-05-08T00:38:01.340051084Z" level=info msg="CreateContainer within sandbox \"15accaceae510ddf626d9245ce93ebf7ee3fa3fa29fad15294da7dc99e11c638\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 8 00:38:01.369444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2690187059.mount: Deactivated successfully.
May 8 00:38:01.383281 containerd[1549]: time="2025-05-08T00:38:01.383176304Z" level=info msg="CreateContainer within sandbox \"15accaceae510ddf626d9245ce93ebf7ee3fa3fa29fad15294da7dc99e11c638\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f8aab8a9cd06249ab3db6e784d2fdbc7a8c2f55ac6a3161b576ab592c65ac441\""
May 8 00:38:01.384001 containerd[1549]: time="2025-05-08T00:38:01.383557827Z" level=info msg="StartContainer for \"f8aab8a9cd06249ab3db6e784d2fdbc7a8c2f55ac6a3161b576ab592c65ac441\""
May 8 00:38:01.405101 systemd[1]: Started cri-containerd-f8aab8a9cd06249ab3db6e784d2fdbc7a8c2f55ac6a3161b576ab592c65ac441.scope - libcontainer container f8aab8a9cd06249ab3db6e784d2fdbc7a8c2f55ac6a3161b576ab592c65ac441.
May 8 00:38:01.423806 containerd[1549]: time="2025-05-08T00:38:01.423770269Z" level=info msg="StartContainer for \"f8aab8a9cd06249ab3db6e784d2fdbc7a8c2f55ac6a3161b576ab592c65ac441\" returns successfully"
May 8 00:38:05.096319 kubelet[2711]: I0508 00:38:05.096170 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-npvq5" podStartSLOduration=5.124372503 podStartE2EDuration="8.09615881s" podCreationTimestamp="2025-05-08 00:37:57 +0000 UTC" firstStartedPulling="2025-05-08 00:37:58.367228082 +0000 UTC m=+6.330136174" lastFinishedPulling="2025-05-08 00:38:01.339014393 +0000 UTC m=+9.301922481" observedRunningTime="2025-05-08 00:38:02.353938943 +0000 UTC m=+10.316847040" watchObservedRunningTime="2025-05-08 00:38:05.09615881 +0000 UTC m=+13.059066907"
May 8 00:38:05.864121 systemd[1]: Created slice kubepods-besteffort-podd4b4cdff_0602_4ca5_839f_31fe2409aace.slice - libcontainer container kubepods-besteffort-podd4b4cdff_0602_4ca5_839f_31fe2409aace.slice.
May 8 00:38:05.926510 kubelet[2711]: I0508 00:38:05.926475 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4b4cdff-0602-4ca5-839f-31fe2409aace-tigera-ca-bundle\") pod \"calico-typha-7d74dcc767-d2llh\" (UID: \"d4b4cdff-0602-4ca5-839f-31fe2409aace\") " pod="calico-system/calico-typha-7d74dcc767-d2llh"
May 8 00:38:05.926613 kubelet[2711]: I0508 00:38:05.926523 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d4b4cdff-0602-4ca5-839f-31fe2409aace-typha-certs\") pod \"calico-typha-7d74dcc767-d2llh\" (UID: \"d4b4cdff-0602-4ca5-839f-31fe2409aace\") " pod="calico-system/calico-typha-7d74dcc767-d2llh"
May 8 00:38:05.926613 kubelet[2711]: I0508 00:38:05.926551 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9twlc\" (UniqueName: \"kubernetes.io/projected/d4b4cdff-0602-4ca5-839f-31fe2409aace-kube-api-access-9twlc\") pod \"calico-typha-7d74dcc767-d2llh\" (UID: \"d4b4cdff-0602-4ca5-839f-31fe2409aace\") " pod="calico-system/calico-typha-7d74dcc767-d2llh"
May 8 00:38:05.947928 systemd[1]: Created slice kubepods-besteffort-pod27e14b8e_f054_4cc5_a94f_78053ac3ed18.slice - libcontainer container kubepods-besteffort-pod27e14b8e_f054_4cc5_a94f_78053ac3ed18.slice.
May 8 00:38:06.018320 kubelet[2711]: E0508 00:38:06.018043 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvqtb" podUID="06105806-56a1-4100-9953-11ff7427bd13" May 8 00:38:06.028336 kubelet[2711]: I0508 00:38:06.027013 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/27e14b8e-f054-4cc5-a94f-78053ac3ed18-node-certs\") pod \"calico-node-9kk99\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " pod="calico-system/calico-node-9kk99" May 8 00:38:06.028336 kubelet[2711]: I0508 00:38:06.027046 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvlxp\" (UniqueName: \"kubernetes.io/projected/27e14b8e-f054-4cc5-a94f-78053ac3ed18-kube-api-access-dvlxp\") pod \"calico-node-9kk99\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " pod="calico-system/calico-node-9kk99" May 8 00:38:06.028336 kubelet[2711]: I0508 00:38:06.027063 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-lib-modules\") pod \"calico-node-9kk99\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " pod="calico-system/calico-node-9kk99" May 8 00:38:06.028336 kubelet[2711]: I0508 00:38:06.027071 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-policysync\") pod \"calico-node-9kk99\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " pod="calico-system/calico-node-9kk99" May 8 00:38:06.028336 kubelet[2711]: I0508 00:38:06.027080 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27e14b8e-f054-4cc5-a94f-78053ac3ed18-tigera-ca-bundle\") pod \"calico-node-9kk99\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " pod="calico-system/calico-node-9kk99" May 8 00:38:06.028548 kubelet[2711]: I0508 00:38:06.027088 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-cni-log-dir\") pod \"calico-node-9kk99\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " pod="calico-system/calico-node-9kk99" May 8 00:38:06.028548 kubelet[2711]: I0508 00:38:06.027097 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-flexvol-driver-host\") pod \"calico-node-9kk99\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " pod="calico-system/calico-node-9kk99" May 8 00:38:06.028548 kubelet[2711]: I0508 00:38:06.027106 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-cni-bin-dir\") pod \"calico-node-9kk99\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " pod="calico-system/calico-node-9kk99" May 8 00:38:06.028548 kubelet[2711]: I0508 00:38:06.027127 2711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-cni-net-dir\") pod \"calico-node-9kk99\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " pod="calico-system/calico-node-9kk99" May 8 00:38:06.028548 kubelet[2711]: I0508 00:38:06.027139 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-xtables-lock\") pod \"calico-node-9kk99\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " pod="calico-system/calico-node-9kk99" May 8 00:38:06.028667 kubelet[2711]: I0508 00:38:06.027147 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-var-run-calico\") pod \"calico-node-9kk99\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " pod="calico-system/calico-node-9kk99" May 8 00:38:06.028667 kubelet[2711]: I0508 00:38:06.027164 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-var-lib-calico\") pod \"calico-node-9kk99\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " pod="calico-system/calico-node-9kk99" May 8 00:38:06.128069 kubelet[2711]: I0508 00:38:06.127989 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06105806-56a1-4100-9953-11ff7427bd13-kubelet-dir\") pod \"csi-node-driver-gvqtb\" (UID: \"06105806-56a1-4100-9953-11ff7427bd13\") " pod="calico-system/csi-node-driver-gvqtb" May 8 00:38:06.128069 kubelet[2711]: I0508 00:38:06.128018 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/06105806-56a1-4100-9953-11ff7427bd13-socket-dir\") pod \"csi-node-driver-gvqtb\" (UID: \"06105806-56a1-4100-9953-11ff7427bd13\") " pod="calico-system/csi-node-driver-gvqtb" May 8 00:38:06.128069 kubelet[2711]: I0508 00:38:06.128034 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/06105806-56a1-4100-9953-11ff7427bd13-registration-dir\") pod \"csi-node-driver-gvqtb\" (UID: \"06105806-56a1-4100-9953-11ff7427bd13\") " pod="calico-system/csi-node-driver-gvqtb" May 8 00:38:06.128331 kubelet[2711]: I0508 00:38:06.128092 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/06105806-56a1-4100-9953-11ff7427bd13-varrun\") pod \"csi-node-driver-gvqtb\" (UID: \"06105806-56a1-4100-9953-11ff7427bd13\") " pod="calico-system/csi-node-driver-gvqtb" May 8 00:38:06.128331 kubelet[2711]: I0508 00:38:06.128103 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbtp2\" (UniqueName: \"kubernetes.io/projected/06105806-56a1-4100-9953-11ff7427bd13-kube-api-access-sbtp2\") pod \"csi-node-driver-gvqtb\" (UID: \"06105806-56a1-4100-9953-11ff7427bd13\") " pod="calico-system/csi-node-driver-gvqtb" May 8 00:38:06.178482 kubelet[2711]: E0508 00:38:06.178455 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
May 8 00:38:06.178482 kubelet[2711]: W0508 00:38:06.178480 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:38:06.178616 kubelet[2711]: E0508 00:38:06.178507 2711 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same driver-call.go:262 / driver-call.go:149 / plugins.go:691 triplet repeats more than twenty times between 00:38:06.228 and 00:38:06.235, identical except for timestamps, as the kubelet re-probes the missing driver]
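These triplets are the kubelet's FlexVolume prober at work: on every rescan of /opt/libexec/kubernetes/kubelet-plugins/volume/exec it finds the nodeagent~uds directory, tries to exec the uds driver with the init argument, and expects a JSON status on stdout. The binary has not been installed yet, so the exec fails ("executable file not found in $PATH"), stdout stays empty, and decoding the empty output yields "unexpected end of JSON input", which is why every probe logs both a driver-call failure and an unmarshal failure. A minimal sketch reproducing that failure pair, using only the path from the log:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
        out, err := exec.Command(driver, "init").Output() // fails: binary not installed yet
        fmt.Println("exec error:", err)
        var status map[string]interface{}
        // With out empty, this reports "unexpected end of JSON input",
        // matching the driver-call.go:262 lines above.
        fmt.Println("unmarshal error:", json.Unmarshal(out, &status))
    }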
May 8 00:38:06.241751 kubelet[2711]: E0508 00:38:06.241702 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:38:06.241751 kubelet[2711]: W0508 00:38:06.241714 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:38:06.241751 kubelet[2711]: E0508 00:38:06.241729 2711 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:38:06.261820 containerd[1549]: time="2025-05-08T00:38:06.261652984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9kk99,Uid:27e14b8e-f054-4cc5-a94f-78053ac3ed18,Namespace:calico-system,Attempt:0,}"
May 8 00:38:06.261820 containerd[1549]: time="2025-05-08T00:38:06.261693515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d74dcc767-d2llh,Uid:d4b4cdff-0602-4ca5-839f-31fe2409aace,Namespace:calico-system,Attempt:0,}"
May 8 00:38:06.429008 containerd[1549]: time="2025-05-08T00:38:06.428869526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:38:06.429115 containerd[1549]: time="2025-05-08T00:38:06.428918405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:38:06.429115 containerd[1549]: time="2025-05-08T00:38:06.428943526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:38:06.429115 containerd[1549]: time="2025-05-08T00:38:06.428992285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:38:06.447014 systemd[1]: Started cri-containerd-f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161.scope - libcontainer container f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161.
May 8 00:38:06.454139 containerd[1549]: time="2025-05-08T00:38:06.454004527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:38:06.454139 containerd[1549]: time="2025-05-08T00:38:06.454047452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:38:06.454139 containerd[1549]: time="2025-05-08T00:38:06.454061555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
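RunPodSandbox is the CRI call the kubelet issues to containerd to create each pod's sandbox; the "loading plugin io.containerd.*" lines come from the runc v2 shim starting up, and systemd tracks each container in a cri-containerd-<id>.scope unit. A hedged sketch of the same call made directly against containerd's CRI socket (assumes the k8s.io/cri-api and google.golang.org/grpc modules; the socket path and metadata come from this log, everything else is illustrative):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        // Mirrors the PodSandboxMetadata visible in the RunPodSandbox lines.
        resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "calico-node-9kk99",
                    Uid:       "27e14b8e-f054-4cc5-a94f-78053ac3ed18",
                    Namespace: "calico-system",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("sandbox id:", resp.PodSandboxId) // e.g. f8c30ac93b51... above
    }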
May 8 00:38:06.454139 containerd[1549]: time="2025-05-08T00:38:06.454124022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:38:06.470030 systemd[1]: Started cri-containerd-99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363.scope - libcontainer container 99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363.
May 8 00:38:06.501998 containerd[1549]: time="2025-05-08T00:38:06.501964741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9kk99,Uid:27e14b8e-f054-4cc5-a94f-78053ac3ed18,Namespace:calico-system,Attempt:0,} returns sandbox id \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\""
May 8 00:38:06.503426 containerd[1549]: time="2025-05-08T00:38:06.503370953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d74dcc767-d2llh,Uid:d4b4cdff-0602-4ca5-839f-31fe2409aace,Namespace:calico-system,Attempt:0,} returns sandbox id \"99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363\""
May 8 00:38:06.585807 containerd[1549]: time="2025-05-08T00:38:06.585668042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
May 8 00:38:07.275753 kubelet[2711]: E0508 00:38:07.275514 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvqtb" podUID="06105806-56a1-4100-9953-11ff7427bd13"
May 8 00:38:08.716171 containerd[1549]: time="2025-05-08T00:38:08.716119177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:38:08.729926 containerd[1549]: time="2025-05-08T00:38:08.729743623Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870"
May 8 00:38:08.743207 containerd[1549]: time="2025-05-08T00:38:08.743142603Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:38:08.759761 containerd[1549]: time="2025-05-08T00:38:08.759713324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:38:08.760385 containerd[1549]: time="2025-05-08T00:38:08.760126066Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.174435321s"
May 8 00:38:08.760385 containerd[1549]: time="2025-05-08T00:38:08.760146756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\""
May 8 00:38:08.773432 containerd[1549]: time="2025-05-08T00:38:08.773354320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
May 8 00:38:08.773907 containerd[1549]: time="2025-05-08T00:38:08.773893327Z" level=info msg="CreateContainer within sandbox \"99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
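The typha pull reports 30426870 bytes read in 2.174435321s, roughly 14 MB/s from ghcr.io; the "size \"31919484\"" is the unpacked size containerd records for the image. A small check of that arithmetic with the values from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const bytesRead = 30426870 // "bytes read=30426870" from the stop-pulling line
        d, err := time.ParseDuration("2.174435321s") // from the "Pulled image ... in" line
        if err != nil {
            panic(err)
        }
        mbps := float64(bytesRead) / d.Seconds() / 1e6
        fmt.Printf("~%.1f MB/s\n", mbps) // ~14.0 MB/s
    }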
msg="CreateContainer within sandbox \"99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 8 00:38:08.828070 containerd[1549]: time="2025-05-08T00:38:08.827906594Z" level=info msg="CreateContainer within sandbox \"99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86\"" May 8 00:38:08.828958 containerd[1549]: time="2025-05-08T00:38:08.828562690Z" level=info msg="StartContainer for \"3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86\"" May 8 00:38:08.878038 systemd[1]: Started cri-containerd-3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86.scope - libcontainer container 3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86. May 8 00:38:08.924328 containerd[1549]: time="2025-05-08T00:38:08.924300134Z" level=info msg="StartContainer for \"3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86\" returns successfully" May 8 00:38:09.293383 kubelet[2711]: E0508 00:38:09.293333 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvqtb" podUID="06105806-56a1-4100-9953-11ff7427bd13" May 8 00:38:09.751046 kubelet[2711]: E0508 00:38:09.750939 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:38:09.751046 kubelet[2711]: W0508 00:38:09.750961 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:38:09.751046 kubelet[2711]: E0508 00:38:09.750977 2711 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:38:09.751248 kubelet[2711]: E0508 00:38:09.751240 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:38:09.751295 kubelet[2711]: W0508 00:38:09.751288 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:38:09.751409 kubelet[2711]: E0508 00:38:09.751338 2711 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:38:09.751559 kubelet[2711]: E0508 00:38:09.751485 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:38:09.751559 kubelet[2711]: W0508 00:38:09.751493 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:38:09.751559 kubelet[2711]: E0508 00:38:09.751500 2711 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
[the probe-failure triplet repeats, identical except for timestamps, from 00:38:09.751 through 00:38:09.784 across successive rescans of the plugin directory]
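The directory name encodes the driver identity: nodeagent~uds means vendor nodeagent, driver uds, and the prober looks for an executable named after the driver inside that directory. The bursts at 00:38:06, 00:38:09, and 00:38:10 are successive rescans that will keep failing until such an executable appears. A sketch of that discovery step (the exec dir is the one from the log; the reporting is illustrative):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        const execDir = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec"
        entries, err := os.ReadDir(execDir)
        if err != nil {
            panic(err)
        }
        for _, e := range entries {
            // "<vendor>~<driver>" directory names, e.g. "nodeagent~uds".
            parts := strings.SplitN(e.Name(), "~", 2)
            if len(parts) != 2 {
                continue
            }
            driver := filepath.Join(execDir, e.Name(), parts[1])
            if info, err := os.Stat(driver); err != nil || info.Mode()&0111 == 0 {
                fmt.Println(e.Name(), "present but driver binary missing or not executable")
                continue
            }
            fmt.Println(e.Name(), "driver ready:", driver)
        }
    }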
May 8 00:38:10.555722 containerd[1549]: time="2025-05-08T00:38:10.555355058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:38:10.559148 containerd[1549]: time="2025-05-08T00:38:10.557889118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937"
May 8 00:38:10.570635 containerd[1549]: time="2025-05-08T00:38:10.570601134Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:38:10.572128 containerd[1549]: time="2025-05-08T00:38:10.572103690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:38:10.572661 containerd[1549]: time="2025-05-08T00:38:10.572635893Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.799257042s"
May 8 00:38:10.572738 containerd[1549]: time="2025-05-08T00:38:10.572664193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\""
May 8 00:38:10.574539 containerd[1549]: time="2025-05-08T00:38:10.574508750Z" level=info msg="CreateContainer within sandbox \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 8 00:38:10.635844 containerd[1549]: time="2025-05-08T00:38:10.635811361Z" level=info msg="CreateContainer within sandbox \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"39fa1a0504363ce400fd3cb2db77ba0f1e20ebc2db99b8311e6cf07321e6eaaa\""
May 8 00:38:10.636308 containerd[1549]: time="2025-05-08T00:38:10.636238304Z" level=info msg="StartContainer for \"39fa1a0504363ce400fd3cb2db77ba0f1e20ebc2db99b8311e6cf07321e6eaaa\""
May 8 00:38:10.640474 kubelet[2711]: I0508 00:38:10.640330 2711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 00:38:10.661572 systemd[1]: run-containerd-runc-k8s.io-39fa1a0504363ce400fd3cb2db77ba0f1e20ebc2db99b8311e6cf07321e6eaaa-runc.PFjfwL.mount: Deactivated successfully.
May 8 00:38:10.664066 kubelet[2711]: E0508 00:38:10.663997 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:38:10.664066 kubelet[2711]: W0508 00:38:10.664010 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:38:10.664066 kubelet[2711]: E0508 00:38:10.664025 2711 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
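The flexvol-driver container built from pod2daemon-flexvol exists to end this error loop: it installs Calico's uds driver into /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ on the host, reachable through the flexvol-driver-host hostPath volume verified earlier. A sketch of such an install done atomically, so a rescan never execs a half-written binary (paths are illustrative, not Calico's actual installer):

    package main

    import (
        "io"
        "os"
        "path/filepath"
    )

    // installDriver copies src into dir/uds via a temp file + rename so the
    // kubelet's prober can never exec a partially written binary.
    func installDriver(src, dir string) error {
        if err := os.MkdirAll(dir, 0o755); err != nil {
            return err
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        tmp, err := os.CreateTemp(dir, ".uds-*")
        if err != nil {
            return err
        }
        defer os.Remove(tmp.Name()) // no-op once the rename succeeds
        if _, err := io.Copy(tmp, in); err != nil {
            return err
        }
        if err := tmp.Chmod(0o755); err != nil {
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        return os.Rename(tmp.Name(), filepath.Join(dir, "uds"))
    }

    func main() {
        // Source path is hypothetical; the target dir is the one probed above.
        if err := installDriver("/usr/local/bin/flexvol",
            "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds"); err != nil {
            panic(err)
        }
    }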
[the probe-failure triplet repeats through 00:38:10.665, identical except for timestamps, while the flexvol-driver container is starting]
May 8 00:38:10.669085 systemd[1]: Started cri-containerd-39fa1a0504363ce400fd3cb2db77ba0f1e20ebc2db99b8311e6cf07321e6eaaa.scope - libcontainer container 39fa1a0504363ce400fd3cb2db77ba0f1e20ebc2db99b8311e6cf07321e6eaaa.
[… after the container start, the same FlexVolume probe sequence resumes and repeats 18 more times between 00:38:10.686856 and 00:38:10.694705; repetitions elided …]
May 8 00:38:10.700142 systemd[1]: cri-containerd-39fa1a0504363ce400fd3cb2db77ba0f1e20ebc2db99b8311e6cf07321e6eaaa.scope: Deactivated successfully. May 8 00:38:10.701115 containerd[1549]: time="2025-05-08T00:38:10.701095073Z" level=info msg="StartContainer for \"39fa1a0504363ce400fd3cb2db77ba0f1e20ebc2db99b8311e6cf07321e6eaaa\" returns successfully" May 8 00:38:10.769409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39fa1a0504363ce400fd3cb2db77ba0f1e20ebc2db99b8311e6cf07321e6eaaa-rootfs.mount: Deactivated successfully.
May 8 00:38:11.276026 kubelet[2711]: E0508 00:38:11.275949 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvqtb" podUID="06105806-56a1-4100-9953-11ff7427bd13" May 8 00:38:11.370790 containerd[1549]: time="2025-05-08T00:38:11.361879346Z" level=info msg="shim disconnected" id=39fa1a0504363ce400fd3cb2db77ba0f1e20ebc2db99b8311e6cf07321e6eaaa namespace=k8s.io May 8 00:38:11.370790 containerd[1549]: time="2025-05-08T00:38:11.370788293Z" level=warning msg="cleaning up after shim disconnected" id=39fa1a0504363ce400fd3cb2db77ba0f1e20ebc2db99b8311e6cf07321e6eaaa namespace=k8s.io May 8 00:38:11.370929 containerd[1549]: time="2025-05-08T00:38:11.370799447Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:38:11.642540 containerd[1549]: time="2025-05-08T00:38:11.642504513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 8 00:38:11.679356 kubelet[2711]: I0508 00:38:11.679304 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7d74dcc767-d2llh" podStartSLOduration=4.491067413 podStartE2EDuration="6.666196674s" podCreationTimestamp="2025-05-08 00:38:05 +0000 UTC" firstStartedPulling="2025-05-08 00:38:06.585402245 +0000 UTC m=+14.548310333" lastFinishedPulling="2025-05-08 00:38:08.760531502 +0000 UTC m=+16.723439594" observedRunningTime="2025-05-08 00:38:09.716210541 +0000 UTC m=+17.679118631" watchObservedRunningTime="2025-05-08 00:38:11.666196674 +0000 UTC m=+19.629104766" May 8 00:38:13.276540 kubelet[2711]: E0508 00:38:13.276263 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvqtb" podUID="06105806-56a1-4100-9953-11ff7427bd13" May 8 00:38:15.276579 kubelet[2711]: E0508 00:38:15.276147 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvqtb" podUID="06105806-56a1-4100-9953-11ff7427bd13" May 8 00:38:15.714221 containerd[1549]: time="2025-05-08T00:38:15.714188075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:38:15.715090 containerd[1549]: time="2025-05-08T00:38:15.715057172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 8 00:38:15.715228 containerd[1549]: time="2025-05-08T00:38:15.715190566Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:38:15.717202 containerd[1549]: time="2025-05-08T00:38:15.717156172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:38:15.718129 containerd[1549]: time="2025-05-08T00:38:15.717587248Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id 
\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 4.075047485s" May 8 00:38:15.718129 containerd[1549]: time="2025-05-08T00:38:15.717610339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 8 00:38:15.720276 containerd[1549]: time="2025-05-08T00:38:15.720256011Z" level=info msg="CreateContainer within sandbox \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:38:15.738015 containerd[1549]: time="2025-05-08T00:38:15.737802145Z" level=info msg="CreateContainer within sandbox \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"25a4772e25e9c721fb241d037fbc1b5e27b80b776826ee484cb7aa05c14f98e3\"" May 8 00:38:15.739116 containerd[1549]: time="2025-05-08T00:38:15.738229878Z" level=info msg="StartContainer for \"25a4772e25e9c721fb241d037fbc1b5e27b80b776826ee484cb7aa05c14f98e3\"" May 8 00:38:15.777616 systemd[1]: Started cri-containerd-25a4772e25e9c721fb241d037fbc1b5e27b80b776826ee484cb7aa05c14f98e3.scope - libcontainer container 25a4772e25e9c721fb241d037fbc1b5e27b80b776826ee484cb7aa05c14f98e3. May 8 00:38:15.800479 containerd[1549]: time="2025-05-08T00:38:15.800451369Z" level=info msg="StartContainer for \"25a4772e25e9c721fb241d037fbc1b5e27b80b776826ee484cb7aa05c14f98e3\" returns successfully" May 8 00:38:16.853023 systemd[1]: cri-containerd-25a4772e25e9c721fb241d037fbc1b5e27b80b776826ee484cb7aa05c14f98e3.scope: Deactivated successfully. May 8 00:38:16.876340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25a4772e25e9c721fb241d037fbc1b5e27b80b776826ee484cb7aa05c14f98e3-rootfs.mount: Deactivated successfully. May 8 00:38:16.879926 kubelet[2711]: I0508 00:38:16.879666 2711 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 8 00:38:16.981445 systemd[1]: Created slice kubepods-besteffort-pod5260c901_675e_42fb_a9e6_84eb23b95893.slice - libcontainer container kubepods-besteffort-pod5260c901_675e_42fb_a9e6_84eb23b95893.slice. May 8 00:38:16.991232 systemd[1]: Created slice kubepods-burstable-pod882fb80e_90cb_449a_87c0_4bbb4fd3e432.slice - libcontainer container kubepods-burstable-pod882fb80e_90cb_449a_87c0_4bbb4fd3e432.slice. May 8 00:38:16.994560 systemd[1]: Created slice kubepods-besteffort-pod0ec7b516_7af4_4ea9_8c59_9667bc29c59e.slice - libcontainer container kubepods-besteffort-pod0ec7b516_7af4_4ea9_8c59_9667bc29c59e.slice. May 8 00:38:17.004270 systemd[1]: Created slice kubepods-burstable-podbcac0e1d_a484_4def_954d_e37294951ec1.slice - libcontainer container kubepods-burstable-podbcac0e1d_a484_4def_954d_e37294951ec1.slice. May 8 00:38:17.008140 systemd[1]: Created slice kubepods-besteffort-pod9353a2d8_021f_4963_930a_ab008f3fd909.slice - libcontainer container kubepods-besteffort-pod9353a2d8_021f_4963_930a_ab008f3fd909.slice. May 8 00:38:17.011003 systemd[1]: Created slice kubepods-besteffort-pod2ee8fd8b_a4bc_43c5_bff4_0631d474067b.slice - libcontainer container kubepods-besteffort-pod2ee8fd8b_a4bc_43c5_bff4_0631d474067b.slice. 
May 8 00:38:17.030649 kubelet[2711]: I0508 00:38:17.030609 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5260c901-675e-42fb-a9e6-84eb23b95893-tigera-ca-bundle\") pod \"calico-kube-controllers-79589f44bf-xp2wm\" (UID: \"5260c901-675e-42fb-a9e6-84eb23b95893\") " pod="calico-system/calico-kube-controllers-79589f44bf-xp2wm" May 8 00:38:17.030649 kubelet[2711]: I0508 00:38:17.030644 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljs4f\" (UniqueName: \"kubernetes.io/projected/0ec7b516-7af4-4ea9-8c59-9667bc29c59e-kube-api-access-ljs4f\") pod \"calico-apiserver-577f55d98d-45j6l\" (UID: \"0ec7b516-7af4-4ea9-8c59-9667bc29c59e\") " pod="calico-apiserver/calico-apiserver-577f55d98d-45j6l" May 8 00:38:17.030795 kubelet[2711]: I0508 00:38:17.030670 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr8h8\" (UniqueName: \"kubernetes.io/projected/2ee8fd8b-a4bc-43c5-bff4-0631d474067b-kube-api-access-qr8h8\") pod \"calico-apiserver-577f55d98d-bjswh\" (UID: \"2ee8fd8b-a4bc-43c5-bff4-0631d474067b\") " pod="calico-apiserver/calico-apiserver-577f55d98d-bjswh" May 8 00:38:17.030795 kubelet[2711]: I0508 00:38:17.030686 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gm6x\" (UniqueName: \"kubernetes.io/projected/bcac0e1d-a484-4def-954d-e37294951ec1-kube-api-access-9gm6x\") pod \"coredns-6f6b679f8f-gljcw\" (UID: \"bcac0e1d-a484-4def-954d-e37294951ec1\") " pod="kube-system/coredns-6f6b679f8f-gljcw" May 8 00:38:17.030795 kubelet[2711]: I0508 00:38:17.030698 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcac0e1d-a484-4def-954d-e37294951ec1-config-volume\") pod \"coredns-6f6b679f8f-gljcw\" (UID: \"bcac0e1d-a484-4def-954d-e37294951ec1\") " pod="kube-system/coredns-6f6b679f8f-gljcw" May 8 00:38:17.030795 kubelet[2711]: I0508 00:38:17.030712 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/882fb80e-90cb-449a-87c0-4bbb4fd3e432-config-volume\") pod \"coredns-6f6b679f8f-bfnqq\" (UID: \"882fb80e-90cb-449a-87c0-4bbb4fd3e432\") " pod="kube-system/coredns-6f6b679f8f-bfnqq" May 8 00:38:17.030795 kubelet[2711]: I0508 00:38:17.030723 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2rmp\" (UniqueName: \"kubernetes.io/projected/9353a2d8-021f-4963-930a-ab008f3fd909-kube-api-access-p2rmp\") pod \"calico-apiserver-5d465466c4-h88jf\" (UID: \"9353a2d8-021f-4963-930a-ab008f3fd909\") " pod="calico-apiserver/calico-apiserver-5d465466c4-h88jf" May 8 00:38:17.035799 kubelet[2711]: I0508 00:38:17.030734 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0ec7b516-7af4-4ea9-8c59-9667bc29c59e-calico-apiserver-certs\") pod \"calico-apiserver-577f55d98d-45j6l\" (UID: \"0ec7b516-7af4-4ea9-8c59-9667bc29c59e\") " pod="calico-apiserver/calico-apiserver-577f55d98d-45j6l" May 8 00:38:17.035799 kubelet[2711]: I0508 00:38:17.030750 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2ee8fd8b-a4bc-43c5-bff4-0631d474067b-calico-apiserver-certs\") pod \"calico-apiserver-577f55d98d-bjswh\" (UID: \"2ee8fd8b-a4bc-43c5-bff4-0631d474067b\") " pod="calico-apiserver/calico-apiserver-577f55d98d-bjswh" May 8 00:38:17.035799 kubelet[2711]: I0508 00:38:17.030763 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9353a2d8-021f-4963-930a-ab008f3fd909-calico-apiserver-certs\") pod \"calico-apiserver-5d465466c4-h88jf\" (UID: \"9353a2d8-021f-4963-930a-ab008f3fd909\") " pod="calico-apiserver/calico-apiserver-5d465466c4-h88jf" May 8 00:38:17.035799 kubelet[2711]: I0508 00:38:17.030776 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfgkn\" (UniqueName: \"kubernetes.io/projected/5260c901-675e-42fb-a9e6-84eb23b95893-kube-api-access-bfgkn\") pod \"calico-kube-controllers-79589f44bf-xp2wm\" (UID: \"5260c901-675e-42fb-a9e6-84eb23b95893\") " pod="calico-system/calico-kube-controllers-79589f44bf-xp2wm" May 8 00:38:17.035799 kubelet[2711]: I0508 00:38:17.030789 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv7lj\" (UniqueName: \"kubernetes.io/projected/882fb80e-90cb-449a-87c0-4bbb4fd3e432-kube-api-access-zv7lj\") pod \"coredns-6f6b679f8f-bfnqq\" (UID: \"882fb80e-90cb-449a-87c0-4bbb4fd3e432\") " pod="kube-system/coredns-6f6b679f8f-bfnqq" May 8 00:38:17.135123 containerd[1549]: time="2025-05-08T00:38:17.133640847Z" level=info msg="shim disconnected" id=25a4772e25e9c721fb241d037fbc1b5e27b80b776826ee484cb7aa05c14f98e3 namespace=k8s.io May 8 00:38:17.135123 containerd[1549]: time="2025-05-08T00:38:17.133702160Z" level=warning msg="cleaning up after shim disconnected" id=25a4772e25e9c721fb241d037fbc1b5e27b80b776826ee484cb7aa05c14f98e3 namespace=k8s.io May 8 00:38:17.135123 containerd[1549]: time="2025-05-08T00:38:17.133715630Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:38:17.279797 systemd[1]: Created slice kubepods-besteffort-pod06105806_56a1_4100_9953_11ff7427bd13.slice - libcontainer container kubepods-besteffort-pod06105806_56a1_4100_9953_11ff7427bd13.slice. 
May 8 00:38:17.282586 containerd[1549]: time="2025-05-08T00:38:17.281298513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gvqtb,Uid:06105806-56a1-4100-9953-11ff7427bd13,Namespace:calico-system,Attempt:0,}" May 8 00:38:17.287951 containerd[1549]: time="2025-05-08T00:38:17.287755271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79589f44bf-xp2wm,Uid:5260c901-675e-42fb-a9e6-84eb23b95893,Namespace:calico-system,Attempt:0,}" May 8 00:38:17.294153 containerd[1549]: time="2025-05-08T00:38:17.294019381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bfnqq,Uid:882fb80e-90cb-449a-87c0-4bbb4fd3e432,Namespace:kube-system,Attempt:0,}" May 8 00:38:17.302522 containerd[1549]: time="2025-05-08T00:38:17.302319590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577f55d98d-45j6l,Uid:0ec7b516-7af4-4ea9-8c59-9667bc29c59e,Namespace:calico-apiserver,Attempt:0,}" May 8 00:38:17.306902 containerd[1549]: time="2025-05-08T00:38:17.306883985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gljcw,Uid:bcac0e1d-a484-4def-954d-e37294951ec1,Namespace:kube-system,Attempt:0,}" May 8 00:38:17.313728 containerd[1549]: time="2025-05-08T00:38:17.313595834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577f55d98d-bjswh,Uid:2ee8fd8b-a4bc-43c5-bff4-0631d474067b,Namespace:calico-apiserver,Attempt:0,}" May 8 00:38:17.314743 containerd[1549]: time="2025-05-08T00:38:17.314631722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d465466c4-h88jf,Uid:9353a2d8-021f-4963-930a-ab008f3fd909,Namespace:calico-apiserver,Attempt:0,}" May 8 00:38:17.516603 containerd[1549]: time="2025-05-08T00:38:17.516565066Z" level=error msg="Failed to destroy network for sandbox \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.516932 containerd[1549]: time="2025-05-08T00:38:17.516827482Z" level=error msg="Failed to destroy network for sandbox \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.519018 containerd[1549]: time="2025-05-08T00:38:17.518997175Z" level=error msg="encountered an error cleaning up failed sandbox \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.519058 containerd[1549]: time="2025-05-08T00:38:17.519034988Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gljcw,Uid:bcac0e1d-a484-4def-954d-e37294951ec1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.521088 containerd[1549]: 
time="2025-05-08T00:38:17.521066675Z" level=error msg="encountered an error cleaning up failed sandbox \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.521259 containerd[1549]: time="2025-05-08T00:38:17.521184517Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gvqtb,Uid:06105806-56a1-4100-9953-11ff7427bd13,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.526329 kubelet[2711]: E0508 00:38:17.525772 2711 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.526329 kubelet[2711]: E0508 00:38:17.525845 2711 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.526329 kubelet[2711]: E0508 00:38:17.525864 2711 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gljcw" May 8 00:38:17.526329 kubelet[2711]: E0508 00:38:17.525876 2711 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gljcw" May 8 00:38:17.526473 kubelet[2711]: E0508 00:38:17.525903 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-gljcw_kube-system(bcac0e1d-a484-4def-954d-e37294951ec1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-gljcw_kube-system(bcac0e1d-a484-4def-954d-e37294951ec1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-gljcw" 
podUID="bcac0e1d-a484-4def-954d-e37294951ec1" May 8 00:38:17.527004 kubelet[2711]: E0508 00:38:17.525830 2711 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gvqtb" May 8 00:38:17.527004 kubelet[2711]: E0508 00:38:17.526582 2711 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gvqtb" May 8 00:38:17.527004 kubelet[2711]: E0508 00:38:17.526602 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gvqtb_calico-system(06105806-56a1-4100-9953-11ff7427bd13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gvqtb_calico-system(06105806-56a1-4100-9953-11ff7427bd13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gvqtb" podUID="06105806-56a1-4100-9953-11ff7427bd13" May 8 00:38:17.549303 containerd[1549]: time="2025-05-08T00:38:17.549249249Z" level=error msg="Failed to destroy network for sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.550399 containerd[1549]: time="2025-05-08T00:38:17.549455968Z" level=error msg="encountered an error cleaning up failed sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.550399 containerd[1549]: time="2025-05-08T00:38:17.549486770Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577f55d98d-bjswh,Uid:2ee8fd8b-a4bc-43c5-bff4-0631d474067b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.550862 kubelet[2711]: E0508 00:38:17.549625 2711 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.550862 kubelet[2711]: E0508 00:38:17.549659 2711 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-577f55d98d-bjswh" May 8 00:38:17.550862 kubelet[2711]: E0508 00:38:17.549675 2711 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-577f55d98d-bjswh" May 8 00:38:17.550971 containerd[1549]: time="2025-05-08T00:38:17.550609541Z" level=error msg="Failed to destroy network for sandbox \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.550993 kubelet[2711]: E0508 00:38:17.549704 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-577f55d98d-bjswh_calico-apiserver(2ee8fd8b-a4bc-43c5-bff4-0631d474067b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-577f55d98d-bjswh_calico-apiserver(2ee8fd8b-a4bc-43c5-bff4-0631d474067b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-577f55d98d-bjswh" podUID="2ee8fd8b-a4bc-43c5-bff4-0631d474067b" May 8 00:38:17.554829 containerd[1549]: time="2025-05-08T00:38:17.551112810Z" level=error msg="Failed to destroy network for sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.554994 containerd[1549]: time="2025-05-08T00:38:17.552739020Z" level=error msg="Failed to destroy network for sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.555166 containerd[1549]: time="2025-05-08T00:38:17.555146477Z" level=error msg="encountered an error cleaning up failed sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
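Every sandbox failure in this stretch shares one root cause: Calico's CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is up, and until install-cni and calico-node finish starting, every CNI add and delete fails with the same message. A minimal sketch of that guard (our own function, mirroring the error wording, not Calico's actual source):

    // nodename_check.go: sketch of the check behind each sandbox error above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func nodename() (string, error) {
        data, err := os.ReadFile("/var/lib/calico/nodename")
        if err != nil {
            // The file's absence means calico/node is not (yet) running
            // or has not mounted /var/lib/calico/ from the host.
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := nodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("node:", name)
    }

This also explains why the errors are transient: the log shows kubelet already pulling ghcr.io/flatcar/calico/node:v3.29.3, and once that container runs and writes the nodename file, sandbox creation for the queued pods can succeed.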
May 8 00:38:17.555199 containerd[1549]: time="2025-05-08T00:38:17.555185264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577f55d98d-45j6l,Uid:0ec7b516-7af4-4ea9-8c59-9667bc29c59e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.555260 containerd[1549]: time="2025-05-08T00:38:17.553483362Z" level=error msg="encountered an error cleaning up failed sandbox \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.555284 containerd[1549]: time="2025-05-08T00:38:17.555263987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bfnqq,Uid:882fb80e-90cb-449a-87c0-4bbb4fd3e432,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.555306 containerd[1549]: time="2025-05-08T00:38:17.553516012Z" level=error msg="Failed to destroy network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.555438 containerd[1549]: time="2025-05-08T00:38:17.555421062Z" level=error msg="encountered an error cleaning up failed sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.555465 containerd[1549]: time="2025-05-08T00:38:17.555449270Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79589f44bf-xp2wm,Uid:5260c901-675e-42fb-a9e6-84eb23b95893,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.556010 containerd[1549]: time="2025-05-08T00:38:17.555598453Z" level=error msg="encountered an error cleaning up failed sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.556010 containerd[1549]: time="2025-05-08T00:38:17.555623453Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5d465466c4-h88jf,Uid:9353a2d8-021f-4963-930a-ab008f3fd909,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.556317 kubelet[2711]: E0508 00:38:17.555685 2711 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.556317 kubelet[2711]: E0508 00:38:17.555732 2711 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-577f55d98d-45j6l" May 8 00:38:17.556317 kubelet[2711]: E0508 00:38:17.555746 2711 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-577f55d98d-45j6l" May 8 00:38:17.556814 kubelet[2711]: E0508 00:38:17.555777 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-577f55d98d-45j6l_calico-apiserver(0ec7b516-7af4-4ea9-8c59-9667bc29c59e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-577f55d98d-45j6l_calico-apiserver(0ec7b516-7af4-4ea9-8c59-9667bc29c59e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-577f55d98d-45j6l" podUID="0ec7b516-7af4-4ea9-8c59-9667bc29c59e" May 8 00:38:17.556814 kubelet[2711]: E0508 00:38:17.556257 2711 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.556814 kubelet[2711]: E0508 00:38:17.556284 2711 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79589f44bf-xp2wm" May 8 00:38:17.556899 kubelet[2711]: E0508 00:38:17.556296 2711 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79589f44bf-xp2wm" May 8 00:38:17.556899 kubelet[2711]: E0508 00:38:17.556329 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79589f44bf-xp2wm_calico-system(5260c901-675e-42fb-a9e6-84eb23b95893)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79589f44bf-xp2wm_calico-system(5260c901-675e-42fb-a9e6-84eb23b95893)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79589f44bf-xp2wm" podUID="5260c901-675e-42fb-a9e6-84eb23b95893" May 8 00:38:17.556899 kubelet[2711]: E0508 00:38:17.556356 2711 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.557001 kubelet[2711]: E0508 00:38:17.556367 2711 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bfnqq" May 8 00:38:17.557001 kubelet[2711]: E0508 00:38:17.556377 2711 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bfnqq" May 8 00:38:17.557001 kubelet[2711]: E0508 00:38:17.556392 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-bfnqq_kube-system(882fb80e-90cb-449a-87c0-4bbb4fd3e432)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bfnqq_kube-system(882fb80e-90cb-449a-87c0-4bbb4fd3e432)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bfnqq" podUID="882fb80e-90cb-449a-87c0-4bbb4fd3e432" May 8 00:38:17.557070 kubelet[2711]: E0508 00:38:17.556437 2711 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.557070 kubelet[2711]: E0508 00:38:17.556451 2711 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d465466c4-h88jf" May 8 00:38:17.557070 kubelet[2711]: E0508 00:38:17.556460 2711 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d465466c4-h88jf" May 8 00:38:17.557130 kubelet[2711]: E0508 00:38:17.556474 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d465466c4-h88jf_calico-apiserver(9353a2d8-021f-4963-930a-ab008f3fd909)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d465466c4-h88jf_calico-apiserver(9353a2d8-021f-4963-930a-ab008f3fd909)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d465466c4-h88jf" podUID="9353a2d8-021f-4963-930a-ab008f3fd909" May 8 00:38:17.654488 kubelet[2711]: I0508 00:38:17.654014 2711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" May 8 00:38:17.658720 containerd[1549]: time="2025-05-08T00:38:17.658476629Z" level=info msg="StopPodSandbox for \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\"" May 8 00:38:17.660071 containerd[1549]: time="2025-05-08T00:38:17.659813320Z" level=info msg="Ensure that sandbox 806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724 in task-service has been cleanup successfully" May 8 00:38:17.662304 kubelet[2711]: I0508 00:38:17.662289 2711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" May 8 00:38:17.663496 containerd[1549]: time="2025-05-08T00:38:17.663476813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 8 00:38:17.664585 containerd[1549]: time="2025-05-08T00:38:17.664572605Z" level=info msg="StopPodSandbox for 
\"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\"" May 8 00:38:17.664835 containerd[1549]: time="2025-05-08T00:38:17.664824683Z" level=info msg="Ensure that sandbox fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b in task-service has been cleanup successfully" May 8 00:38:17.666948 kubelet[2711]: I0508 00:38:17.666937 2711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" May 8 00:38:17.668425 containerd[1549]: time="2025-05-08T00:38:17.668036238Z" level=info msg="StopPodSandbox for \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\"" May 8 00:38:17.670102 containerd[1549]: time="2025-05-08T00:38:17.669963134Z" level=info msg="Ensure that sandbox be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23 in task-service has been cleanup successfully" May 8 00:38:17.671859 kubelet[2711]: I0508 00:38:17.671225 2711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:38:17.672352 containerd[1549]: time="2025-05-08T00:38:17.672209289Z" level=info msg="StopPodSandbox for \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\"" May 8 00:38:17.673243 containerd[1549]: time="2025-05-08T00:38:17.673143749Z" level=info msg="Ensure that sandbox a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460 in task-service has been cleanup successfully" May 8 00:38:17.674553 kubelet[2711]: I0508 00:38:17.674520 2711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" May 8 00:38:17.675875 containerd[1549]: time="2025-05-08T00:38:17.675342183Z" level=info msg="StopPodSandbox for \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\"" May 8 00:38:17.675875 containerd[1549]: time="2025-05-08T00:38:17.675428292Z" level=info msg="Ensure that sandbox ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762 in task-service has been cleanup successfully" May 8 00:38:17.677188 kubelet[2711]: I0508 00:38:17.677174 2711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" May 8 00:38:17.678908 containerd[1549]: time="2025-05-08T00:38:17.678737729Z" level=info msg="StopPodSandbox for \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\"" May 8 00:38:17.679130 kubelet[2711]: I0508 00:38:17.679121 2711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:38:17.680145 containerd[1549]: time="2025-05-08T00:38:17.679886438Z" level=info msg="Ensure that sandbox 12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f in task-service has been cleanup successfully" May 8 00:38:17.680667 containerd[1549]: time="2025-05-08T00:38:17.680496718Z" level=info msg="StopPodSandbox for \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\"" May 8 00:38:17.680667 containerd[1549]: time="2025-05-08T00:38:17.680600123Z" level=info msg="Ensure that sandbox 02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6 in task-service has been cleanup successfully" May 8 00:38:17.730276 containerd[1549]: time="2025-05-08T00:38:17.730249489Z" level=error msg="StopPodSandbox for 
\"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\" failed" error="failed to destroy network for sandbox \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.730608 kubelet[2711]: E0508 00:38:17.730479 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" May 8 00:38:17.730608 kubelet[2711]: E0508 00:38:17.730516 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f"} May 8 00:38:17.730608 kubelet[2711]: E0508 00:38:17.730558 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06105806-56a1-4100-9953-11ff7427bd13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:17.730608 kubelet[2711]: E0508 00:38:17.730577 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06105806-56a1-4100-9953-11ff7427bd13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gvqtb" podUID="06105806-56a1-4100-9953-11ff7427bd13" May 8 00:38:17.732011 containerd[1549]: time="2025-05-08T00:38:17.731978303Z" level=error msg="StopPodSandbox for \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\" failed" error="failed to destroy network for sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.732175 kubelet[2711]: E0508 00:38:17.732074 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:38:17.732175 kubelet[2711]: E0508 00:38:17.732107 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460"} May 8 00:38:17.732175 kubelet[2711]: E0508 00:38:17.732122 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ee8fd8b-a4bc-43c5-bff4-0631d474067b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:17.732175 kubelet[2711]: E0508 00:38:17.732133 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ee8fd8b-a4bc-43c5-bff4-0631d474067b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-577f55d98d-bjswh" podUID="2ee8fd8b-a4bc-43c5-bff4-0631d474067b" May 8 00:38:17.732574 containerd[1549]: time="2025-05-08T00:38:17.732559327Z" level=error msg="StopPodSandbox for \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\" failed" error="failed to destroy network for sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.732720 kubelet[2711]: E0508 00:38:17.732659 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" May 8 00:38:17.732720 kubelet[2711]: E0508 00:38:17.732675 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b"} May 8 00:38:17.732720 kubelet[2711]: E0508 00:38:17.732691 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9353a2d8-021f-4963-930a-ab008f3fd909\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:17.732720 kubelet[2711]: E0508 00:38:17.732701 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9353a2d8-021f-4963-930a-ab008f3fd909\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d465466c4-h88jf" podUID="9353a2d8-021f-4963-930a-ab008f3fd909" May 8 00:38:17.741800 containerd[1549]: time="2025-05-08T00:38:17.741776344Z" level=error msg="StopPodSandbox for \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\" failed" error="failed to destroy network for sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.742074 kubelet[2711]: E0508 00:38:17.741986 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:38:17.742074 kubelet[2711]: E0508 00:38:17.742012 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6"} May 8 00:38:17.742074 kubelet[2711]: E0508 00:38:17.742031 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ec7b516-7af4-4ea9-8c59-9667bc29c59e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:17.742074 kubelet[2711]: E0508 00:38:17.742043 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ec7b516-7af4-4ea9-8c59-9667bc29c59e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-577f55d98d-45j6l" podUID="0ec7b516-7af4-4ea9-8c59-9667bc29c59e" May 8 00:38:17.743805 containerd[1549]: time="2025-05-08T00:38:17.743761112Z" level=error msg="StopPodSandbox for \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\" failed" error="failed to destroy network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.744031 kubelet[2711]: E0508 00:38:17.743957 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" May 8 00:38:17.744031 kubelet[2711]: E0508 00:38:17.743974 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724"} May 8 00:38:17.744031 kubelet[2711]: E0508 00:38:17.743988 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5260c901-675e-42fb-a9e6-84eb23b95893\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:17.744031 kubelet[2711]: E0508 00:38:17.743999 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5260c901-675e-42fb-a9e6-84eb23b95893\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79589f44bf-xp2wm" podUID="5260c901-675e-42fb-a9e6-84eb23b95893" May 8 00:38:17.745205 containerd[1549]: time="2025-05-08T00:38:17.745185061Z" level=error msg="StopPodSandbox for \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\" failed" error="failed to destroy network for sandbox \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.745325 kubelet[2711]: E0508 00:38:17.745299 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" May 8 00:38:17.745433 kubelet[2711]: E0508 00:38:17.745380 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23"} May 8 00:38:17.745433 kubelet[2711]: E0508 00:38:17.745398 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"882fb80e-90cb-449a-87c0-4bbb4fd3e432\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:17.745433 kubelet[2711]: E0508 00:38:17.745409 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"882fb80e-90cb-449a-87c0-4bbb4fd3e432\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bfnqq" podUID="882fb80e-90cb-449a-87c0-4bbb4fd3e432" May 8 00:38:17.745971 containerd[1549]: time="2025-05-08T00:38:17.745951174Z" level=error msg="StopPodSandbox for \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\" failed" error="failed to destroy network for sandbox \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:17.746110 kubelet[2711]: E0508 00:38:17.746029 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" May 8 00:38:17.746110 kubelet[2711]: E0508 00:38:17.746047 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762"} May 8 00:38:17.746110 kubelet[2711]: E0508 00:38:17.746062 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bcac0e1d-a484-4def-954d-e37294951ec1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:17.746110 kubelet[2711]: E0508 00:38:17.746073 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bcac0e1d-a484-4def-954d-e37294951ec1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-gljcw" podUID="bcac0e1d-a484-4def-954d-e37294951ec1" May 8 00:38:21.768524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1262174957.mount: Deactivated successfully. 
May 8 00:38:21.862552 containerd[1549]: time="2025-05-08T00:38:21.858225564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 8 00:38:21.863572 containerd[1549]: time="2025-05-08T00:38:21.857707417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:38:21.863831 containerd[1549]: time="2025-05-08T00:38:21.863808122Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 4.200256248s" May 8 00:38:21.863854 containerd[1549]: time="2025-05-08T00:38:21.863834607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 8 00:38:21.874519 containerd[1549]: time="2025-05-08T00:38:21.874490714Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:38:21.875373 containerd[1549]: time="2025-05-08T00:38:21.874865984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:38:22.083637 containerd[1549]: time="2025-05-08T00:38:22.083561548Z" level=info msg="CreateContainer within sandbox \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:38:22.160592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount549168771.mount: Deactivated successfully. May 8 00:38:22.174870 containerd[1549]: time="2025-05-08T00:38:22.173041300Z" level=info msg="CreateContainer within sandbox \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c67fceccb45f50ebef95e5024fa17cf5d25097d071008b265528ca3a8573ce88\"" May 8 00:38:22.183777 containerd[1549]: time="2025-05-08T00:38:22.181835504Z" level=info msg="StartContainer for \"c67fceccb45f50ebef95e5024fa17cf5d25097d071008b265528ca3a8573ce88\"" May 8 00:38:22.282335 systemd[1]: Started cri-containerd-c67fceccb45f50ebef95e5024fa17cf5d25097d071008b265528ca3a8573ce88.scope - libcontainer container c67fceccb45f50ebef95e5024fa17cf5d25097d071008b265528ca3a8573ce88. May 8 00:38:22.326388 containerd[1549]: time="2025-05-08T00:38:22.326356081Z" level=info msg="StartContainer for \"c67fceccb45f50ebef95e5024fa17cf5d25097d071008b265528ca3a8573ce88\" returns successfully" May 8 00:38:22.412200 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 00:38:22.414547 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 8 00:38:22.441181 systemd[1]: cri-containerd-c67fceccb45f50ebef95e5024fa17cf5d25097d071008b265528ca3a8573ce88.scope: Deactivated successfully.
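For scale, the pull above moved size "144068610" bytes in 4.200256248s, i.e. 144068610 B / 4.200256248 s ≈ 34.3 MB/s (≈ 32.7 MiB/s), matching the registry-side counter bytes read=144068748 to within a few hundred bytes.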
May 8 00:38:22.458653 containerd[1549]: time="2025-05-08T00:38:22.458618434Z" level=info msg="shim disconnected" id=c67fceccb45f50ebef95e5024fa17cf5d25097d071008b265528ca3a8573ce88 namespace=k8s.io May 8 00:38:22.458781 containerd[1549]: time="2025-05-08T00:38:22.458770891Z" level=warning msg="cleaning up after shim disconnected" id=c67fceccb45f50ebef95e5024fa17cf5d25097d071008b265528ca3a8573ce88 namespace=k8s.io May 8 00:38:22.458817 containerd[1549]: time="2025-05-08T00:38:22.458810504Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:38:22.754743 kubelet[2711]: I0508 00:38:22.754085 2711 scope.go:117] "RemoveContainer" containerID="c67fceccb45f50ebef95e5024fa17cf5d25097d071008b265528ca3a8573ce88" May 8 00:38:22.756043 containerd[1549]: time="2025-05-08T00:38:22.755863478Z" level=info msg="CreateContainer within sandbox \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" May 8 00:38:22.778311 containerd[1549]: time="2025-05-08T00:38:22.778219769Z" level=info msg="CreateContainer within sandbox \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"720b083fd31220b5e9fcfb441a838965c622bd11a3c0e3102ca6f0d8d19804f4\"" May 8 00:38:22.780791 containerd[1549]: time="2025-05-08T00:38:22.779961695Z" level=info msg="StartContainer for \"720b083fd31220b5e9fcfb441a838965c622bd11a3c0e3102ca6f0d8d19804f4\"" May 8 00:38:22.807057 systemd[1]: Started cri-containerd-720b083fd31220b5e9fcfb441a838965c622bd11a3c0e3102ca6f0d8d19804f4.scope - libcontainer container 720b083fd31220b5e9fcfb441a838965c622bd11a3c0e3102ca6f0d8d19804f4. May 8 00:38:22.834064 containerd[1549]: time="2025-05-08T00:38:22.834029646Z" level=info msg="StartContainer for \"720b083fd31220b5e9fcfb441a838965c622bd11a3c0e3102ca6f0d8d19804f4\" returns successfully" May 8 00:38:22.943839 systemd[1]: cri-containerd-720b083fd31220b5e9fcfb441a838965c622bd11a3c0e3102ca6f0d8d19804f4.scope: Deactivated successfully. May 8 00:38:22.963262 containerd[1549]: time="2025-05-08T00:38:22.963099747Z" level=info msg="shim disconnected" id=720b083fd31220b5e9fcfb441a838965c622bd11a3c0e3102ca6f0d8d19804f4 namespace=k8s.io May 8 00:38:22.963262 containerd[1549]: time="2025-05-08T00:38:22.963143172Z" level=warning msg="cleaning up after shim disconnected" id=720b083fd31220b5e9fcfb441a838965c622bd11a3c0e3102ca6f0d8d19804f4 namespace=k8s.io May 8 00:38:22.963262 containerd[1549]: time="2025-05-08T00:38:22.963153035Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:38:23.770125 systemd[1]: run-containerd-runc-k8s.io-720b083fd31220b5e9fcfb441a838965c622bd11a3c0e3102ca6f0d8d19804f4-runc.Km6gUS.mount: Deactivated successfully. May 8 00:38:23.770195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-720b083fd31220b5e9fcfb441a838965c622bd11a3c0e3102ca6f0d8d19804f4-rootfs.mount: Deactivated successfully. 
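The "shim disconnected" / "cleaning up after shim disconnected" / "cleaning up dead shim" sequence is containerd tearing down the per-container shim after the calico-node process exited almost immediately (StartContainer returned at 00:38:22.326, the scope was deactivated at 00:38:22.441). A sketch of watching for such an exit with the containerd Go client, assuming the default socket path and the k8s.io namespace named in the shim messages (illustrative, not tooling from this host):

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket; kubelet's containers live in the
	// "k8s.io" namespace per the shim messages above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Container ID taken from the log: the first crashed calico-node attempt.
	c, err := client.LoadContainer(ctx, "c67fceccb45f50ebef95e5024fa17cf5d25097d071008b265528ca3a8573ce88")
	if err != nil {
		log.Fatal(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	exitCh, err := task.Wait(ctx) // resolves once the shim reports the exit
	if err != nil {
		log.Fatal(err)
	}
	status := <-exitCh
	fmt.Printf("calico-node exited with code %d at %s\n", status.ExitCode(), status.ExitTime())
}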
May 8 00:38:23.862620 kubelet[2711]: I0508 00:38:23.862598 2711 scope.go:117] "RemoveContainer" containerID="c67fceccb45f50ebef95e5024fa17cf5d25097d071008b265528ca3a8573ce88" May 8 00:38:23.862989 kubelet[2711]: I0508 00:38:23.862973 2711 scope.go:117] "RemoveContainer" containerID="720b083fd31220b5e9fcfb441a838965c622bd11a3c0e3102ca6f0d8d19804f4" May 8 00:38:23.863109 kubelet[2711]: E0508 00:38:23.863090 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-9kk99_calico-system(27e14b8e-f054-4cc5-a94f-78053ac3ed18)\"" pod="calico-system/calico-node-9kk99" podUID="27e14b8e-f054-4cc5-a94f-78053ac3ed18" May 8 00:38:23.901894 containerd[1549]: time="2025-05-08T00:38:23.901854434Z" level=info msg="RemoveContainer for \"c67fceccb45f50ebef95e5024fa17cf5d25097d071008b265528ca3a8573ce88\"" May 8 00:38:23.921332 containerd[1549]: time="2025-05-08T00:38:23.921298455Z" level=info msg="RemoveContainer for \"c67fceccb45f50ebef95e5024fa17cf5d25097d071008b265528ca3a8573ce88\" returns successfully" May 8 00:38:24.866445 kubelet[2711]: I0508 00:38:24.866412 2711 scope.go:117] "RemoveContainer" containerID="720b083fd31220b5e9fcfb441a838965c622bd11a3c0e3102ca6f0d8d19804f4" May 8 00:38:24.866805 kubelet[2711]: E0508 00:38:24.866543 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-9kk99_calico-system(27e14b8e-f054-4cc5-a94f-78053ac3ed18)\"" pod="calico-system/calico-node-9kk99" podUID="27e14b8e-f054-4cc5-a94f-78053ac3ed18" May 8 00:38:25.198697 kubelet[2711]: I0508 00:38:25.198571 2711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:38:27.405259 kubelet[2711]: I0508 00:38:27.405186 2711 scope.go:117] "RemoveContainer" containerID="720b083fd31220b5e9fcfb441a838965c622bd11a3c0e3102ca6f0d8d19804f4" May 8 00:38:27.405648 kubelet[2711]: E0508 00:38:27.405315 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-9kk99_calico-system(27e14b8e-f054-4cc5-a94f-78053ac3ed18)\"" pod="calico-system/calico-node-9kk99" podUID="27e14b8e-f054-4cc5-a94f-78053ac3ed18" May 8 00:38:29.276176 containerd[1549]: time="2025-05-08T00:38:29.276100462Z" level=info msg="StopPodSandbox for \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\"" May 8 00:38:29.347826 containerd[1549]: time="2025-05-08T00:38:29.347784161Z" level=error msg="StopPodSandbox for \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\" failed" error="failed to destroy network for sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:29.347983 kubelet[2711]: E0508 00:38:29.347952 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:38:29.348235 kubelet[2711]: E0508 00:38:29.348007 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460"} May 8 00:38:29.348235 kubelet[2711]: E0508 00:38:29.348030 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ee8fd8b-a4bc-43c5-bff4-0631d474067b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:29.348235 kubelet[2711]: E0508 00:38:29.348044 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ee8fd8b-a4bc-43c5-bff4-0631d474067b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-577f55d98d-bjswh" podUID="2ee8fd8b-a4bc-43c5-bff4-0631d474067b" May 8 00:38:30.276923 containerd[1549]: time="2025-05-08T00:38:30.276854444Z" level=info msg="StopPodSandbox for \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\"" May 8 00:38:30.278093 containerd[1549]: time="2025-05-08T00:38:30.277925356Z" level=info msg="StopPodSandbox for \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\"" May 8 00:38:30.278751 containerd[1549]: time="2025-05-08T00:38:30.278214277Z" level=info msg="StopPodSandbox for \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\"" May 8 00:38:30.315508 containerd[1549]: time="2025-05-08T00:38:30.315409285Z" level=error msg="StopPodSandbox for \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\" failed" error="failed to destroy network for sandbox \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:30.315959 containerd[1549]: time="2025-05-08T00:38:30.315800966Z" level=error msg="StopPodSandbox for \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\" failed" error="failed to destroy network for sandbox \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:30.316023 kubelet[2711]: E0508 00:38:30.315932 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" May 8 00:38:30.316023 kubelet[2711]: E0508 00:38:30.315969 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f"} May 8 00:38:30.316023 kubelet[2711]: E0508 00:38:30.315994 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06105806-56a1-4100-9953-11ff7427bd13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:30.316023 kubelet[2711]: E0508 00:38:30.316008 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06105806-56a1-4100-9953-11ff7427bd13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gvqtb" podUID="06105806-56a1-4100-9953-11ff7427bd13" May 8 00:38:30.316185 kubelet[2711]: E0508 00:38:30.315931 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" May 8 00:38:30.316185 kubelet[2711]: E0508 00:38:30.316028 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762"} May 8 00:38:30.316185 kubelet[2711]: E0508 00:38:30.316040 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bcac0e1d-a484-4def-954d-e37294951ec1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:30.316185 kubelet[2711]: E0508 00:38:30.316055 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bcac0e1d-a484-4def-954d-e37294951ec1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-gljcw" podUID="bcac0e1d-a484-4def-954d-e37294951ec1" May 8 00:38:30.318896 containerd[1549]: time="2025-05-08T00:38:30.318863562Z" level=error 
msg="StopPodSandbox for \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\" failed" error="failed to destroy network for sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:30.319320 kubelet[2711]: E0508 00:38:30.319130 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" May 8 00:38:30.319320 kubelet[2711]: E0508 00:38:30.319249 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b"} May 8 00:38:30.319320 kubelet[2711]: E0508 00:38:30.319285 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9353a2d8-021f-4963-930a-ab008f3fd909\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:30.319320 kubelet[2711]: E0508 00:38:30.319302 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9353a2d8-021f-4963-930a-ab008f3fd909\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d465466c4-h88jf" podUID="9353a2d8-021f-4963-930a-ab008f3fd909" May 8 00:38:31.276422 containerd[1549]: time="2025-05-08T00:38:31.276387534Z" level=info msg="StopPodSandbox for \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\"" May 8 00:38:31.296215 containerd[1549]: time="2025-05-08T00:38:31.296172742Z" level=error msg="StopPodSandbox for \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\" failed" error="failed to destroy network for sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:31.296698 kubelet[2711]: E0508 00:38:31.296591 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:38:31.296698 kubelet[2711]: E0508 00:38:31.296636 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6"} May 8 00:38:31.296698 kubelet[2711]: E0508 00:38:31.296661 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ec7b516-7af4-4ea9-8c59-9667bc29c59e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:31.296698 kubelet[2711]: E0508 00:38:31.296681 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ec7b516-7af4-4ea9-8c59-9667bc29c59e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-577f55d98d-45j6l" podUID="0ec7b516-7af4-4ea9-8c59-9667bc29c59e" May 8 00:38:32.276685 containerd[1549]: time="2025-05-08T00:38:32.276646532Z" level=info msg="StopPodSandbox for \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\"" May 8 00:38:32.277631 containerd[1549]: time="2025-05-08T00:38:32.277426761Z" level=info msg="StopPodSandbox for \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\"" May 8 00:38:32.298931 containerd[1549]: time="2025-05-08T00:38:32.298831697Z" level=error msg="StopPodSandbox for \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\" failed" error="failed to destroy network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:32.299233 kubelet[2711]: E0508 00:38:32.298972 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" May 8 00:38:32.299233 kubelet[2711]: E0508 00:38:32.299005 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724"} May 8 00:38:32.299233 kubelet[2711]: E0508 00:38:32.299026 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5260c901-675e-42fb-a9e6-84eb23b95893\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:32.299233 kubelet[2711]: E0508 00:38:32.299040 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5260c901-675e-42fb-a9e6-84eb23b95893\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79589f44bf-xp2wm" podUID="5260c901-675e-42fb-a9e6-84eb23b95893" May 8 00:38:32.302786 containerd[1549]: time="2025-05-08T00:38:32.302711211Z" level=error msg="StopPodSandbox for \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\" failed" error="failed to destroy network for sandbox \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:32.302954 kubelet[2711]: E0508 00:38:32.302901 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" May 8 00:38:32.302990 kubelet[2711]: E0508 00:38:32.302967 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23"} May 8 00:38:32.303021 kubelet[2711]: E0508 00:38:32.302999 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"882fb80e-90cb-449a-87c0-4bbb4fd3e432\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:32.303074 kubelet[2711]: E0508 00:38:32.303019 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"882fb80e-90cb-449a-87c0-4bbb4fd3e432\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bfnqq" podUID="882fb80e-90cb-449a-87c0-4bbb4fd3e432" May 8 00:38:35.848175 kubelet[2711]: I0508 00:38:35.847993 2711 scope.go:117] "RemoveContainer" containerID="720b083fd31220b5e9fcfb441a838965c622bd11a3c0e3102ca6f0d8d19804f4" May 8 00:38:35.878259 containerd[1549]: time="2025-05-08T00:38:35.878226976Z" level=info msg="CreateContainer within sandbox 
\"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" May 8 00:38:35.886083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1343781809.mount: Deactivated successfully. May 8 00:38:35.888134 containerd[1549]: time="2025-05-08T00:38:35.888108353Z" level=info msg="CreateContainer within sandbox \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"24918ac6296860143b4c6cef93c1d1ca48f4749a150fa244fca49529c84c5052\"" May 8 00:38:35.888771 containerd[1549]: time="2025-05-08T00:38:35.888506134Z" level=info msg="StartContainer for \"24918ac6296860143b4c6cef93c1d1ca48f4749a150fa244fca49529c84c5052\"" May 8 00:38:35.909510 systemd[1]: run-containerd-runc-k8s.io-24918ac6296860143b4c6cef93c1d1ca48f4749a150fa244fca49529c84c5052-runc.fuNKy0.mount: Deactivated successfully. May 8 00:38:35.917078 systemd[1]: Started cri-containerd-24918ac6296860143b4c6cef93c1d1ca48f4749a150fa244fca49529c84c5052.scope - libcontainer container 24918ac6296860143b4c6cef93c1d1ca48f4749a150fa244fca49529c84c5052. May 8 00:38:35.935170 containerd[1549]: time="2025-05-08T00:38:35.935138923Z" level=info msg="StartContainer for \"24918ac6296860143b4c6cef93c1d1ca48f4749a150fa244fca49529c84c5052\" returns successfully" May 8 00:38:36.023726 systemd[1]: cri-containerd-24918ac6296860143b4c6cef93c1d1ca48f4749a150fa244fca49529c84c5052.scope: Deactivated successfully. May 8 00:38:36.046990 containerd[1549]: time="2025-05-08T00:38:36.046830180Z" level=info msg="shim disconnected" id=24918ac6296860143b4c6cef93c1d1ca48f4749a150fa244fca49529c84c5052 namespace=k8s.io May 8 00:38:36.046990 containerd[1549]: time="2025-05-08T00:38:36.046860687Z" level=warning msg="cleaning up after shim disconnected" id=24918ac6296860143b4c6cef93c1d1ca48f4749a150fa244fca49529c84c5052 namespace=k8s.io May 8 00:38:36.046990 containerd[1549]: time="2025-05-08T00:38:36.046866083Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:38:36.883890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24918ac6296860143b4c6cef93c1d1ca48f4749a150fa244fca49529c84c5052-rootfs.mount: Deactivated successfully. 
May 8 00:38:36.886956 kubelet[2711]: I0508 00:38:36.886864 2711 scope.go:117] "RemoveContainer" containerID="720b083fd31220b5e9fcfb441a838965c622bd11a3c0e3102ca6f0d8d19804f4" May 8 00:38:36.887189 kubelet[2711]: I0508 00:38:36.887137 2711 scope.go:117] "RemoveContainer" containerID="24918ac6296860143b4c6cef93c1d1ca48f4749a150fa244fca49529c84c5052" May 8 00:38:36.887360 kubelet[2711]: E0508 00:38:36.887228 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-9kk99_calico-system(27e14b8e-f054-4cc5-a94f-78053ac3ed18)\"" pod="calico-system/calico-node-9kk99" podUID="27e14b8e-f054-4cc5-a94f-78053ac3ed18" May 8 00:38:36.888985 containerd[1549]: time="2025-05-08T00:38:36.888962883Z" level=info msg="RemoveContainer for \"720b083fd31220b5e9fcfb441a838965c622bd11a3c0e3102ca6f0d8d19804f4\"" May 8 00:38:36.898986 containerd[1549]: time="2025-05-08T00:38:36.898944985Z" level=info msg="RemoveContainer for \"720b083fd31220b5e9fcfb441a838965c622bd11a3c0e3102ca6f0d8d19804f4\" returns successfully" May 8 00:38:41.276680 containerd[1549]: time="2025-05-08T00:38:41.276626984Z" level=info msg="StopPodSandbox for \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\"" May 8 00:38:41.299491 containerd[1549]: time="2025-05-08T00:38:41.299456699Z" level=error msg="StopPodSandbox for \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\" failed" error="failed to destroy network for sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:41.299666 kubelet[2711]: E0508 00:38:41.299635 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:38:41.299897 kubelet[2711]: E0508 00:38:41.299677 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460"} May 8 00:38:41.299897 kubelet[2711]: E0508 00:38:41.299703 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ee8fd8b-a4bc-43c5-bff4-0631d474067b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:41.299897 kubelet[2711]: E0508 00:38:41.299721 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ee8fd8b-a4bc-43c5-bff4-0631d474067b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-577f55d98d-bjswh" podUID="2ee8fd8b-a4bc-43c5-bff4-0631d474067b" May 8 00:38:42.280345 containerd[1549]: time="2025-05-08T00:38:42.278098241Z" level=info msg="StopPodSandbox for \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\"" May 8 00:38:42.302271 containerd[1549]: time="2025-05-08T00:38:42.302239109Z" level=error msg="StopPodSandbox for \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\" failed" error="failed to destroy network for sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:42.326399 kubelet[2711]: E0508 00:38:42.326376 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" May 8 00:38:42.326741 kubelet[2711]: E0508 00:38:42.326724 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b"} May 8 00:38:42.327409 kubelet[2711]: E0508 00:38:42.326750 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9353a2d8-021f-4963-930a-ab008f3fd909\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:42.327409 kubelet[2711]: E0508 00:38:42.326764 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9353a2d8-021f-4963-930a-ab008f3fd909\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d465466c4-h88jf" podUID="9353a2d8-021f-4963-930a-ab008f3fd909" May 8 00:38:42.340708 systemd[1]: Started sshd@7-139.178.70.103:22-194.0.234.19:63106.service - OpenSSH per-connection server daemon (194.0.234.19:63106). 
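The CrashLoopBackOff delays in these entries follow kubelet's standard restart backoff: 10s after the early crashes, 20s once calico-node has failed again, doubling per failure up to kubelet's five-minute cap and resetting only after the container stays up for a while. A minimal sketch of that doubling schedule (the 10s base and 5m cap mirror kubelet defaults; this is illustrative, not kubelet's backoff code):

package main

import (
	"fmt"
	"time"
)

// restartDelay reproduces the 10s, 20s, 40s, ... progression seen in the
// "back-off Ns restarting failed container" messages, capped at maxDelay.
func restartDelay(failures int, base, maxDelay time.Duration) time.Duration {
	d := base
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for f := 1; f <= 7; f++ {
		fmt.Printf("failure %d -> back-off %s\n", f, restartDelay(f, 10*time.Second, 5*time.Minute))
	}
}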
May 8 00:38:43.276631 containerd[1549]: time="2025-05-08T00:38:43.276416422Z" level=info msg="StopPodSandbox for \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\"" May 8 00:38:43.276781 containerd[1549]: time="2025-05-08T00:38:43.276767666Z" level=info msg="StopPodSandbox for \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\"" May 8 00:38:43.296905 containerd[1549]: time="2025-05-08T00:38:43.296879593Z" level=error msg="StopPodSandbox for \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\" failed" error="failed to destroy network for sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:43.297453 kubelet[2711]: E0508 00:38:43.297364 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:38:43.297453 kubelet[2711]: E0508 00:38:43.297398 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6"} May 8 00:38:43.297453 kubelet[2711]: E0508 00:38:43.297422 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ec7b516-7af4-4ea9-8c59-9667bc29c59e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:43.297453 kubelet[2711]: E0508 00:38:43.297435 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ec7b516-7af4-4ea9-8c59-9667bc29c59e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-577f55d98d-45j6l" podUID="0ec7b516-7af4-4ea9-8c59-9667bc29c59e" May 8 00:38:43.299096 containerd[1549]: time="2025-05-08T00:38:43.299075779Z" level=error msg="StopPodSandbox for \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\" failed" error="failed to destroy network for sandbox \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:43.299169 kubelet[2711]: E0508 00:38:43.299152 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" May 8 00:38:43.299199 kubelet[2711]: E0508 00:38:43.299173 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f"} May 8 00:38:43.299199 kubelet[2711]: E0508 00:38:43.299188 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06105806-56a1-4100-9953-11ff7427bd13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:43.299250 kubelet[2711]: E0508 00:38:43.299206 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06105806-56a1-4100-9953-11ff7427bd13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gvqtb" podUID="06105806-56a1-4100-9953-11ff7427bd13" May 8 00:38:43.679527 sshd[4182]: Connection closed by authenticating user root 194.0.234.19 port 63106 [preauth] May 8 00:38:43.680403 systemd[1]: sshd@7-139.178.70.103:22-194.0.234.19:63106.service: Deactivated successfully. 
May 8 00:38:44.276884 containerd[1549]: time="2025-05-08T00:38:44.276695174Z" level=info msg="StopPodSandbox for \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\"" May 8 00:38:44.294871 containerd[1549]: time="2025-05-08T00:38:44.294843808Z" level=error msg="StopPodSandbox for \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\" failed" error="failed to destroy network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:44.295194 kubelet[2711]: E0508 00:38:44.295100 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" May 8 00:38:44.295194 kubelet[2711]: E0508 00:38:44.295142 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724"} May 8 00:38:44.295194 kubelet[2711]: E0508 00:38:44.295165 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5260c901-675e-42fb-a9e6-84eb23b95893\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:44.295194 kubelet[2711]: E0508 00:38:44.295178 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5260c901-675e-42fb-a9e6-84eb23b95893\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79589f44bf-xp2wm" podUID="5260c901-675e-42fb-a9e6-84eb23b95893" May 8 00:38:46.277591 containerd[1549]: time="2025-05-08T00:38:46.277294673Z" level=info msg="StopPodSandbox for \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\"" May 8 00:38:46.280829 containerd[1549]: time="2025-05-08T00:38:46.280733046Z" level=info msg="StopPodSandbox for \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\"" May 8 00:38:46.300889 containerd[1549]: time="2025-05-08T00:38:46.300844718Z" level=error msg="StopPodSandbox for \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\" failed" error="failed to destroy network for sandbox \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:46.301159 kubelet[2711]: E0508 00:38:46.301126 
2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" May 8 00:38:46.301357 kubelet[2711]: E0508 00:38:46.301170 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762"} May 8 00:38:46.301357 kubelet[2711]: E0508 00:38:46.301202 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bcac0e1d-a484-4def-954d-e37294951ec1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:46.301357 kubelet[2711]: E0508 00:38:46.301222 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bcac0e1d-a484-4def-954d-e37294951ec1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-gljcw" podUID="bcac0e1d-a484-4def-954d-e37294951ec1" May 8 00:38:46.313874 containerd[1549]: time="2025-05-08T00:38:46.313848186Z" level=error msg="StopPodSandbox for \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\" failed" error="failed to destroy network for sandbox \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:46.314224 kubelet[2711]: E0508 00:38:46.314106 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" May 8 00:38:46.314224 kubelet[2711]: E0508 00:38:46.314160 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23"} May 8 00:38:46.314224 kubelet[2711]: E0508 00:38:46.314180 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"882fb80e-90cb-449a-87c0-4bbb4fd3e432\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:46.314224 kubelet[2711]: E0508 00:38:46.314198 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"882fb80e-90cb-449a-87c0-4bbb4fd3e432\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bfnqq" podUID="882fb80e-90cb-449a-87c0-4bbb4fd3e432" May 8 00:38:51.276553 kubelet[2711]: I0508 00:38:51.276473 2711 scope.go:117] "RemoveContainer" containerID="24918ac6296860143b4c6cef93c1d1ca48f4749a150fa244fca49529c84c5052" May 8 00:38:51.277441 kubelet[2711]: E0508 00:38:51.277175 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-9kk99_calico-system(27e14b8e-f054-4cc5-a94f-78053ac3ed18)\"" pod="calico-system/calico-node-9kk99" podUID="27e14b8e-f054-4cc5-a94f-78053ac3ed18" May 8 00:38:52.278615 containerd[1549]: time="2025-05-08T00:38:52.278508577Z" level=info msg="StopPodSandbox for \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\"" May 8 00:38:52.298800 containerd[1549]: time="2025-05-08T00:38:52.298758184Z" level=error msg="StopPodSandbox for \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\" failed" error="failed to destroy network for sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:52.298928 kubelet[2711]: E0508 00:38:52.298886 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:38:52.299195 kubelet[2711]: E0508 00:38:52.298937 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460"} May 8 00:38:52.299195 kubelet[2711]: E0508 00:38:52.298966 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ee8fd8b-a4bc-43c5-bff4-0631d474067b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:52.299195 kubelet[2711]: E0508 00:38:52.298980 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ee8fd8b-a4bc-43c5-bff4-0631d474067b\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-577f55d98d-bjswh" podUID="2ee8fd8b-a4bc-43c5-bff4-0631d474067b" May 8 00:38:55.276899 containerd[1549]: time="2025-05-08T00:38:55.276625141Z" level=info msg="StopPodSandbox for \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\"" May 8 00:38:55.276899 containerd[1549]: time="2025-05-08T00:38:55.276655839Z" level=info msg="StopPodSandbox for \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\"" May 8 00:38:55.297868 containerd[1549]: time="2025-05-08T00:38:55.297840799Z" level=error msg="StopPodSandbox for \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\" failed" error="failed to destroy network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:55.298643 kubelet[2711]: E0508 00:38:55.298084 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" May 8 00:38:55.298643 kubelet[2711]: E0508 00:38:55.298127 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724"} May 8 00:38:55.298643 kubelet[2711]: E0508 00:38:55.298148 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5260c901-675e-42fb-a9e6-84eb23b95893\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:55.298643 kubelet[2711]: E0508 00:38:55.298162 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5260c901-675e-42fb-a9e6-84eb23b95893\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79589f44bf-xp2wm" podUID="5260c901-675e-42fb-a9e6-84eb23b95893" May 8 00:38:55.300155 containerd[1549]: time="2025-05-08T00:38:55.300137275Z" level=error msg="StopPodSandbox for \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\" failed" error="failed to destroy network for sandbox 
\"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:55.300277 kubelet[2711]: E0508 00:38:55.300263 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" May 8 00:38:55.300333 kubelet[2711]: E0508 00:38:55.300325 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f"} May 8 00:38:55.300387 kubelet[2711]: E0508 00:38:55.300374 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06105806-56a1-4100-9953-11ff7427bd13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:55.300447 kubelet[2711]: E0508 00:38:55.300438 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06105806-56a1-4100-9953-11ff7427bd13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gvqtb" podUID="06105806-56a1-4100-9953-11ff7427bd13" May 8 00:38:56.276216 containerd[1549]: time="2025-05-08T00:38:56.275997117Z" level=info msg="StopPodSandbox for \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\"" May 8 00:38:56.293385 containerd[1549]: time="2025-05-08T00:38:56.293350349Z" level=error msg="StopPodSandbox for \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\" failed" error="failed to destroy network for sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:56.293862 kubelet[2711]: E0508 00:38:56.293757 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" May 8 00:38:56.293862 kubelet[2711]: E0508 00:38:56.293800 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b"} May 8 00:38:56.293862 kubelet[2711]: E0508 00:38:56.293824 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9353a2d8-021f-4963-930a-ab008f3fd909\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:56.293862 kubelet[2711]: E0508 00:38:56.293840 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9353a2d8-021f-4963-930a-ab008f3fd909\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d465466c4-h88jf" podUID="9353a2d8-021f-4963-930a-ab008f3fd909" May 8 00:38:58.276163 containerd[1549]: time="2025-05-08T00:38:58.275987415Z" level=info msg="StopPodSandbox for \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\"" May 8 00:38:58.295045 containerd[1549]: time="2025-05-08T00:38:58.295012054Z" level=error msg="StopPodSandbox for \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\" failed" error="failed to destroy network for sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:38:58.295170 kubelet[2711]: E0508 00:38:58.295138 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:38:58.295367 kubelet[2711]: E0508 00:38:58.295179 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6"} May 8 00:38:58.295367 kubelet[2711]: E0508 00:38:58.295199 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ec7b516-7af4-4ea9-8c59-9667bc29c59e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:38:58.295367 kubelet[2711]: E0508 00:38:58.295214 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ec7b516-7af4-4ea9-8c59-9667bc29c59e\" with KillPodSandboxError: \"rpc error: code = Unknown 
desc = failed to destroy network for sandbox \\\"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-577f55d98d-45j6l" podUID="0ec7b516-7af4-4ea9-8c59-9667bc29c59e" May 8 00:39:01.277511 containerd[1549]: time="2025-05-08T00:39:01.276644390Z" level=info msg="StopPodSandbox for \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\"" May 8 00:39:01.277511 containerd[1549]: time="2025-05-08T00:39:01.276771608Z" level=info msg="StopPodSandbox for \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\"" May 8 00:39:01.301006 containerd[1549]: time="2025-05-08T00:39:01.300893071Z" level=error msg="StopPodSandbox for \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\" failed" error="failed to destroy network for sandbox \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:01.301006 containerd[1549]: time="2025-05-08T00:39:01.300945396Z" level=error msg="StopPodSandbox for \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\" failed" error="failed to destroy network for sandbox \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:01.301249 kubelet[2711]: E0508 00:39:01.301058 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" May 8 00:39:01.301249 kubelet[2711]: E0508 00:39:01.301090 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" May 8 00:39:01.301249 kubelet[2711]: E0508 00:39:01.301111 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23"} May 8 00:39:01.301249 kubelet[2711]: E0508 00:39:01.301129 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"882fb80e-90cb-449a-87c0-4bbb4fd3e432\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" May 8 00:39:01.301505 kubelet[2711]: E0508 00:39:01.301144 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"882fb80e-90cb-449a-87c0-4bbb4fd3e432\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bfnqq" podUID="882fb80e-90cb-449a-87c0-4bbb4fd3e432" May 8 00:39:01.301505 kubelet[2711]: E0508 00:39:01.301094 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762"} May 8 00:39:01.301505 kubelet[2711]: E0508 00:39:01.301168 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bcac0e1d-a484-4def-954d-e37294951ec1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:39:01.301505 kubelet[2711]: E0508 00:39:01.301180 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bcac0e1d-a484-4def-954d-e37294951ec1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-gljcw" podUID="bcac0e1d-a484-4def-954d-e37294951ec1" May 8 00:39:04.276191 kubelet[2711]: I0508 00:39:04.275992 2711 scope.go:117] "RemoveContainer" containerID="24918ac6296860143b4c6cef93c1d1ca48f4749a150fa244fca49529c84c5052" May 8 00:39:04.279338 containerd[1549]: time="2025-05-08T00:39:04.279309137Z" level=info msg="CreateContainer within sandbox \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\" for container &ContainerMetadata{Name:calico-node,Attempt:3,}" May 8 00:39:04.305894 containerd[1549]: time="2025-05-08T00:39:04.305821821Z" level=info msg="CreateContainer within sandbox \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\" for &ContainerMetadata{Name:calico-node,Attempt:3,} returns container id \"35ac4b797a95bb8799e1a064f17708655a91c2182494a1e61906c57d01351169\"" May 8 00:39:04.307718 containerd[1549]: time="2025-05-08T00:39:04.307040392Z" level=info msg="StartContainer for \"35ac4b797a95bb8799e1a064f17708655a91c2182494a1e61906c57d01351169\"" May 8 00:39:04.336053 systemd[1]: Started cri-containerd-35ac4b797a95bb8799e1a064f17708655a91c2182494a1e61906c57d01351169.scope - libcontainer container 35ac4b797a95bb8799e1a064f17708655a91c2182494a1e61906c57d01351169. 
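At this point kubelet has honored the RemoveContainer and built attempt 3 of calico-node inside the same sandbox (f8c30ac9...). The attempt below exits almost immediately, and the restart delay doubles from the 20s seen earlier to 40s, which matches kubelet's CrashLoopBackOff policy. A sketch of that schedule, assuming the stock kubelet defaults of a 10s base delay and a 5-minute cap (defaults, not values read from this node's configuration):

    # CrashLoopBackOff schedule sketch: the delay doubles per failed
    # restart (20s, then 40s in this log) up to a cap. The 10s base and
    # 300s cap are kubelet defaults, assumed here rather than observed.
    def crashloop_delays(base: int = 10, cap: int = 300, restarts: int = 8):
        delay = base
        for _ in range(restarts):
            yield delay
            delay = min(delay * 2, cap)

    print(list(crashloop_delays()))   # [10, 20, 40, 80, 160, 300, 300, 300]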
May 8 00:39:04.357411 containerd[1549]: time="2025-05-08T00:39:04.357384705Z" level=info msg="StartContainer for \"35ac4b797a95bb8799e1a064f17708655a91c2182494a1e61906c57d01351169\" returns successfully" May 8 00:39:04.464991 systemd[1]: cri-containerd-35ac4b797a95bb8799e1a064f17708655a91c2182494a1e61906c57d01351169.scope: Deactivated successfully. May 8 00:39:04.477123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35ac4b797a95bb8799e1a064f17708655a91c2182494a1e61906c57d01351169-rootfs.mount: Deactivated successfully. May 8 00:39:04.500469 containerd[1549]: time="2025-05-08T00:39:04.500361024Z" level=info msg="shim disconnected" id=35ac4b797a95bb8799e1a064f17708655a91c2182494a1e61906c57d01351169 namespace=k8s.io May 8 00:39:04.500585 containerd[1549]: time="2025-05-08T00:39:04.500482025Z" level=warning msg="cleaning up after shim disconnected" id=35ac4b797a95bb8799e1a064f17708655a91c2182494a1e61906c57d01351169 namespace=k8s.io May 8 00:39:04.500585 containerd[1549]: time="2025-05-08T00:39:04.500494785Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:39:04.966809 kubelet[2711]: I0508 00:39:04.966771 2711 scope.go:117] "RemoveContainer" containerID="24918ac6296860143b4c6cef93c1d1ca48f4749a150fa244fca49529c84c5052" May 8 00:39:04.967077 kubelet[2711]: I0508 00:39:04.967056 2711 scope.go:117] "RemoveContainer" containerID="35ac4b797a95bb8799e1a064f17708655a91c2182494a1e61906c57d01351169" May 8 00:39:04.967180 kubelet[2711]: E0508 00:39:04.967158 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 40s restarting failed container=calico-node pod=calico-node-9kk99_calico-system(27e14b8e-f054-4cc5-a94f-78053ac3ed18)\"" pod="calico-system/calico-node-9kk99" podUID="27e14b8e-f054-4cc5-a94f-78053ac3ed18" May 8 00:39:04.967907 containerd[1549]: time="2025-05-08T00:39:04.967873357Z" level=info msg="RemoveContainer for \"24918ac6296860143b4c6cef93c1d1ca48f4749a150fa244fca49529c84c5052\"" May 8 00:39:04.975062 containerd[1549]: time="2025-05-08T00:39:04.975024571Z" level=info msg="RemoveContainer for \"24918ac6296860143b4c6cef93c1d1ca48f4749a150fa244fca49529c84c5052\" returns successfully" May 8 00:39:05.276594 containerd[1549]: time="2025-05-08T00:39:05.276309772Z" level=info msg="StopPodSandbox for \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\"" May 8 00:39:05.293706 containerd[1549]: time="2025-05-08T00:39:05.293654596Z" level=error msg="StopPodSandbox for \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\" failed" error="failed to destroy network for sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:05.293946 kubelet[2711]: E0508 00:39:05.293829 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:39:05.293946 kubelet[2711]: E0508 00:39:05.293870 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460"} May 8 00:39:05.293946 kubelet[2711]: E0508 00:39:05.293891 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ee8fd8b-a4bc-43c5-bff4-0631d474067b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:39:05.293946 kubelet[2711]: E0508 00:39:05.293906 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ee8fd8b-a4bc-43c5-bff4-0631d474067b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-577f55d98d-bjswh" podUID="2ee8fd8b-a4bc-43c5-bff4-0631d474067b" May 8 00:39:05.970222 kubelet[2711]: I0508 00:39:05.970034 2711 scope.go:117] "RemoveContainer" containerID="35ac4b797a95bb8799e1a064f17708655a91c2182494a1e61906c57d01351169" May 8 00:39:05.970222 kubelet[2711]: E0508 00:39:05.970128 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 40s restarting failed container=calico-node pod=calico-node-9kk99_calico-system(27e14b8e-f054-4cc5-a94f-78053ac3ed18)\"" pod="calico-system/calico-node-9kk99" podUID="27e14b8e-f054-4cc5-a94f-78053ac3ed18" May 8 00:39:06.278840 containerd[1549]: time="2025-05-08T00:39:06.278465191Z" level=info msg="StopPodSandbox for \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\"" May 8 00:39:06.303754 containerd[1549]: time="2025-05-08T00:39:06.303716152Z" level=error msg="StopPodSandbox for \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\" failed" error="failed to destroy network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:06.304367 kubelet[2711]: E0508 00:39:06.303991 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" May 8 00:39:06.304367 kubelet[2711]: E0508 00:39:06.304067 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724"} May 8 00:39:06.304367 kubelet[2711]: E0508 00:39:06.304102 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5260c901-675e-42fb-a9e6-84eb23b95893\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:39:06.304367 kubelet[2711]: E0508 00:39:06.304117 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5260c901-675e-42fb-a9e6-84eb23b95893\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79589f44bf-xp2wm" podUID="5260c901-675e-42fb-a9e6-84eb23b95893" May 8 00:39:06.399603 containerd[1549]: time="2025-05-08T00:39:06.399386829Z" level=info msg="StopContainer for \"3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86\" with timeout 300 (s)" May 8 00:39:06.400100 containerd[1549]: time="2025-05-08T00:39:06.400085071Z" level=info msg="Stop container \"3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86\" with signal terminated" May 8 00:39:06.436519 systemd[1]: cri-containerd-3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86.scope: Deactivated successfully. May 8 00:39:06.459182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86-rootfs.mount: Deactivated successfully. May 8 00:39:06.464930 containerd[1549]: time="2025-05-08T00:39:06.464864523Z" level=info msg="shim disconnected" id=3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86 namespace=k8s.io May 8 00:39:06.464930 containerd[1549]: time="2025-05-08T00:39:06.464897919Z" level=warning msg="cleaning up after shim disconnected" id=3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86 namespace=k8s.io May 8 00:39:06.464930 containerd[1549]: time="2025-05-08T00:39:06.464903451Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:39:06.475070 containerd[1549]: time="2025-05-08T00:39:06.474990473Z" level=info msg="StopContainer for \"3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86\" returns successfully" May 8 00:39:06.475559 containerd[1549]: time="2025-05-08T00:39:06.475400431Z" level=info msg="StopPodSandbox for \"99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363\"" May 8 00:39:06.475559 containerd[1549]: time="2025-05-08T00:39:06.475426429Z" level=info msg="Container to stop \"3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:39:06.477781 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363-shm.mount: Deactivated successfully. May 8 00:39:06.481104 systemd[1]: cri-containerd-99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363.scope: Deactivated successfully. May 8 00:39:06.496281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363-rootfs.mount: Deactivated successfully. 
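Unlike the failing StopPodSandbox calls above, this teardown succeeds end to end: containerd delivers SIGTERM with a 300-second grace period, the shim exits, and sandbox 99041687... is unwound cleanly. The volume names in the entries that follow identify this as the calico-typha pod, and typha conventionally runs with host networking, which would explain why its sandbox teardown never touches the broken Calico CNI (an inference from the stock manifests, not something this log states). A sketch for inspecting a sandbox's state on the node itself, assuming crictl is installed and pointed at containerd's CRI socket:

    # Sandbox inspection sketch (assumption: run on the node, with crictl
    # configured for containerd's CRI endpoint). The sandbox ID is the one
    # torn down in the entries above.
    import json
    import subprocess

    SANDBOX = "99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363"

    out = subprocess.run(["crictl", "inspectp", SANDBOX],
                         check=True, capture_output=True, text=True).stdout
    status = json.loads(out)["status"]
    print(status["state"])            # SANDBOX_READY or SANDBOX_NOTREADY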
May 8 00:39:06.519049 containerd[1549]: time="2025-05-08T00:39:06.518946482Z" level=info msg="shim disconnected" id=99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363 namespace=k8s.io May 8 00:39:06.519049 containerd[1549]: time="2025-05-08T00:39:06.518994045Z" level=warning msg="cleaning up after shim disconnected" id=99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363 namespace=k8s.io May 8 00:39:06.519049 containerd[1549]: time="2025-05-08T00:39:06.519005815Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:39:06.527949 containerd[1549]: time="2025-05-08T00:39:06.527895698Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:39:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:39:06.528831 containerd[1549]: time="2025-05-08T00:39:06.528807894Z" level=info msg="TearDown network for sandbox \"99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363\" successfully" May 8 00:39:06.528831 containerd[1549]: time="2025-05-08T00:39:06.528822719Z" level=info msg="StopPodSandbox for \"99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363\" returns successfully" May 8 00:39:06.620391 kubelet[2711]: I0508 00:39:06.620306 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4b4cdff-0602-4ca5-839f-31fe2409aace-tigera-ca-bundle\") pod \"d4b4cdff-0602-4ca5-839f-31fe2409aace\" (UID: \"d4b4cdff-0602-4ca5-839f-31fe2409aace\") " May 8 00:39:06.620391 kubelet[2711]: I0508 00:39:06.620336 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9twlc\" (UniqueName: \"kubernetes.io/projected/d4b4cdff-0602-4ca5-839f-31fe2409aace-kube-api-access-9twlc\") pod \"d4b4cdff-0602-4ca5-839f-31fe2409aace\" (UID: \"d4b4cdff-0602-4ca5-839f-31fe2409aace\") " May 8 00:39:06.620391 kubelet[2711]: I0508 00:39:06.620352 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d4b4cdff-0602-4ca5-839f-31fe2409aace-typha-certs\") pod \"d4b4cdff-0602-4ca5-839f-31fe2409aace\" (UID: \"d4b4cdff-0602-4ca5-839f-31fe2409aace\") " May 8 00:39:06.633633 systemd[1]: var-lib-kubelet-pods-d4b4cdff\x2d0602\x2d4ca5\x2d839f\x2d31fe2409aace-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9twlc.mount: Deactivated successfully. May 8 00:39:06.635623 systemd[1]: var-lib-kubelet-pods-d4b4cdff\x2d0602\x2d4ca5\x2d839f\x2d31fe2409aace-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. May 8 00:39:06.638685 systemd[1]: var-lib-kubelet-pods-d4b4cdff\x2d0602\x2d4ca5\x2d839f\x2d31fe2409aace-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. May 8 00:39:06.640036 kubelet[2711]: I0508 00:39:06.639624 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b4cdff-0602-4ca5-839f-31fe2409aace-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "d4b4cdff-0602-4ca5-839f-31fe2409aace" (UID: "d4b4cdff-0602-4ca5-839f-31fe2409aace"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:39:06.640036 kubelet[2711]: I0508 00:39:06.638955 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b4cdff-0602-4ca5-839f-31fe2409aace-kube-api-access-9twlc" (OuterVolumeSpecName: "kube-api-access-9twlc") pod "d4b4cdff-0602-4ca5-839f-31fe2409aace" (UID: "d4b4cdff-0602-4ca5-839f-31fe2409aace"). InnerVolumeSpecName "kube-api-access-9twlc". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:39:06.640360 kubelet[2711]: I0508 00:39:06.640343 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4b4cdff-0602-4ca5-839f-31fe2409aace-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "d4b4cdff-0602-4ca5-839f-31fe2409aace" (UID: "d4b4cdff-0602-4ca5-839f-31fe2409aace"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:39:06.720535 kubelet[2711]: I0508 00:39:06.720503 2711 reconciler_common.go:288] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d4b4cdff-0602-4ca5-839f-31fe2409aace-typha-certs\") on node \"localhost\" DevicePath \"\"" May 8 00:39:06.720535 kubelet[2711]: I0508 00:39:06.720535 2711 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9twlc\" (UniqueName: \"kubernetes.io/projected/d4b4cdff-0602-4ca5-839f-31fe2409aace-kube-api-access-9twlc\") on node \"localhost\" DevicePath \"\"" May 8 00:39:06.720535 kubelet[2711]: I0508 00:39:06.720547 2711 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4b4cdff-0602-4ca5-839f-31fe2409aace-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 8 00:39:06.972097 kubelet[2711]: I0508 00:39:06.972038 2711 scope.go:117] "RemoveContainer" containerID="3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86" May 8 00:39:06.973755 containerd[1549]: time="2025-05-08T00:39:06.972966607Z" level=info msg="RemoveContainer for \"3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86\"" May 8 00:39:06.973755 containerd[1549]: time="2025-05-08T00:39:06.973609066Z" level=info msg="StopPodSandbox for \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\"" May 8 00:39:06.973898 containerd[1549]: time="2025-05-08T00:39:06.973831129Z" level=info msg="StopPodSandbox for \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\"" May 8 00:39:06.973898 containerd[1549]: time="2025-05-08T00:39:06.973852012Z" level=info msg="Container to stop \"35ac4b797a95bb8799e1a064f17708655a91c2182494a1e61906c57d01351169\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:39:06.973898 containerd[1549]: time="2025-05-08T00:39:06.973858946Z" level=info msg="Container to stop \"39fa1a0504363ce400fd3cb2db77ba0f1e20ebc2db99b8311e6cf07321e6eaaa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:39:06.973898 containerd[1549]: time="2025-05-08T00:39:06.973863919Z" level=info msg="Container to stop \"25a4772e25e9c721fb241d037fbc1b5e27b80b776826ee484cb7aa05c14f98e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:39:06.976617 systemd[1]: Removed slice kubepods-besteffort-podd4b4cdff_0602_4ca5_839f_31fe2409aace.slice - libcontainer container kubepods-besteffort-podd4b4cdff_0602_4ca5_839f_31fe2409aace.slice. 
May 8 00:39:06.986193 systemd[1]: cri-containerd-f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161.scope: Deactivated successfully. May 8 00:39:06.994600 kubelet[2711]: I0508 00:39:06.991065 2711 scope.go:117] "RemoveContainer" containerID="3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86" May 8 00:39:06.994635 containerd[1549]: time="2025-05-08T00:39:06.990783500Z" level=info msg="RemoveContainer for \"3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86\" returns successfully" May 8 00:39:06.997459 containerd[1549]: time="2025-05-08T00:39:06.991451820Z" level=error msg="ContainerStatus for \"3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86\": not found" May 8 00:39:07.019860 kubelet[2711]: E0508 00:39:07.019713 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86\": not found" containerID="3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86" May 8 00:39:07.019860 kubelet[2711]: I0508 00:39:07.019746 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86"} err="failed to get container status \"3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86\": rpc error: code = NotFound desc = an error occurred when try to find container \"3d4b5af97329cda77ef2f484956d291eab56a12ff6c1a4b41ba6a5e1bbbb7a86\": not found" May 8 00:39:07.022338 containerd[1549]: time="2025-05-08T00:39:07.022306464Z" level=error msg="StopPodSandbox for \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\" failed" error="failed to destroy network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:07.022555 kubelet[2711]: E0508 00:39:07.022498 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" May 8 00:39:07.022555 kubelet[2711]: E0508 00:39:07.022527 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724"} May 8 00:39:07.023081 kubelet[2711]: E0508 00:39:07.022646 2711 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"5260c901-675e-42fb-a9e6-84eb23b95893\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" logger="UnhandledError" May 8 
00:39:07.023800 kubelet[2711]: E0508 00:39:07.023761 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5260c901-675e-42fb-a9e6-84eb23b95893\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79589f44bf-xp2wm" podUID="5260c901-675e-42fb-a9e6-84eb23b95893" May 8 00:39:07.082210 containerd[1549]: time="2025-05-08T00:39:07.082165604Z" level=info msg="shim disconnected" id=f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161 namespace=k8s.io May 8 00:39:07.083100 containerd[1549]: time="2025-05-08T00:39:07.082512624Z" level=warning msg="cleaning up after shim disconnected" id=f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161 namespace=k8s.io May 8 00:39:07.083100 containerd[1549]: time="2025-05-08T00:39:07.082527436Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:39:07.093855 containerd[1549]: time="2025-05-08T00:39:07.093832031Z" level=info msg="TearDown network for sandbox \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\" successfully" May 8 00:39:07.093996 containerd[1549]: time="2025-05-08T00:39:07.093985824Z" level=info msg="StopPodSandbox for \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\" returns successfully" May 8 00:39:07.122004 kubelet[2711]: I0508 00:39:07.121982 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-cni-log-dir\") pod \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " May 8 00:39:07.124146 kubelet[2711]: I0508 00:39:07.124117 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvlxp\" (UniqueName: \"kubernetes.io/projected/27e14b8e-f054-4cc5-a94f-78053ac3ed18-kube-api-access-dvlxp\") pod \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " May 8 00:39:07.124146 kubelet[2711]: I0508 00:39:07.124138 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-flexvol-driver-host\") pod \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " May 8 00:39:07.124146 kubelet[2711]: I0508 00:39:07.124151 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/27e14b8e-f054-4cc5-a94f-78053ac3ed18-node-certs\") pod \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " May 8 00:39:07.124756 kubelet[2711]: I0508 00:39:07.124161 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-policysync\") pod \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " May 8 00:39:07.124756 kubelet[2711]: I0508 00:39:07.124169 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-cni-bin-dir\") pod \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " May 8 00:39:07.124756 kubelet[2711]: I0508 00:39:07.124178 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-cni-net-dir\") pod \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " May 8 00:39:07.124756 kubelet[2711]: I0508 00:39:07.124188 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-lib-modules\") pod \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " May 8 00:39:07.124756 kubelet[2711]: I0508 00:39:07.124215 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-var-lib-calico\") pod \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " May 8 00:39:07.124756 kubelet[2711]: I0508 00:39:07.124228 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27e14b8e-f054-4cc5-a94f-78053ac3ed18-tigera-ca-bundle\") pod \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " May 8 00:39:07.127276 kubelet[2711]: I0508 00:39:07.124237 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-var-run-calico\") pod \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " May 8 00:39:07.127276 kubelet[2711]: I0508 00:39:07.124246 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-xtables-lock\") pod \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\" (UID: \"27e14b8e-f054-4cc5-a94f-78053ac3ed18\") " May 8 00:39:07.127276 kubelet[2711]: I0508 00:39:07.124074 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "27e14b8e-f054-4cc5-a94f-78053ac3ed18" (UID: "27e14b8e-f054-4cc5-a94f-78053ac3ed18"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:39:07.127276 kubelet[2711]: I0508 00:39:07.124296 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "27e14b8e-f054-4cc5-a94f-78053ac3ed18" (UID: "27e14b8e-f054-4cc5-a94f-78053ac3ed18"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:39:07.127276 kubelet[2711]: E0508 00:39:07.126704 2711 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="27e14b8e-f054-4cc5-a94f-78053ac3ed18" containerName="install-cni" May 8 00:39:07.127276 kubelet[2711]: E0508 00:39:07.126735 2711 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="27e14b8e-f054-4cc5-a94f-78053ac3ed18" containerName="calico-node" May 8 00:39:07.127276 kubelet[2711]: E0508 00:39:07.126742 2711 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="27e14b8e-f054-4cc5-a94f-78053ac3ed18" containerName="calico-node" May 8 00:39:07.127745 kubelet[2711]: E0508 00:39:07.126748 2711 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d4b4cdff-0602-4ca5-839f-31fe2409aace" containerName="calico-typha" May 8 00:39:07.127745 kubelet[2711]: E0508 00:39:07.126754 2711 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="27e14b8e-f054-4cc5-a94f-78053ac3ed18" containerName="flexvol-driver" May 8 00:39:07.127745 kubelet[2711]: E0508 00:39:07.126760 2711 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="27e14b8e-f054-4cc5-a94f-78053ac3ed18" containerName="calico-node" May 8 00:39:07.127745 kubelet[2711]: E0508 00:39:07.126765 2711 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="27e14b8e-f054-4cc5-a94f-78053ac3ed18" containerName="calico-node" May 8 00:39:07.128283 kubelet[2711]: I0508 00:39:07.128055 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "27e14b8e-f054-4cc5-a94f-78053ac3ed18" (UID: "27e14b8e-f054-4cc5-a94f-78053ac3ed18"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:39:07.129313 kubelet[2711]: I0508 00:39:07.129296 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-policysync" (OuterVolumeSpecName: "policysync") pod "27e14b8e-f054-4cc5-a94f-78053ac3ed18" (UID: "27e14b8e-f054-4cc5-a94f-78053ac3ed18"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:39:07.130144 kubelet[2711]: I0508 00:39:07.130098 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "27e14b8e-f054-4cc5-a94f-78053ac3ed18" (UID: "27e14b8e-f054-4cc5-a94f-78053ac3ed18"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:39:07.130310 kubelet[2711]: I0508 00:39:07.130127 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "27e14b8e-f054-4cc5-a94f-78053ac3ed18" (UID: "27e14b8e-f054-4cc5-a94f-78053ac3ed18"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:39:07.130310 kubelet[2711]: I0508 00:39:07.130241 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "27e14b8e-f054-4cc5-a94f-78053ac3ed18" (UID: "27e14b8e-f054-4cc5-a94f-78053ac3ed18"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:39:07.130310 kubelet[2711]: I0508 00:39:07.130259 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "27e14b8e-f054-4cc5-a94f-78053ac3ed18" (UID: "27e14b8e-f054-4cc5-a94f-78053ac3ed18"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:39:07.131053 kubelet[2711]: I0508 00:39:07.131039 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "27e14b8e-f054-4cc5-a94f-78053ac3ed18" (UID: "27e14b8e-f054-4cc5-a94f-78053ac3ed18"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:39:07.131264 kubelet[2711]: I0508 00:39:07.131254 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27e14b8e-f054-4cc5-a94f-78053ac3ed18-kube-api-access-dvlxp" (OuterVolumeSpecName: "kube-api-access-dvlxp") pod "27e14b8e-f054-4cc5-a94f-78053ac3ed18" (UID: "27e14b8e-f054-4cc5-a94f-78053ac3ed18"). InnerVolumeSpecName "kube-api-access-dvlxp". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:39:07.132405 kubelet[2711]: I0508 00:39:07.131990 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e14b8e-f054-4cc5-a94f-78053ac3ed18-node-certs" (OuterVolumeSpecName: "node-certs") pod "27e14b8e-f054-4cc5-a94f-78053ac3ed18" (UID: "27e14b8e-f054-4cc5-a94f-78053ac3ed18"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:39:07.133884 kubelet[2711]: I0508 00:39:07.133843 2711 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e14b8e-f054-4cc5-a94f-78053ac3ed18" containerName="calico-node" May 8 00:39:07.134071 kubelet[2711]: I0508 00:39:07.133943 2711 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b4cdff-0602-4ca5-839f-31fe2409aace" containerName="calico-typha" May 8 00:39:07.134071 kubelet[2711]: I0508 00:39:07.133954 2711 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e14b8e-f054-4cc5-a94f-78053ac3ed18" containerName="calico-node" May 8 00:39:07.134071 kubelet[2711]: I0508 00:39:07.133960 2711 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e14b8e-f054-4cc5-a94f-78053ac3ed18" containerName="calico-node" May 8 00:39:07.134071 kubelet[2711]: I0508 00:39:07.133964 2711 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e14b8e-f054-4cc5-a94f-78053ac3ed18" containerName="calico-node" May 8 00:39:07.136005 kubelet[2711]: I0508 00:39:07.135742 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27e14b8e-f054-4cc5-a94f-78053ac3ed18-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "27e14b8e-f054-4cc5-a94f-78053ac3ed18" (UID: "27e14b8e-f054-4cc5-a94f-78053ac3ed18"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:39:07.140743 systemd[1]: Created slice kubepods-besteffort-pod91b620c9_3d6f_4977_bcb5_5b8705512c48.slice - libcontainer container kubepods-besteffort-pod91b620c9_3d6f_4977_bcb5_5b8705512c48.slice. 
May 8 00:39:07.225440 kubelet[2711]: I0508 00:39:07.225272 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/91b620c9-3d6f-4977-bcb5-5b8705512c48-policysync\") pod \"calico-node-8lzpq\" (UID: \"91b620c9-3d6f-4977-bcb5-5b8705512c48\") " pod="calico-system/calico-node-8lzpq" May 8 00:39:07.225440 kubelet[2711]: I0508 00:39:07.225312 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91b620c9-3d6f-4977-bcb5-5b8705512c48-lib-modules\") pod \"calico-node-8lzpq\" (UID: \"91b620c9-3d6f-4977-bcb5-5b8705512c48\") " pod="calico-system/calico-node-8lzpq" May 8 00:39:07.225440 kubelet[2711]: I0508 00:39:07.225330 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/91b620c9-3d6f-4977-bcb5-5b8705512c48-node-certs\") pod \"calico-node-8lzpq\" (UID: \"91b620c9-3d6f-4977-bcb5-5b8705512c48\") " pod="calico-system/calico-node-8lzpq" May 8 00:39:07.225440 kubelet[2711]: I0508 00:39:07.225347 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/91b620c9-3d6f-4977-bcb5-5b8705512c48-cni-bin-dir\") pod \"calico-node-8lzpq\" (UID: \"91b620c9-3d6f-4977-bcb5-5b8705512c48\") " pod="calico-system/calico-node-8lzpq" May 8 00:39:07.225440 kubelet[2711]: I0508 00:39:07.225364 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/91b620c9-3d6f-4977-bcb5-5b8705512c48-cni-net-dir\") pod \"calico-node-8lzpq\" (UID: \"91b620c9-3d6f-4977-bcb5-5b8705512c48\") " pod="calico-system/calico-node-8lzpq" May 8 00:39:07.225662 kubelet[2711]: I0508 00:39:07.225381 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/91b620c9-3d6f-4977-bcb5-5b8705512c48-var-run-calico\") pod \"calico-node-8lzpq\" (UID: \"91b620c9-3d6f-4977-bcb5-5b8705512c48\") " pod="calico-system/calico-node-8lzpq" May 8 00:39:07.225662 kubelet[2711]: I0508 00:39:07.225395 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/91b620c9-3d6f-4977-bcb5-5b8705512c48-var-lib-calico\") pod \"calico-node-8lzpq\" (UID: \"91b620c9-3d6f-4977-bcb5-5b8705512c48\") " pod="calico-system/calico-node-8lzpq" May 8 00:39:07.225662 kubelet[2711]: I0508 00:39:07.225431 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/91b620c9-3d6f-4977-bcb5-5b8705512c48-flexvol-driver-host\") pod \"calico-node-8lzpq\" (UID: \"91b620c9-3d6f-4977-bcb5-5b8705512c48\") " pod="calico-system/calico-node-8lzpq" May 8 00:39:07.225662 kubelet[2711]: I0508 00:39:07.225448 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91b620c9-3d6f-4977-bcb5-5b8705512c48-tigera-ca-bundle\") pod \"calico-node-8lzpq\" (UID: \"91b620c9-3d6f-4977-bcb5-5b8705512c48\") " pod="calico-system/calico-node-8lzpq" May 8 00:39:07.225662 kubelet[2711]: I0508 00:39:07.225471 2711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/91b620c9-3d6f-4977-bcb5-5b8705512c48-cni-log-dir\") pod \"calico-node-8lzpq\" (UID: \"91b620c9-3d6f-4977-bcb5-5b8705512c48\") " pod="calico-system/calico-node-8lzpq" May 8 00:39:07.225753 kubelet[2711]: I0508 00:39:07.225492 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctnpb\" (UniqueName: \"kubernetes.io/projected/91b620c9-3d6f-4977-bcb5-5b8705512c48-kube-api-access-ctnpb\") pod \"calico-node-8lzpq\" (UID: \"91b620c9-3d6f-4977-bcb5-5b8705512c48\") " pod="calico-system/calico-node-8lzpq" May 8 00:39:07.225753 kubelet[2711]: I0508 00:39:07.225532 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91b620c9-3d6f-4977-bcb5-5b8705512c48-xtables-lock\") pod \"calico-node-8lzpq\" (UID: \"91b620c9-3d6f-4977-bcb5-5b8705512c48\") " pod="calico-system/calico-node-8lzpq" May 8 00:39:07.225753 kubelet[2711]: I0508 00:39:07.225577 2711 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27e14b8e-f054-4cc5-a94f-78053ac3ed18-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 8 00:39:07.225753 kubelet[2711]: I0508 00:39:07.225595 2711 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/27e14b8e-f054-4cc5-a94f-78053ac3ed18-node-certs\") on node \"localhost\" DevicePath \"\"" May 8 00:39:07.225753 kubelet[2711]: I0508 00:39:07.225603 2711 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-policysync\") on node \"localhost\" DevicePath \"\"" May 8 00:39:07.225753 kubelet[2711]: I0508 00:39:07.225618 2711 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" May 8 00:39:07.225753 kubelet[2711]: I0508 00:39:07.225626 2711 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-cni-net-dir\") on node \"localhost\" DevicePath \"\"" May 8 00:39:07.225881 kubelet[2711]: I0508 00:39:07.225634 2711 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-lib-modules\") on node \"localhost\" DevicePath \"\"" May 8 00:39:07.225881 kubelet[2711]: I0508 00:39:07.225642 2711 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-var-lib-calico\") on node \"localhost\" DevicePath \"\"" May 8 00:39:07.225881 kubelet[2711]: I0508 00:39:07.225650 2711 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-var-run-calico\") on node \"localhost\" DevicePath \"\"" May 8 00:39:07.225881 kubelet[2711]: I0508 00:39:07.225657 2711 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 8 00:39:07.225881 kubelet[2711]: I0508 00:39:07.225664 2711 reconciler_common.go:288] 
"Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-cni-log-dir\") on node \"localhost\" DevicePath \"\"" May 8 00:39:07.225881 kubelet[2711]: I0508 00:39:07.225671 2711 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dvlxp\" (UniqueName: \"kubernetes.io/projected/27e14b8e-f054-4cc5-a94f-78053ac3ed18-kube-api-access-dvlxp\") on node \"localhost\" DevicePath \"\"" May 8 00:39:07.225881 kubelet[2711]: I0508 00:39:07.225679 2711 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/27e14b8e-f054-4cc5-a94f-78053ac3ed18-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" May 8 00:39:07.277329 containerd[1549]: time="2025-05-08T00:39:07.276805766Z" level=info msg="StopPodSandbox for \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\"" May 8 00:39:07.277676 containerd[1549]: time="2025-05-08T00:39:07.277457843Z" level=info msg="StopPodSandbox for \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\"" May 8 00:39:07.300256 containerd[1549]: time="2025-05-08T00:39:07.300116615Z" level=error msg="StopPodSandbox for \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\" failed" error="failed to destroy network for sandbox \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:07.300527 kubelet[2711]: E0508 00:39:07.300412 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" May 8 00:39:07.300527 kubelet[2711]: E0508 00:39:07.300452 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f"} May 8 00:39:07.300527 kubelet[2711]: E0508 00:39:07.300474 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06105806-56a1-4100-9953-11ff7427bd13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:39:07.300527 kubelet[2711]: E0508 00:39:07.300489 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06105806-56a1-4100-9953-11ff7427bd13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gvqtb" podUID="06105806-56a1-4100-9953-11ff7427bd13" May 8 
00:39:07.306107 containerd[1549]: time="2025-05-08T00:39:07.306036723Z" level=error msg="StopPodSandbox for \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\" failed" error="failed to destroy network for sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:07.306538 kubelet[2711]: E0508 00:39:07.306279 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" May 8 00:39:07.306538 kubelet[2711]: E0508 00:39:07.306324 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b"} May 8 00:39:07.306538 kubelet[2711]: E0508 00:39:07.306345 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9353a2d8-021f-4963-930a-ab008f3fd909\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:39:07.306538 kubelet[2711]: E0508 00:39:07.306359 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9353a2d8-021f-4963-930a-ab008f3fd909\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d465466c4-h88jf" podUID="9353a2d8-021f-4963-930a-ab008f3fd909" May 8 00:39:07.443990 containerd[1549]: time="2025-05-08T00:39:07.443760301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8lzpq,Uid:91b620c9-3d6f-4977-bcb5-5b8705512c48,Namespace:calico-system,Attempt:0,}" May 8 00:39:07.462189 systemd[1]: var-lib-kubelet-pods-27e14b8e\x2df054\x2d4cc5\x2da94f\x2d78053ac3ed18-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. May 8 00:39:07.462276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161-rootfs.mount: Deactivated successfully. May 8 00:39:07.462332 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161-shm.mount: Deactivated successfully. May 8 00:39:07.462392 systemd[1]: var-lib-kubelet-pods-27e14b8e\x2df054\x2d4cc5\x2da94f\x2d78053ac3ed18-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddvlxp.mount: Deactivated successfully. 
May 8 00:39:07.462450 systemd[1]: var-lib-kubelet-pods-27e14b8e\x2df054\x2d4cc5\x2da94f\x2d78053ac3ed18-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. May 8 00:39:07.496704 containerd[1549]: time="2025-05-08T00:39:07.496339953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:07.496704 containerd[1549]: time="2025-05-08T00:39:07.496380898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:07.496704 containerd[1549]: time="2025-05-08T00:39:07.496404826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:07.497141 containerd[1549]: time="2025-05-08T00:39:07.497080581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:07.514012 systemd[1]: Started cri-containerd-d2838e32d6eda8ccd3682e6de0c2ebfd4688eb0a9031e9a6f6231c81ad846ab8.scope - libcontainer container d2838e32d6eda8ccd3682e6de0c2ebfd4688eb0a9031e9a6f6231c81ad846ab8. May 8 00:39:07.527684 containerd[1549]: time="2025-05-08T00:39:07.527449804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8lzpq,Uid:91b620c9-3d6f-4977-bcb5-5b8705512c48,Namespace:calico-system,Attempt:0,} returns sandbox id \"d2838e32d6eda8ccd3682e6de0c2ebfd4688eb0a9031e9a6f6231c81ad846ab8\"" May 8 00:39:07.529812 containerd[1549]: time="2025-05-08T00:39:07.529763854Z" level=info msg="CreateContainer within sandbox \"d2838e32d6eda8ccd3682e6de0c2ebfd4688eb0a9031e9a6f6231c81ad846ab8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:39:07.625313 containerd[1549]: time="2025-05-08T00:39:07.625284770Z" level=info msg="CreateContainer within sandbox \"d2838e32d6eda8ccd3682e6de0c2ebfd4688eb0a9031e9a6f6231c81ad846ab8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"251d5d16be12a427866f2cc90dcf4594da7d519b1127f2e34131297666806b47\"" May 8 00:39:07.625869 containerd[1549]: time="2025-05-08T00:39:07.625850528Z" level=info msg="StartContainer for \"251d5d16be12a427866f2cc90dcf4594da7d519b1127f2e34131297666806b47\"" May 8 00:39:07.650084 systemd[1]: Started cri-containerd-251d5d16be12a427866f2cc90dcf4594da7d519b1127f2e34131297666806b47.scope - libcontainer container 251d5d16be12a427866f2cc90dcf4594da7d519b1127f2e34131297666806b47. May 8 00:39:07.681179 containerd[1549]: time="2025-05-08T00:39:07.681118684Z" level=info msg="StartContainer for \"251d5d16be12a427866f2cc90dcf4594da7d519b1127f2e34131297666806b47\" returns successfully" May 8 00:39:07.717366 systemd[1]: Created slice kubepods-besteffort-pod1139ac9d_e146_4f3e_bfe8_fbf238abc0d3.slice - libcontainer container kubepods-besteffort-pod1139ac9d_e146_4f3e_bfe8_fbf238abc0d3.slice. 
May 8 00:39:07.760509 kubelet[2711]: I0508 00:39:07.760380 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1139ac9d-e146-4f3e-bfe8-fbf238abc0d3-tigera-ca-bundle\") pod \"calico-typha-744cbc8fd8-jcgw9\" (UID: \"1139ac9d-e146-4f3e-bfe8-fbf238abc0d3\") " pod="calico-system/calico-typha-744cbc8fd8-jcgw9" May 8 00:39:07.760509 kubelet[2711]: I0508 00:39:07.760429 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1139ac9d-e146-4f3e-bfe8-fbf238abc0d3-typha-certs\") pod \"calico-typha-744cbc8fd8-jcgw9\" (UID: \"1139ac9d-e146-4f3e-bfe8-fbf238abc0d3\") " pod="calico-system/calico-typha-744cbc8fd8-jcgw9" May 8 00:39:07.760509 kubelet[2711]: I0508 00:39:07.760446 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46tcm\" (UniqueName: \"kubernetes.io/projected/1139ac9d-e146-4f3e-bfe8-fbf238abc0d3-kube-api-access-46tcm\") pod \"calico-typha-744cbc8fd8-jcgw9\" (UID: \"1139ac9d-e146-4f3e-bfe8-fbf238abc0d3\") " pod="calico-system/calico-typha-744cbc8fd8-jcgw9" May 8 00:39:07.980153 kubelet[2711]: I0508 00:39:07.980137 2711 scope.go:117] "RemoveContainer" containerID="35ac4b797a95bb8799e1a064f17708655a91c2182494a1e61906c57d01351169" May 8 00:39:07.983662 systemd[1]: Removed slice kubepods-besteffort-pod27e14b8e_f054_4cc5_a94f_78053ac3ed18.slice - libcontainer container kubepods-besteffort-pod27e14b8e_f054_4cc5_a94f_78053ac3ed18.slice. May 8 00:39:08.003996 containerd[1549]: time="2025-05-08T00:39:08.003904767Z" level=info msg="RemoveContainer for \"35ac4b797a95bb8799e1a064f17708655a91c2182494a1e61906c57d01351169\"" May 8 00:39:08.017975 containerd[1549]: time="2025-05-08T00:39:08.017880063Z" level=info msg="RemoveContainer for \"35ac4b797a95bb8799e1a064f17708655a91c2182494a1e61906c57d01351169\" returns successfully" May 8 00:39:08.018469 kubelet[2711]: I0508 00:39:08.018054 2711 scope.go:117] "RemoveContainer" containerID="25a4772e25e9c721fb241d037fbc1b5e27b80b776826ee484cb7aa05c14f98e3" May 8 00:39:08.018689 containerd[1549]: time="2025-05-08T00:39:08.018674116Z" level=info msg="RemoveContainer for \"25a4772e25e9c721fb241d037fbc1b5e27b80b776826ee484cb7aa05c14f98e3\"" May 8 00:39:08.020091 containerd[1549]: time="2025-05-08T00:39:08.020073771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-744cbc8fd8-jcgw9,Uid:1139ac9d-e146-4f3e-bfe8-fbf238abc0d3,Namespace:calico-system,Attempt:0,}" May 8 00:39:08.042995 containerd[1549]: time="2025-05-08T00:39:08.042971688Z" level=info msg="RemoveContainer for \"25a4772e25e9c721fb241d037fbc1b5e27b80b776826ee484cb7aa05c14f98e3\" returns successfully" May 8 00:39:08.043265 kubelet[2711]: I0508 00:39:08.043244 2711 scope.go:117] "RemoveContainer" containerID="39fa1a0504363ce400fd3cb2db77ba0f1e20ebc2db99b8311e6cf07321e6eaaa" May 8 00:39:08.043840 containerd[1549]: time="2025-05-08T00:39:08.043790933Z" level=info msg="RemoveContainer for \"39fa1a0504363ce400fd3cb2db77ba0f1e20ebc2db99b8311e6cf07321e6eaaa\"" May 8 00:39:08.092862 containerd[1549]: time="2025-05-08T00:39:08.092832511Z" level=info msg="RemoveContainer for \"39fa1a0504363ce400fd3cb2db77ba0f1e20ebc2db99b8311e6cf07321e6eaaa\" returns successfully" May 8 00:39:08.105554 containerd[1549]: time="2025-05-08T00:39:08.104733356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:08.105554 containerd[1549]: time="2025-05-08T00:39:08.105516171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:08.105554 containerd[1549]: time="2025-05-08T00:39:08.105525487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:08.106400 containerd[1549]: time="2025-05-08T00:39:08.105580186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:08.122010 systemd[1]: Started cri-containerd-fe547f1437342d6802a43bd4b1b0ff1e20aaa552dd13f076a33d030aded4d8a4.scope - libcontainer container fe547f1437342d6802a43bd4b1b0ff1e20aaa552dd13f076a33d030aded4d8a4. May 8 00:39:08.159008 containerd[1549]: time="2025-05-08T00:39:08.158983905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-744cbc8fd8-jcgw9,Uid:1139ac9d-e146-4f3e-bfe8-fbf238abc0d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"fe547f1437342d6802a43bd4b1b0ff1e20aaa552dd13f076a33d030aded4d8a4\"" May 8 00:39:08.171373 containerd[1549]: time="2025-05-08T00:39:08.171348831Z" level=info msg="CreateContainer within sandbox \"fe547f1437342d6802a43bd4b1b0ff1e20aaa552dd13f076a33d030aded4d8a4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 8 00:39:08.277360 kubelet[2711]: I0508 00:39:08.277299 2711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27e14b8e-f054-4cc5-a94f-78053ac3ed18" path="/var/lib/kubelet/pods/27e14b8e-f054-4cc5-a94f-78053ac3ed18/volumes" May 8 00:39:08.278241 kubelet[2711]: I0508 00:39:08.278221 2711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b4cdff-0602-4ca5-839f-31fe2409aace" path="/var/lib/kubelet/pods/d4b4cdff-0602-4ca5-839f-31fe2409aace/volumes" May 8 00:39:08.594043 systemd[1]: cri-containerd-251d5d16be12a427866f2cc90dcf4594da7d519b1127f2e34131297666806b47.scope: Deactivated successfully. May 8 00:39:08.607305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-251d5d16be12a427866f2cc90dcf4594da7d519b1127f2e34131297666806b47-rootfs.mount: Deactivated successfully. May 8 00:39:08.982312 containerd[1549]: time="2025-05-08T00:39:08.982253684Z" level=info msg="CreateContainer within sandbox \"fe547f1437342d6802a43bd4b1b0ff1e20aaa552dd13f076a33d030aded4d8a4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8fda162989c050978f78439b6537181aef005011b4ac1d4d61e211d9d262dddf\"" May 8 00:39:09.004362 containerd[1549]: time="2025-05-08T00:39:08.982874749Z" level=info msg="StartContainer for \"8fda162989c050978f78439b6537181aef005011b4ac1d4d61e211d9d262dddf\"" May 8 00:39:09.004394 systemd[1]: Started cri-containerd-8fda162989c050978f78439b6537181aef005011b4ac1d4d61e211d9d262dddf.scope - libcontainer container 8fda162989c050978f78439b6537181aef005011b4ac1d4d61e211d9d262dddf. 
May 8 00:39:09.061360 containerd[1549]: time="2025-05-08T00:39:09.061296437Z" level=info msg="shim disconnected" id=251d5d16be12a427866f2cc90dcf4594da7d519b1127f2e34131297666806b47 namespace=k8s.io May 8 00:39:09.061360 containerd[1549]: time="2025-05-08T00:39:09.061329132Z" level=warning msg="cleaning up after shim disconnected" id=251d5d16be12a427866f2cc90dcf4594da7d519b1127f2e34131297666806b47 namespace=k8s.io May 8 00:39:09.061360 containerd[1549]: time="2025-05-08T00:39:09.061334517Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:39:09.068224 containerd[1549]: time="2025-05-08T00:39:09.068201658Z" level=info msg="StartContainer for \"8fda162989c050978f78439b6537181aef005011b4ac1d4d61e211d9d262dddf\" returns successfully" May 8 00:39:09.276722 containerd[1549]: time="2025-05-08T00:39:09.276641408Z" level=info msg="StopPodSandbox for \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\"" May 8 00:39:09.299143 containerd[1549]: time="2025-05-08T00:39:09.299023029Z" level=error msg="StopPodSandbox for \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\" failed" error="failed to destroy network for sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:39:09.299253 kubelet[2711]: E0508 00:39:09.299196 2711 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:39:09.299253 kubelet[2711]: E0508 00:39:09.299230 2711 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6"} May 8 00:39:09.299253 kubelet[2711]: E0508 00:39:09.299252 2711 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ec7b516-7af4-4ea9-8c59-9667bc29c59e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:39:09.299563 kubelet[2711]: E0508 00:39:09.299266 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ec7b516-7af4-4ea9-8c59-9667bc29c59e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-577f55d98d-45j6l" podUID="0ec7b516-7af4-4ea9-8c59-9667bc29c59e" May 8 00:39:09.990143 containerd[1549]: time="2025-05-08T00:39:09.990107237Z" level=info msg="CreateContainer within sandbox 
\"d2838e32d6eda8ccd3682e6de0c2ebfd4688eb0a9031e9a6f6231c81ad846ab8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:39:10.002893 containerd[1549]: time="2025-05-08T00:39:10.002814825Z" level=info msg="CreateContainer within sandbox \"d2838e32d6eda8ccd3682e6de0c2ebfd4688eb0a9031e9a6f6231c81ad846ab8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"92db356b63d3e5771bdeddb090013ae0228212fd0a1a2768f67e440eb825bd79\"" May 8 00:39:10.006230 containerd[1549]: time="2025-05-08T00:39:10.006172219Z" level=info msg="StartContainer for \"92db356b63d3e5771bdeddb090013ae0228212fd0a1a2768f67e440eb825bd79\"" May 8 00:39:10.039042 systemd[1]: Started cri-containerd-92db356b63d3e5771bdeddb090013ae0228212fd0a1a2768f67e440eb825bd79.scope - libcontainer container 92db356b63d3e5771bdeddb090013ae0228212fd0a1a2768f67e440eb825bd79. May 8 00:39:10.066498 containerd[1549]: time="2025-05-08T00:39:10.066376202Z" level=info msg="StartContainer for \"92db356b63d3e5771bdeddb090013ae0228212fd0a1a2768f67e440eb825bd79\" returns successfully" May 8 00:39:10.075285 systemd[1]: Started sshd@8-139.178.70.103:22-139.178.68.195:60904.service - OpenSSH per-connection server daemon (139.178.68.195:60904). May 8 00:39:10.243507 sshd[4916]: Accepted publickey for core from 139.178.68.195 port 60904 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:39:10.249694 sshd[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:10.257880 systemd-logind[1523]: New session 10 of user core. May 8 00:39:10.261990 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:39:11.065160 kubelet[2711]: I0508 00:39:11.023211 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-744cbc8fd8-jcgw9" podStartSLOduration=5.012355409 podStartE2EDuration="5.012355409s" podCreationTimestamp="2025-05-08 00:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:10.025004093 +0000 UTC m=+77.987912182" watchObservedRunningTime="2025-05-08 00:39:11.012355409 +0000 UTC m=+78.975263509" May 8 00:39:11.290135 sshd[4916]: pam_unix(sshd:session): session closed for user core May 8 00:39:11.291941 systemd-logind[1523]: Session 10 logged out. Waiting for processes to exit. May 8 00:39:11.292462 systemd[1]: sshd@8-139.178.70.103:22-139.178.68.195:60904.service: Deactivated successfully. May 8 00:39:11.294535 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:39:11.295497 systemd-logind[1523]: Removed session 10. May 8 00:39:12.569107 systemd[1]: cri-containerd-92db356b63d3e5771bdeddb090013ae0228212fd0a1a2768f67e440eb825bd79.scope: Deactivated successfully. May 8 00:39:12.584934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92db356b63d3e5771bdeddb090013ae0228212fd0a1a2768f67e440eb825bd79-rootfs.mount: Deactivated successfully. 
May 8 00:39:12.594250 containerd[1549]: time="2025-05-08T00:39:12.594207027Z" level=info msg="shim disconnected" id=92db356b63d3e5771bdeddb090013ae0228212fd0a1a2768f67e440eb825bd79 namespace=k8s.io May 8 00:39:12.594250 containerd[1549]: time="2025-05-08T00:39:12.594249327Z" level=warning msg="cleaning up after shim disconnected" id=92db356b63d3e5771bdeddb090013ae0228212fd0a1a2768f67e440eb825bd79 namespace=k8s.io May 8 00:39:12.594484 containerd[1549]: time="2025-05-08T00:39:12.594255938Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:39:13.022515 containerd[1549]: time="2025-05-08T00:39:13.022489754Z" level=info msg="CreateContainer within sandbox \"d2838e32d6eda8ccd3682e6de0c2ebfd4688eb0a9031e9a6f6231c81ad846ab8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:39:13.036600 containerd[1549]: time="2025-05-08T00:39:13.036518267Z" level=info msg="CreateContainer within sandbox \"d2838e32d6eda8ccd3682e6de0c2ebfd4688eb0a9031e9a6f6231c81ad846ab8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f7869ca330ae985b82fe7b1e8acff0ff1119aab85a577e4473771dfe24721e6e\"" May 8 00:39:13.037086 containerd[1549]: time="2025-05-08T00:39:13.037069388Z" level=info msg="StartContainer for \"f7869ca330ae985b82fe7b1e8acff0ff1119aab85a577e4473771dfe24721e6e\"" May 8 00:39:13.073015 systemd[1]: Started cri-containerd-f7869ca330ae985b82fe7b1e8acff0ff1119aab85a577e4473771dfe24721e6e.scope - libcontainer container f7869ca330ae985b82fe7b1e8acff0ff1119aab85a577e4473771dfe24721e6e. May 8 00:39:13.095100 containerd[1549]: time="2025-05-08T00:39:13.095078873Z" level=info msg="StartContainer for \"f7869ca330ae985b82fe7b1e8acff0ff1119aab85a577e4473771dfe24721e6e\" returns successfully" May 8 00:39:14.277488 containerd[1549]: time="2025-05-08T00:39:14.277414437Z" level=info msg="StopPodSandbox for \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\"" May 8 00:39:14.347254 kubelet[2711]: I0508 00:39:14.347187 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8lzpq" podStartSLOduration=7.347172023 podStartE2EDuration="7.347172023s" podCreationTimestamp="2025-05-08 00:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:14.010860637 +0000 UTC m=+81.973768734" watchObservedRunningTime="2025-05-08 00:39:14.347172023 +0000 UTC m=+82.310080120" May 8 00:39:14.371460 containerd[1549]: 2025-05-08 00:39:14.346 [INFO][5077] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" May 8 00:39:14.371460 containerd[1549]: 2025-05-08 00:39:14.347 [INFO][5077] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" iface="eth0" netns="/var/run/netns/cni-2d07d79e-355a-429e-a1ed-2a3d8364168b" May 8 00:39:14.371460 containerd[1549]: 2025-05-08 00:39:14.347 [INFO][5077] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" iface="eth0" netns="/var/run/netns/cni-2d07d79e-355a-429e-a1ed-2a3d8364168b" May 8 00:39:14.371460 containerd[1549]: 2025-05-08 00:39:14.347 [INFO][5077] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" iface="eth0" netns="/var/run/netns/cni-2d07d79e-355a-429e-a1ed-2a3d8364168b" May 8 00:39:14.371460 containerd[1549]: 2025-05-08 00:39:14.347 [INFO][5077] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" May 8 00:39:14.371460 containerd[1549]: 2025-05-08 00:39:14.347 [INFO][5077] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" May 8 00:39:14.371460 containerd[1549]: 2025-05-08 00:39:14.364 [INFO][5084] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" HandleID="k8s-pod-network.be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" Workload="localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0" May 8 00:39:14.371460 containerd[1549]: 2025-05-08 00:39:14.364 [INFO][5084] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:14.371460 containerd[1549]: 2025-05-08 00:39:14.364 [INFO][5084] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:14.371460 containerd[1549]: 2025-05-08 00:39:14.367 [WARNING][5084] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" HandleID="k8s-pod-network.be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" Workload="localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0" May 8 00:39:14.371460 containerd[1549]: 2025-05-08 00:39:14.367 [INFO][5084] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" HandleID="k8s-pod-network.be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" Workload="localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0" May 8 00:39:14.371460 containerd[1549]: 2025-05-08 00:39:14.368 [INFO][5084] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:14.371460 containerd[1549]: 2025-05-08 00:39:14.369 [INFO][5077] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" May 8 00:39:14.378188 containerd[1549]: time="2025-05-08T00:39:14.372947488Z" level=info msg="TearDown network for sandbox \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\" successfully" May 8 00:39:14.378188 containerd[1549]: time="2025-05-08T00:39:14.372964412Z" level=info msg="StopPodSandbox for \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\" returns successfully" May 8 00:39:14.378188 containerd[1549]: time="2025-05-08T00:39:14.373429537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bfnqq,Uid:882fb80e-90cb-449a-87c0-4bbb4fd3e432,Namespace:kube-system,Attempt:1,}" May 8 00:39:14.373108 systemd[1]: run-netns-cni\x2d2d07d79e\x2d355a\x2d429e\x2da1ed\x2d2a3d8364168b.mount: Deactivated successfully. 
May 8 00:39:14.572207 systemd-networkd[1452]: calid52169a72ae: Link UP May 8 00:39:14.572349 systemd-networkd[1452]: calid52169a72ae: Gained carrier May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.473 [INFO][5092] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.478 [INFO][5092] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0 coredns-6f6b679f8f- kube-system 882fb80e-90cb-449a-87c0-4bbb4fd3e432 1053 0 2025-05-08 00:37:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-bfnqq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid52169a72ae [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef" Namespace="kube-system" Pod="coredns-6f6b679f8f-bfnqq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bfnqq-" May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.478 [INFO][5092] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef" Namespace="kube-system" Pod="coredns-6f6b679f8f-bfnqq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0" May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.505 [INFO][5103] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef" HandleID="k8s-pod-network.a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef" Workload="localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0" May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.509 [INFO][5103] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef" HandleID="k8s-pod-network.a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef" Workload="localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002919a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-bfnqq", "timestamp":"2025-05-08 00:39:14.505013075 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.509 [INFO][5103] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.509 [INFO][5103] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.509 [INFO][5103] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.510 [INFO][5103] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef" host="localhost" May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.512 [INFO][5103] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.514 [INFO][5103] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.515 [INFO][5103] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.516 [INFO][5103] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.516 [INFO][5103] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef" host="localhost" May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.517 [INFO][5103] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.526 [INFO][5103] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef" host="localhost" May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.534 [INFO][5103] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef" host="localhost" May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.534 [INFO][5103] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef" host="localhost" May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.534 [INFO][5103] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:39:14.591748 containerd[1549]: 2025-05-08 00:39:14.534 [INFO][5103] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef" HandleID="k8s-pod-network.a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef" Workload="localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0" May 8 00:39:14.592413 containerd[1549]: 2025-05-08 00:39:14.535 [INFO][5092] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef" Namespace="kube-system" Pod="coredns-6f6b679f8f-bfnqq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"882fb80e-90cb-449a-87c0-4bbb4fd3e432", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-bfnqq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid52169a72ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:14.592413 containerd[1549]: 2025-05-08 00:39:14.536 [INFO][5092] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef" Namespace="kube-system" Pod="coredns-6f6b679f8f-bfnqq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0" May 8 00:39:14.592413 containerd[1549]: 2025-05-08 00:39:14.536 [INFO][5092] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid52169a72ae ContainerID="a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef" Namespace="kube-system" Pod="coredns-6f6b679f8f-bfnqq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0" May 8 00:39:14.592413 containerd[1549]: 2025-05-08 00:39:14.559 [INFO][5092] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef" Namespace="kube-system" Pod="coredns-6f6b679f8f-bfnqq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0" May 8 00:39:14.592413 containerd[1549]: 2025-05-08 00:39:14.559 
[INFO][5092] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef" Namespace="kube-system" Pod="coredns-6f6b679f8f-bfnqq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"882fb80e-90cb-449a-87c0-4bbb4fd3e432", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef", Pod:"coredns-6f6b679f8f-bfnqq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid52169a72ae", MAC:"06:d1:b8:61:91:0f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:14.592413 containerd[1549]: 2025-05-08 00:39:14.589 [INFO][5092] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef" Namespace="kube-system" Pod="coredns-6f6b679f8f-bfnqq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0" May 8 00:39:14.616150 containerd[1549]: time="2025-05-08T00:39:14.614894879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:14.616150 containerd[1549]: time="2025-05-08T00:39:14.615945137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:14.616150 containerd[1549]: time="2025-05-08T00:39:14.615960106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:14.616150 containerd[1549]: time="2025-05-08T00:39:14.616044633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:14.640329 systemd[1]: Started cri-containerd-a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef.scope - libcontainer container a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef. 
May 8 00:39:14.649549 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:39:14.682293 containerd[1549]: time="2025-05-08T00:39:14.682269206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bfnqq,Uid:882fb80e-90cb-449a-87c0-4bbb4fd3e432,Namespace:kube-system,Attempt:1,} returns sandbox id \"a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef\"" May 8 00:39:14.686785 containerd[1549]: time="2025-05-08T00:39:14.686727337Z" level=info msg="CreateContainer within sandbox \"a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:39:14.862356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2482071817.mount: Deactivated successfully. May 8 00:39:14.865033 containerd[1549]: time="2025-05-08T00:39:14.865006421Z" level=info msg="CreateContainer within sandbox \"a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cac8d7ec0d683e03d2f4f45bbea5ce95c5bdadda32700bfb48226e964f4e775c\"" May 8 00:39:14.867027 containerd[1549]: time="2025-05-08T00:39:14.866190481Z" level=info msg="StartContainer for \"cac8d7ec0d683e03d2f4f45bbea5ce95c5bdadda32700bfb48226e964f4e775c\"" May 8 00:39:14.895125 systemd[1]: Started cri-containerd-cac8d7ec0d683e03d2f4f45bbea5ce95c5bdadda32700bfb48226e964f4e775c.scope - libcontainer container cac8d7ec0d683e03d2f4f45bbea5ce95c5bdadda32700bfb48226e964f4e775c. May 8 00:39:14.929467 containerd[1549]: time="2025-05-08T00:39:14.929400683Z" level=info msg="StartContainer for \"cac8d7ec0d683e03d2f4f45bbea5ce95c5bdadda32700bfb48226e964f4e775c\" returns successfully" May 8 00:39:14.978928 kernel: bpftool[5325]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 8 00:39:15.018448 kubelet[2711]: I0508 00:39:15.017303 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bfnqq" podStartSLOduration=78.017290427 podStartE2EDuration="1m18.017290427s" podCreationTimestamp="2025-05-08 00:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:15.017222598 +0000 UTC m=+82.980130694" watchObservedRunningTime="2025-05-08 00:39:15.017290427 +0000 UTC m=+82.980198519" May 8 00:39:15.193564 systemd-networkd[1452]: vxlan.calico: Link UP May 8 00:39:15.193722 systemd-networkd[1452]: vxlan.calico: Gained carrier May 8 00:39:15.277125 containerd[1549]: time="2025-05-08T00:39:15.277102969Z" level=info msg="StopPodSandbox for \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\"" May 8 00:39:15.375212 containerd[1549]: 2025-05-08 00:39:15.349 [INFO][5416] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" May 8 00:39:15.375212 containerd[1549]: 2025-05-08 00:39:15.350 [INFO][5416] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" iface="eth0" netns="/var/run/netns/cni-00c8658e-8072-b474-c754-03c0af294eb8" May 8 00:39:15.375212 containerd[1549]: 2025-05-08 00:39:15.351 [INFO][5416] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" iface="eth0" netns="/var/run/netns/cni-00c8658e-8072-b474-c754-03c0af294eb8" May 8 00:39:15.375212 containerd[1549]: 2025-05-08 00:39:15.351 [INFO][5416] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" iface="eth0" netns="/var/run/netns/cni-00c8658e-8072-b474-c754-03c0af294eb8" May 8 00:39:15.375212 containerd[1549]: 2025-05-08 00:39:15.351 [INFO][5416] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" May 8 00:39:15.375212 containerd[1549]: 2025-05-08 00:39:15.351 [INFO][5416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" May 8 00:39:15.375212 containerd[1549]: 2025-05-08 00:39:15.367 [INFO][5423] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" HandleID="k8s-pod-network.ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" Workload="localhost-k8s-coredns--6f6b679f8f--gljcw-eth0" May 8 00:39:15.375212 containerd[1549]: 2025-05-08 00:39:15.367 [INFO][5423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:15.375212 containerd[1549]: 2025-05-08 00:39:15.367 [INFO][5423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:15.375212 containerd[1549]: 2025-05-08 00:39:15.371 [WARNING][5423] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" HandleID="k8s-pod-network.ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" Workload="localhost-k8s-coredns--6f6b679f8f--gljcw-eth0" May 8 00:39:15.375212 containerd[1549]: 2025-05-08 00:39:15.371 [INFO][5423] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" HandleID="k8s-pod-network.ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" Workload="localhost-k8s-coredns--6f6b679f8f--gljcw-eth0" May 8 00:39:15.375212 containerd[1549]: 2025-05-08 00:39:15.371 [INFO][5423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:15.375212 containerd[1549]: 2025-05-08 00:39:15.374 [INFO][5416] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" May 8 00:39:15.376796 containerd[1549]: time="2025-05-08T00:39:15.375274415Z" level=info msg="TearDown network for sandbox \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\" successfully" May 8 00:39:15.376796 containerd[1549]: time="2025-05-08T00:39:15.375290783Z" level=info msg="StopPodSandbox for \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\" returns successfully" May 8 00:39:15.376796 containerd[1549]: time="2025-05-08T00:39:15.375789080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gljcw,Uid:bcac0e1d-a484-4def-954d-e37294951ec1,Namespace:kube-system,Attempt:1,}" May 8 00:39:15.457171 systemd-networkd[1452]: caliebeadd467c2: Link UP May 8 00:39:15.457594 systemd-networkd[1452]: caliebeadd467c2: Gained carrier May 8 00:39:15.467528 containerd[1549]: 2025-05-08 00:39:15.405 [INFO][5429] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--gljcw-eth0 coredns-6f6b679f8f- kube-system bcac0e1d-a484-4def-954d-e37294951ec1 1069 0 2025-05-08 00:37:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-gljcw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliebeadd467c2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706" Namespace="kube-system" Pod="coredns-6f6b679f8f-gljcw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gljcw-" May 8 00:39:15.467528 containerd[1549]: 2025-05-08 00:39:15.406 [INFO][5429] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706" Namespace="kube-system" Pod="coredns-6f6b679f8f-gljcw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gljcw-eth0" May 8 00:39:15.467528 containerd[1549]: 2025-05-08 00:39:15.426 [INFO][5445] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706" HandleID="k8s-pod-network.b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706" Workload="localhost-k8s-coredns--6f6b679f8f--gljcw-eth0" May 8 00:39:15.467528 containerd[1549]: 2025-05-08 00:39:15.433 [INFO][5445] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706" HandleID="k8s-pod-network.b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706" Workload="localhost-k8s-coredns--6f6b679f8f--gljcw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bb330), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-gljcw", "timestamp":"2025-05-08 00:39:15.426216753 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:39:15.467528 containerd[1549]: 2025-05-08 00:39:15.433 [INFO][5445] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 8 00:39:15.467528 containerd[1549]: 2025-05-08 00:39:15.433 [INFO][5445] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:15.467528 containerd[1549]: 2025-05-08 00:39:15.433 [INFO][5445] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:39:15.467528 containerd[1549]: 2025-05-08 00:39:15.435 [INFO][5445] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706" host="localhost" May 8 00:39:15.467528 containerd[1549]: 2025-05-08 00:39:15.437 [INFO][5445] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:39:15.467528 containerd[1549]: 2025-05-08 00:39:15.441 [INFO][5445] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:39:15.467528 containerd[1549]: 2025-05-08 00:39:15.442 [INFO][5445] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:39:15.467528 containerd[1549]: 2025-05-08 00:39:15.443 [INFO][5445] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:39:15.467528 containerd[1549]: 2025-05-08 00:39:15.443 [INFO][5445] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706" host="localhost" May 8 00:39:15.467528 containerd[1549]: 2025-05-08 00:39:15.444 [INFO][5445] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706 May 8 00:39:15.467528 containerd[1549]: 2025-05-08 00:39:15.448 [INFO][5445] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706" host="localhost" May 8 00:39:15.467528 containerd[1549]: 2025-05-08 00:39:15.452 [INFO][5445] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706" host="localhost" May 8 00:39:15.467528 containerd[1549]: 2025-05-08 00:39:15.452 [INFO][5445] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706" host="localhost" May 8 00:39:15.467528 containerd[1549]: 2025-05-08 00:39:15.452 [INFO][5445] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:39:15.467528 containerd[1549]: 2025-05-08 00:39:15.452 [INFO][5445] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706" HandleID="k8s-pod-network.b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706" Workload="localhost-k8s-coredns--6f6b679f8f--gljcw-eth0" May 8 00:39:15.468978 containerd[1549]: 2025-05-08 00:39:15.455 [INFO][5429] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706" Namespace="kube-system" Pod="coredns-6f6b679f8f-gljcw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gljcw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--gljcw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bcac0e1d-a484-4def-954d-e37294951ec1", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-gljcw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebeadd467c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:15.468978 containerd[1549]: 2025-05-08 00:39:15.455 [INFO][5429] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706" Namespace="kube-system" Pod="coredns-6f6b679f8f-gljcw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gljcw-eth0" May 8 00:39:15.468978 containerd[1549]: 2025-05-08 00:39:15.455 [INFO][5429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliebeadd467c2 ContainerID="b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706" Namespace="kube-system" Pod="coredns-6f6b679f8f-gljcw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gljcw-eth0" May 8 00:39:15.468978 containerd[1549]: 2025-05-08 00:39:15.456 [INFO][5429] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706" Namespace="kube-system" Pod="coredns-6f6b679f8f-gljcw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gljcw-eth0" May 8 00:39:15.468978 containerd[1549]: 2025-05-08 00:39:15.456 
[INFO][5429] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706" Namespace="kube-system" Pod="coredns-6f6b679f8f-gljcw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gljcw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--gljcw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bcac0e1d-a484-4def-954d-e37294951ec1", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706", Pod:"coredns-6f6b679f8f-gljcw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebeadd467c2", MAC:"be:ee:2e:5f:79:ec", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:15.468978 containerd[1549]: 2025-05-08 00:39:15.464 [INFO][5429] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706" Namespace="kube-system" Pod="coredns-6f6b679f8f-gljcw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gljcw-eth0" May 8 00:39:15.490369 containerd[1549]: time="2025-05-08T00:39:15.487348982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:15.490369 containerd[1549]: time="2025-05-08T00:39:15.487400081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:15.490369 containerd[1549]: time="2025-05-08T00:39:15.487410585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:15.490369 containerd[1549]: time="2025-05-08T00:39:15.487472377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:15.504030 systemd[1]: Started cri-containerd-b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706.scope - libcontainer container b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706. 
May 8 00:39:15.512131 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:39:15.531862 containerd[1549]: time="2025-05-08T00:39:15.531834421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gljcw,Uid:bcac0e1d-a484-4def-954d-e37294951ec1,Namespace:kube-system,Attempt:1,} returns sandbox id \"b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706\"" May 8 00:39:15.535468 containerd[1549]: time="2025-05-08T00:39:15.535443990Z" level=info msg="CreateContainer within sandbox \"b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:39:15.540057 containerd[1549]: time="2025-05-08T00:39:15.540028098Z" level=info msg="CreateContainer within sandbox \"b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"781bd58b632d22fee5dfb148a85f6124b9c9a811371954554eadadae841f0985\"" May 8 00:39:15.540451 containerd[1549]: time="2025-05-08T00:39:15.540345184Z" level=info msg="StartContainer for \"781bd58b632d22fee5dfb148a85f6124b9c9a811371954554eadadae841f0985\"" May 8 00:39:15.560051 systemd[1]: Started cri-containerd-781bd58b632d22fee5dfb148a85f6124b9c9a811371954554eadadae841f0985.scope - libcontainer container 781bd58b632d22fee5dfb148a85f6124b9c9a811371954554eadadae841f0985. May 8 00:39:15.575281 containerd[1549]: time="2025-05-08T00:39:15.575215571Z" level=info msg="StartContainer for \"781bd58b632d22fee5dfb148a85f6124b9c9a811371954554eadadae841f0985\" returns successfully" May 8 00:39:15.621128 systemd[1]: run-netns-cni\x2d00c8658e\x2d8072\x2db474\x2dc754\x2d03c0af294eb8.mount: Deactivated successfully. May 8 00:39:16.019656 kubelet[2711]: I0508 00:39:16.019456 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-gljcw" podStartSLOduration=79.019442619 podStartE2EDuration="1m19.019442619s" podCreationTimestamp="2025-05-08 00:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:16.019148094 +0000 UTC m=+83.982056186" watchObservedRunningTime="2025-05-08 00:39:16.019442619 +0000 UTC m=+83.982350708" May 8 00:39:16.299571 systemd[1]: Started sshd@9-139.178.70.103:22-139.178.68.195:54020.service - OpenSSH per-connection server daemon (139.178.68.195:54020). May 8 00:39:16.383942 sshd[5578]: Accepted publickey for core from 139.178.68.195 port 54020 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:39:16.385130 sshd[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:16.388822 systemd-logind[1523]: New session 11 of user core. May 8 00:39:16.392022 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:39:16.481366 systemd-networkd[1452]: calid52169a72ae: Gained IPv6LL May 8 00:39:16.609088 systemd-networkd[1452]: caliebeadd467c2: Gained IPv6LL May 8 00:39:17.044909 sshd[5578]: pam_unix(sshd:session): session closed for user core May 8 00:39:17.046402 systemd[1]: sshd@9-139.178.70.103:22-139.178.68.195:54020.service: Deactivated successfully. May 8 00:39:17.047663 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:39:17.048497 systemd-logind[1523]: Session 11 logged out. Waiting for processes to exit. May 8 00:39:17.049139 systemd-logind[1523]: Removed session 11. 
May 8 00:39:17.121092 systemd-networkd[1452]: vxlan.calico: Gained IPv6LL May 8 00:39:18.276639 containerd[1549]: time="2025-05-08T00:39:18.276614076Z" level=info msg="StopPodSandbox for \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\"" May 8 00:39:18.444573 containerd[1549]: 2025-05-08 00:39:18.393 [INFO][5609] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" May 8 00:39:18.444573 containerd[1549]: 2025-05-08 00:39:18.393 [INFO][5609] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" iface="eth0" netns="/var/run/netns/cni-8f70750d-2b10-4960-22b9-75bfefcaaf53" May 8 00:39:18.444573 containerd[1549]: 2025-05-08 00:39:18.394 [INFO][5609] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" iface="eth0" netns="/var/run/netns/cni-8f70750d-2b10-4960-22b9-75bfefcaaf53" May 8 00:39:18.444573 containerd[1549]: 2025-05-08 00:39:18.394 [INFO][5609] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" iface="eth0" netns="/var/run/netns/cni-8f70750d-2b10-4960-22b9-75bfefcaaf53" May 8 00:39:18.444573 containerd[1549]: 2025-05-08 00:39:18.394 [INFO][5609] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" May 8 00:39:18.444573 containerd[1549]: 2025-05-08 00:39:18.394 [INFO][5609] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" May 8 00:39:18.444573 containerd[1549]: 2025-05-08 00:39:18.411 [INFO][5616] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" HandleID="k8s-pod-network.fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" Workload="localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0" May 8 00:39:18.444573 containerd[1549]: 2025-05-08 00:39:18.411 [INFO][5616] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:18.444573 containerd[1549]: 2025-05-08 00:39:18.411 [INFO][5616] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:18.444573 containerd[1549]: 2025-05-08 00:39:18.441 [WARNING][5616] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" HandleID="k8s-pod-network.fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" Workload="localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0" May 8 00:39:18.444573 containerd[1549]: 2025-05-08 00:39:18.441 [INFO][5616] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" HandleID="k8s-pod-network.fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" Workload="localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0" May 8 00:39:18.444573 containerd[1549]: 2025-05-08 00:39:18.442 [INFO][5616] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:18.444573 containerd[1549]: 2025-05-08 00:39:18.443 [INFO][5609] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" May 8 00:39:18.467685 containerd[1549]: time="2025-05-08T00:39:18.446338903Z" level=info msg="TearDown network for sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\" successfully" May 8 00:39:18.467685 containerd[1549]: time="2025-05-08T00:39:18.446356752Z" level=info msg="StopPodSandbox for \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\" returns successfully" May 8 00:39:18.467685 containerd[1549]: time="2025-05-08T00:39:18.447209307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d465466c4-h88jf,Uid:9353a2d8-021f-4963-930a-ab008f3fd909,Namespace:calico-apiserver,Attempt:1,}" May 8 00:39:18.446206 systemd[1]: run-netns-cni\x2d8f70750d\x2d2b10\x2d4960\x2d22b9\x2d75bfefcaaf53.mount: Deactivated successfully. May 8 00:39:18.691985 systemd-networkd[1452]: calie5bf0f6f32c: Link UP May 8 00:39:18.692454 systemd-networkd[1452]: calie5bf0f6f32c: Gained carrier May 8 00:39:18.710059 containerd[1549]: 2025-05-08 00:39:18.641 [INFO][5623] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0 calico-apiserver-5d465466c4- calico-apiserver 9353a2d8-021f-4963-930a-ab008f3fd909 1102 0 2025-05-08 00:38:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d465466c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d465466c4-h88jf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie5bf0f6f32c [] []}} ContainerID="3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232" Namespace="calico-apiserver" Pod="calico-apiserver-5d465466c4-h88jf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465466c4--h88jf-" May 8 00:39:18.710059 containerd[1549]: 2025-05-08 00:39:18.641 [INFO][5623] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232" Namespace="calico-apiserver" Pod="calico-apiserver-5d465466c4-h88jf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0" May 8 00:39:18.710059 containerd[1549]: 2025-05-08 00:39:18.665 [INFO][5635] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232" HandleID="k8s-pod-network.3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232" Workload="localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0" May 8 00:39:18.710059 containerd[1549]: 2025-05-08 00:39:18.670 [INFO][5635] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232" HandleID="k8s-pod-network.3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232" Workload="localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d6b70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d465466c4-h88jf", "timestamp":"2025-05-08 00:39:18.665727799 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:39:18.710059 containerd[1549]: 2025-05-08 00:39:18.670 [INFO][5635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:18.710059 containerd[1549]: 2025-05-08 00:39:18.670 [INFO][5635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:18.710059 containerd[1549]: 2025-05-08 00:39:18.670 [INFO][5635] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:39:18.710059 containerd[1549]: 2025-05-08 00:39:18.671 [INFO][5635] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232" host="localhost" May 8 00:39:18.710059 containerd[1549]: 2025-05-08 00:39:18.673 [INFO][5635] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:39:18.710059 containerd[1549]: 2025-05-08 00:39:18.676 [INFO][5635] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:39:18.710059 containerd[1549]: 2025-05-08 00:39:18.677 [INFO][5635] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:39:18.710059 containerd[1549]: 2025-05-08 00:39:18.678 [INFO][5635] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:39:18.710059 containerd[1549]: 2025-05-08 00:39:18.678 [INFO][5635] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232" host="localhost" May 8 00:39:18.710059 containerd[1549]: 2025-05-08 00:39:18.679 [INFO][5635] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232 May 8 00:39:18.710059 containerd[1549]: 2025-05-08 00:39:18.685 [INFO][5635] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232" host="localhost" May 8 00:39:18.710059 containerd[1549]: 2025-05-08 00:39:18.688 [INFO][5635] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232" host="localhost" May 8 00:39:18.710059 containerd[1549]: 2025-05-08 00:39:18.688 [INFO][5635] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232" host="localhost" May 8 00:39:18.710059 containerd[1549]: 2025-05-08 00:39:18.688 [INFO][5635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:39:18.710059 containerd[1549]: 2025-05-08 00:39:18.688 [INFO][5635] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232" HandleID="k8s-pod-network.3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232" Workload="localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0" May 8 00:39:18.710473 containerd[1549]: 2025-05-08 00:39:18.690 [INFO][5623] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232" Namespace="calico-apiserver" Pod="calico-apiserver-5d465466c4-h88jf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0", GenerateName:"calico-apiserver-5d465466c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"9353a2d8-021f-4963-930a-ab008f3fd909", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d465466c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d465466c4-h88jf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5bf0f6f32c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:18.710473 containerd[1549]: 2025-05-08 00:39:18.690 [INFO][5623] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232" Namespace="calico-apiserver" Pod="calico-apiserver-5d465466c4-h88jf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0" May 8 00:39:18.710473 containerd[1549]: 2025-05-08 00:39:18.690 [INFO][5623] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5bf0f6f32c ContainerID="3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232" Namespace="calico-apiserver" Pod="calico-apiserver-5d465466c4-h88jf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0" May 8 00:39:18.710473 containerd[1549]: 2025-05-08 00:39:18.692 [INFO][5623] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232" Namespace="calico-apiserver" Pod="calico-apiserver-5d465466c4-h88jf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0" May 8 00:39:18.710473 containerd[1549]: 2025-05-08 00:39:18.692 [INFO][5623] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232" 
Namespace="calico-apiserver" Pod="calico-apiserver-5d465466c4-h88jf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0", GenerateName:"calico-apiserver-5d465466c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"9353a2d8-021f-4963-930a-ab008f3fd909", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d465466c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232", Pod:"calico-apiserver-5d465466c4-h88jf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5bf0f6f32c", MAC:"26:63:68:3a:5a:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:18.710473 containerd[1549]: 2025-05-08 00:39:18.707 [INFO][5623] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232" Namespace="calico-apiserver" Pod="calico-apiserver-5d465466c4-h88jf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0" May 8 00:39:18.732795 containerd[1549]: time="2025-05-08T00:39:18.731966627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:18.732795 containerd[1549]: time="2025-05-08T00:39:18.732011064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:18.732795 containerd[1549]: time="2025-05-08T00:39:18.732019509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:18.732795 containerd[1549]: time="2025-05-08T00:39:18.732066781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:18.749795 systemd[1]: Started cri-containerd-3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232.scope - libcontainer container 3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232. 
May 8 00:39:18.759815 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:39:18.787209 containerd[1549]: time="2025-05-08T00:39:18.787063540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d465466c4-h88jf,Uid:9353a2d8-021f-4963-930a-ab008f3fd909,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232\"" May 8 00:39:18.789219 containerd[1549]: time="2025-05-08T00:39:18.789034524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:39:19.277247 containerd[1549]: time="2025-05-08T00:39:19.276417445Z" level=info msg="StopPodSandbox for \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\"" May 8 00:39:19.372728 containerd[1549]: 2025-05-08 00:39:19.335 [INFO][5707] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:39:19.372728 containerd[1549]: 2025-05-08 00:39:19.335 [INFO][5707] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" iface="eth0" netns="/var/run/netns/cni-add4c2be-1e19-4ce4-0555-c3fa939fdbb5" May 8 00:39:19.372728 containerd[1549]: 2025-05-08 00:39:19.335 [INFO][5707] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" iface="eth0" netns="/var/run/netns/cni-add4c2be-1e19-4ce4-0555-c3fa939fdbb5" May 8 00:39:19.372728 containerd[1549]: 2025-05-08 00:39:19.335 [INFO][5707] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" iface="eth0" netns="/var/run/netns/cni-add4c2be-1e19-4ce4-0555-c3fa939fdbb5" May 8 00:39:19.372728 containerd[1549]: 2025-05-08 00:39:19.335 [INFO][5707] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:39:19.372728 containerd[1549]: 2025-05-08 00:39:19.335 [INFO][5707] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:39:19.372728 containerd[1549]: 2025-05-08 00:39:19.349 [INFO][5714] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" HandleID="k8s-pod-network.a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:19.372728 containerd[1549]: 2025-05-08 00:39:19.349 [INFO][5714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:19.372728 containerd[1549]: 2025-05-08 00:39:19.350 [INFO][5714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:19.372728 containerd[1549]: 2025-05-08 00:39:19.357 [WARNING][5714] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" HandleID="k8s-pod-network.a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:19.372728 containerd[1549]: 2025-05-08 00:39:19.357 [INFO][5714] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" HandleID="k8s-pod-network.a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:19.372728 containerd[1549]: 2025-05-08 00:39:19.370 [INFO][5714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:19.372728 containerd[1549]: 2025-05-08 00:39:19.371 [INFO][5707] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:39:19.374227 containerd[1549]: time="2025-05-08T00:39:19.372860138Z" level=info msg="TearDown network for sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\" successfully" May 8 00:39:19.374227 containerd[1549]: time="2025-05-08T00:39:19.372881552Z" level=info msg="StopPodSandbox for \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\" returns successfully" May 8 00:39:19.374227 containerd[1549]: time="2025-05-08T00:39:19.373400384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577f55d98d-bjswh,Uid:2ee8fd8b-a4bc-43c5-bff4-0631d474067b,Namespace:calico-apiserver,Attempt:1,}" May 8 00:39:19.374397 systemd[1]: run-netns-cni\x2dadd4c2be\x2d1e19\x2d4ce4\x2d0555\x2dc3fa939fdbb5.mount: Deactivated successfully. May 8 00:39:19.522816 systemd-networkd[1452]: califc3aa0a806f: Link UP May 8 00:39:19.523533 systemd-networkd[1452]: califc3aa0a806f: Gained carrier May 8 00:39:19.544119 containerd[1549]: 2025-05-08 00:39:19.451 [INFO][5720] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0 calico-apiserver-577f55d98d- calico-apiserver 2ee8fd8b-a4bc-43c5-bff4-0631d474067b 1109 0 2025-05-08 00:38:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:577f55d98d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-577f55d98d-bjswh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califc3aa0a806f [] []}} ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Namespace="calico-apiserver" Pod="calico-apiserver-577f55d98d-bjswh" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--bjswh-" May 8 00:39:19.544119 containerd[1549]: 2025-05-08 00:39:19.451 [INFO][5720] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Namespace="calico-apiserver" Pod="calico-apiserver-577f55d98d-bjswh" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:19.544119 containerd[1549]: 2025-05-08 00:39:19.470 [INFO][5732] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" 
HandleID="k8s-pod-network.0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:19.544119 containerd[1549]: 2025-05-08 00:39:19.475 [INFO][5732] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" HandleID="k8s-pod-network.0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ba110), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-577f55d98d-bjswh", "timestamp":"2025-05-08 00:39:19.470279723 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:39:19.544119 containerd[1549]: 2025-05-08 00:39:19.475 [INFO][5732] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:19.544119 containerd[1549]: 2025-05-08 00:39:19.475 [INFO][5732] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:19.544119 containerd[1549]: 2025-05-08 00:39:19.475 [INFO][5732] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:39:19.544119 containerd[1549]: 2025-05-08 00:39:19.476 [INFO][5732] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" host="localhost" May 8 00:39:19.544119 containerd[1549]: 2025-05-08 00:39:19.485 [INFO][5732] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:39:19.544119 containerd[1549]: 2025-05-08 00:39:19.487 [INFO][5732] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:39:19.544119 containerd[1549]: 2025-05-08 00:39:19.488 [INFO][5732] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:39:19.544119 containerd[1549]: 2025-05-08 00:39:19.489 [INFO][5732] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:39:19.544119 containerd[1549]: 2025-05-08 00:39:19.489 [INFO][5732] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" host="localhost" May 8 00:39:19.544119 containerd[1549]: 2025-05-08 00:39:19.490 [INFO][5732] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97 May 8 00:39:19.544119 containerd[1549]: 2025-05-08 00:39:19.496 [INFO][5732] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" host="localhost" May 8 00:39:19.544119 containerd[1549]: 2025-05-08 00:39:19.519 [INFO][5732] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" host="localhost" May 8 00:39:19.544119 containerd[1549]: 2025-05-08 00:39:19.519 [INFO][5732] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" 
host="localhost" May 8 00:39:19.544119 containerd[1549]: 2025-05-08 00:39:19.519 [INFO][5732] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:19.544119 containerd[1549]: 2025-05-08 00:39:19.519 [INFO][5732] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" HandleID="k8s-pod-network.0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:19.544659 containerd[1549]: 2025-05-08 00:39:19.521 [INFO][5720] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Namespace="calico-apiserver" Pod="calico-apiserver-577f55d98d-bjswh" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0", GenerateName:"calico-apiserver-577f55d98d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ee8fd8b-a4bc-43c5-bff4-0631d474067b", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"577f55d98d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-577f55d98d-bjswh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc3aa0a806f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:19.544659 containerd[1549]: 2025-05-08 00:39:19.521 [INFO][5720] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Namespace="calico-apiserver" Pod="calico-apiserver-577f55d98d-bjswh" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:19.544659 containerd[1549]: 2025-05-08 00:39:19.521 [INFO][5720] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc3aa0a806f ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Namespace="calico-apiserver" Pod="calico-apiserver-577f55d98d-bjswh" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:19.544659 containerd[1549]: 2025-05-08 00:39:19.524 [INFO][5720] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Namespace="calico-apiserver" Pod="calico-apiserver-577f55d98d-bjswh" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:19.544659 containerd[1549]: 2025-05-08 00:39:19.525 [INFO][5720] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Namespace="calico-apiserver" Pod="calico-apiserver-577f55d98d-bjswh" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0", GenerateName:"calico-apiserver-577f55d98d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ee8fd8b-a4bc-43c5-bff4-0631d474067b", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"577f55d98d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97", Pod:"calico-apiserver-577f55d98d-bjswh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc3aa0a806f", MAC:"1a:b4:bb:b3:cd:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:19.544659 containerd[1549]: 2025-05-08 00:39:19.541 [INFO][5720] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Namespace="calico-apiserver" Pod="calico-apiserver-577f55d98d-bjswh" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:19.583270 containerd[1549]: time="2025-05-08T00:39:19.583196132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:19.583270 containerd[1549]: time="2025-05-08T00:39:19.583246135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:19.583270 containerd[1549]: time="2025-05-08T00:39:19.583257111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:19.583439 containerd[1549]: time="2025-05-08T00:39:19.583328458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:19.602061 systemd[1]: Started cri-containerd-0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97.scope - libcontainer container 0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97. 
May 8 00:39:19.611660 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:39:19.632637 containerd[1549]: time="2025-05-08T00:39:19.632485250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577f55d98d-bjswh,Uid:2ee8fd8b-a4bc-43c5-bff4-0631d474067b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97\"" May 8 00:39:20.320998 systemd-networkd[1452]: calie5bf0f6f32c: Gained IPv6LL May 8 00:39:21.409069 systemd-networkd[1452]: califc3aa0a806f: Gained IPv6LL May 8 00:39:22.054574 systemd[1]: Started sshd@10-139.178.70.103:22-139.178.68.195:54022.service - OpenSSH per-connection server daemon (139.178.68.195:54022). May 8 00:39:22.281413 sshd[5800]: Accepted publickey for core from 139.178.68.195 port 54022 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:39:22.281893 sshd[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:22.285880 containerd[1549]: time="2025-05-08T00:39:22.285502963Z" level=info msg="StopPodSandbox for \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\"" May 8 00:39:22.288088 systemd-logind[1523]: New session 12 of user core. May 8 00:39:22.292147 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:39:22.315255 containerd[1549]: time="2025-05-08T00:39:22.314986192Z" level=info msg="StopPodSandbox for \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\"" May 8 00:39:22.318776 containerd[1549]: time="2025-05-08T00:39:22.316358183Z" level=info msg="StopPodSandbox for \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\"" May 8 00:39:22.464632 containerd[1549]: 2025-05-08 00:39:22.393 [INFO][5820] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" May 8 00:39:22.464632 containerd[1549]: 2025-05-08 00:39:22.394 [INFO][5820] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" iface="eth0" netns="/var/run/netns/cni-9db657a5-8231-5fbf-e932-679eb2fda25c" May 8 00:39:22.464632 containerd[1549]: 2025-05-08 00:39:22.394 [INFO][5820] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" iface="eth0" netns="/var/run/netns/cni-9db657a5-8231-5fbf-e932-679eb2fda25c" May 8 00:39:22.464632 containerd[1549]: 2025-05-08 00:39:22.395 [INFO][5820] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" iface="eth0" netns="/var/run/netns/cni-9db657a5-8231-5fbf-e932-679eb2fda25c" May 8 00:39:22.464632 containerd[1549]: 2025-05-08 00:39:22.395 [INFO][5820] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" May 8 00:39:22.464632 containerd[1549]: 2025-05-08 00:39:22.395 [INFO][5820] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" May 8 00:39:22.464632 containerd[1549]: 2025-05-08 00:39:22.449 [INFO][5871] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" HandleID="k8s-pod-network.12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" Workload="localhost-k8s-csi--node--driver--gvqtb-eth0" May 8 00:39:22.464632 containerd[1549]: 2025-05-08 00:39:22.449 [INFO][5871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:22.464632 containerd[1549]: 2025-05-08 00:39:22.449 [INFO][5871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:22.464632 containerd[1549]: 2025-05-08 00:39:22.455 [WARNING][5871] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" HandleID="k8s-pod-network.12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" Workload="localhost-k8s-csi--node--driver--gvqtb-eth0" May 8 00:39:22.464632 containerd[1549]: 2025-05-08 00:39:22.456 [INFO][5871] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" HandleID="k8s-pod-network.12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" Workload="localhost-k8s-csi--node--driver--gvqtb-eth0" May 8 00:39:22.464632 containerd[1549]: 2025-05-08 00:39:22.457 [INFO][5871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:22.464632 containerd[1549]: 2025-05-08 00:39:22.462 [INFO][5820] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" May 8 00:39:22.467235 systemd[1]: run-netns-cni\x2d9db657a5\x2d8231\x2d5fbf\x2de932\x2d679eb2fda25c.mount: Deactivated successfully. May 8 00:39:22.468800 containerd[1549]: time="2025-05-08T00:39:22.468780936Z" level=info msg="TearDown network for sandbox \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\" successfully" May 8 00:39:22.468865 containerd[1549]: time="2025-05-08T00:39:22.468857299Z" level=info msg="StopPodSandbox for \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\" returns successfully" May 8 00:39:22.470198 containerd[1549]: time="2025-05-08T00:39:22.470185044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gvqtb,Uid:06105806-56a1-4100-9953-11ff7427bd13,Namespace:calico-system,Attempt:1,}" May 8 00:39:22.476358 containerd[1549]: 2025-05-08 00:39:22.396 [INFO][5854] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" May 8 00:39:22.476358 containerd[1549]: 2025-05-08 00:39:22.397 [INFO][5854] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" iface="eth0" netns="/var/run/netns/cni-fe6c232e-d7ab-b06a-c2b5-91e46a9ddcf3" May 8 00:39:22.476358 containerd[1549]: 2025-05-08 00:39:22.397 [INFO][5854] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" iface="eth0" netns="/var/run/netns/cni-fe6c232e-d7ab-b06a-c2b5-91e46a9ddcf3" May 8 00:39:22.476358 containerd[1549]: 2025-05-08 00:39:22.398 [INFO][5854] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" iface="eth0" netns="/var/run/netns/cni-fe6c232e-d7ab-b06a-c2b5-91e46a9ddcf3" May 8 00:39:22.476358 containerd[1549]: 2025-05-08 00:39:22.398 [INFO][5854] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" May 8 00:39:22.476358 containerd[1549]: 2025-05-08 00:39:22.398 [INFO][5854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" May 8 00:39:22.476358 containerd[1549]: 2025-05-08 00:39:22.461 [INFO][5876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" HandleID="k8s-pod-network.806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" Workload="localhost-k8s-calico--kube--controllers--79589f44bf--xp2wm-eth0" May 8 00:39:22.476358 containerd[1549]: 2025-05-08 00:39:22.462 [INFO][5876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:22.476358 containerd[1549]: 2025-05-08 00:39:22.462 [INFO][5876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:22.476358 containerd[1549]: 2025-05-08 00:39:22.469 [WARNING][5876] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" HandleID="k8s-pod-network.806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" Workload="localhost-k8s-calico--kube--controllers--79589f44bf--xp2wm-eth0" May 8 00:39:22.476358 containerd[1549]: 2025-05-08 00:39:22.469 [INFO][5876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" HandleID="k8s-pod-network.806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" Workload="localhost-k8s-calico--kube--controllers--79589f44bf--xp2wm-eth0" May 8 00:39:22.476358 containerd[1549]: 2025-05-08 00:39:22.471 [INFO][5876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:22.476358 containerd[1549]: 2025-05-08 00:39:22.474 [INFO][5854] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" May 8 00:39:22.479622 systemd[1]: run-netns-cni\x2dfe6c232e\x2dd7ab\x2db06a\x2dc2b5\x2d91e46a9ddcf3.mount: Deactivated successfully. 
May 8 00:39:22.480888 containerd[1549]: time="2025-05-08T00:39:22.480868489Z" level=info msg="TearDown network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\" successfully" May 8 00:39:22.481019 containerd[1549]: time="2025-05-08T00:39:22.480957532Z" level=info msg="StopPodSandbox for \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\" returns successfully" May 8 00:39:22.488128 containerd[1549]: 2025-05-08 00:39:22.413 [INFO][5847] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:39:22.488128 containerd[1549]: 2025-05-08 00:39:22.413 [INFO][5847] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" iface="eth0" netns="/var/run/netns/cni-f4786fa5-4c01-d095-c086-4cc9b3dcd124" May 8 00:39:22.488128 containerd[1549]: 2025-05-08 00:39:22.413 [INFO][5847] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" iface="eth0" netns="/var/run/netns/cni-f4786fa5-4c01-d095-c086-4cc9b3dcd124" May 8 00:39:22.488128 containerd[1549]: 2025-05-08 00:39:22.414 [INFO][5847] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" iface="eth0" netns="/var/run/netns/cni-f4786fa5-4c01-d095-c086-4cc9b3dcd124" May 8 00:39:22.488128 containerd[1549]: 2025-05-08 00:39:22.414 [INFO][5847] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:39:22.488128 containerd[1549]: 2025-05-08 00:39:22.414 [INFO][5847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:39:22.488128 containerd[1549]: 2025-05-08 00:39:22.475 [INFO][5881] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" HandleID="k8s-pod-network.02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:22.488128 containerd[1549]: 2025-05-08 00:39:22.476 [INFO][5881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:22.488128 containerd[1549]: 2025-05-08 00:39:22.476 [INFO][5881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:22.488128 containerd[1549]: 2025-05-08 00:39:22.482 [WARNING][5881] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" HandleID="k8s-pod-network.02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:22.488128 containerd[1549]: 2025-05-08 00:39:22.482 [INFO][5881] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" HandleID="k8s-pod-network.02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:22.488128 containerd[1549]: 2025-05-08 00:39:22.483 [INFO][5881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:39:22.488128 containerd[1549]: 2025-05-08 00:39:22.485 [INFO][5847] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:39:22.489336 containerd[1549]: time="2025-05-08T00:39:22.488838582Z" level=info msg="TearDown network for sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\" successfully" May 8 00:39:22.489336 containerd[1549]: time="2025-05-08T00:39:22.488853918Z" level=info msg="StopPodSandbox for \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\" returns successfully" May 8 00:39:22.490832 systemd[1]: run-netns-cni\x2df4786fa5\x2d4c01\x2dd095\x2dc086\x2d4cc9b3dcd124.mount: Deactivated successfully. May 8 00:39:22.492883 containerd[1549]: time="2025-05-08T00:39:22.492327018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577f55d98d-45j6l,Uid:0ec7b516-7af4-4ea9-8c59-9667bc29c59e,Namespace:calico-apiserver,Attempt:1,}" May 8 00:39:22.605350 kubelet[2711]: I0508 00:39:22.604147 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5260c901-675e-42fb-a9e6-84eb23b95893-tigera-ca-bundle\") pod \"5260c901-675e-42fb-a9e6-84eb23b95893\" (UID: \"5260c901-675e-42fb-a9e6-84eb23b95893\") " May 8 00:39:22.605350 kubelet[2711]: I0508 00:39:22.604205 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfgkn\" (UniqueName: \"kubernetes.io/projected/5260c901-675e-42fb-a9e6-84eb23b95893-kube-api-access-bfgkn\") pod \"5260c901-675e-42fb-a9e6-84eb23b95893\" (UID: \"5260c901-675e-42fb-a9e6-84eb23b95893\") " May 8 00:39:22.615575 kubelet[2711]: I0508 00:39:22.615344 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5260c901-675e-42fb-a9e6-84eb23b95893-kube-api-access-bfgkn" (OuterVolumeSpecName: "kube-api-access-bfgkn") pod "5260c901-675e-42fb-a9e6-84eb23b95893" (UID: "5260c901-675e-42fb-a9e6-84eb23b95893"). InnerVolumeSpecName "kube-api-access-bfgkn". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:39:22.621609 kubelet[2711]: I0508 00:39:22.620976 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5260c901-675e-42fb-a9e6-84eb23b95893-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "5260c901-675e-42fb-a9e6-84eb23b95893" (UID: "5260c901-675e-42fb-a9e6-84eb23b95893"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:39:22.711043 kubelet[2711]: I0508 00:39:22.710861 2711 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5260c901-675e-42fb-a9e6-84eb23b95893-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 8 00:39:22.711043 kubelet[2711]: I0508 00:39:22.710886 2711 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bfgkn\" (UniqueName: \"kubernetes.io/projected/5260c901-675e-42fb-a9e6-84eb23b95893-kube-api-access-bfgkn\") on node \"localhost\" DevicePath \"\"" May 8 00:39:22.756149 systemd-networkd[1452]: cali5d79671ee61: Link UP May 8 00:39:22.756719 systemd-networkd[1452]: cali5d79671ee61: Gained carrier May 8 00:39:22.789795 containerd[1549]: 2025-05-08 00:39:22.545 [INFO][5895] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gvqtb-eth0 csi-node-driver- calico-system 06105806-56a1-4100-9953-11ff7427bd13 1135 0 2025-05-08 00:38:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gvqtb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5d79671ee61 [] []}} ContainerID="8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c" Namespace="calico-system" Pod="csi-node-driver-gvqtb" WorkloadEndpoint="localhost-k8s-csi--node--driver--gvqtb-" May 8 00:39:22.789795 containerd[1549]: 2025-05-08 00:39:22.545 [INFO][5895] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c" Namespace="calico-system" Pod="csi-node-driver-gvqtb" WorkloadEndpoint="localhost-k8s-csi--node--driver--gvqtb-eth0" May 8 00:39:22.789795 containerd[1549]: 2025-05-08 00:39:22.625 [INFO][5918] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c" HandleID="k8s-pod-network.8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c" Workload="localhost-k8s-csi--node--driver--gvqtb-eth0" May 8 00:39:22.789795 containerd[1549]: 2025-05-08 00:39:22.641 [INFO][5918] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c" HandleID="k8s-pod-network.8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c" Workload="localhost-k8s-csi--node--driver--gvqtb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051da0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gvqtb", "timestamp":"2025-05-08 00:39:22.625961983 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:39:22.789795 containerd[1549]: 2025-05-08 00:39:22.641 [INFO][5918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:22.789795 containerd[1549]: 2025-05-08 00:39:22.641 [INFO][5918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:39:22.789795 containerd[1549]: 2025-05-08 00:39:22.641 [INFO][5918] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:39:22.789795 containerd[1549]: 2025-05-08 00:39:22.648 [INFO][5918] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c" host="localhost" May 8 00:39:22.789795 containerd[1549]: 2025-05-08 00:39:22.653 [INFO][5918] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:39:22.789795 containerd[1549]: 2025-05-08 00:39:22.659 [INFO][5918] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:39:22.789795 containerd[1549]: 2025-05-08 00:39:22.660 [INFO][5918] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:39:22.789795 containerd[1549]: 2025-05-08 00:39:22.664 [INFO][5918] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:39:22.789795 containerd[1549]: 2025-05-08 00:39:22.664 [INFO][5918] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c" host="localhost" May 8 00:39:22.789795 containerd[1549]: 2025-05-08 00:39:22.666 [INFO][5918] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c May 8 00:39:22.789795 containerd[1549]: 2025-05-08 00:39:22.676 [INFO][5918] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c" host="localhost" May 8 00:39:22.789795 containerd[1549]: 2025-05-08 00:39:22.713 [INFO][5918] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c" host="localhost" May 8 00:39:22.789795 containerd[1549]: 2025-05-08 00:39:22.713 [INFO][5918] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c" host="localhost" May 8 00:39:22.789795 containerd[1549]: 2025-05-08 00:39:22.715 [INFO][5918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
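
The ipam.go trace above shows how 192.168.88.133 gets picked for csi-node-driver-gvqtb: under the host-wide lock, look up the host's block affinities, confirm affinity for 192.168.88.128/26, take the next free address inside that /26, and write the block back to claim it against the handle. A toy model of next-free-address selection within an affine block; the map-based bookkeeping is an assumption, not Calico's actual block format:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block is a toy stand-in for an affine IPAM block such as 192.168.88.128/26.
    type block struct {
        cidr      netip.Prefix
        allocated map[netip.Addr]string // addr -> handle ID
    }

    // assign claims the lowest free address in the block for handleID,
    // mirroring "Attempting to assign 1 addresses from block …".
    func (b *block) assign(handleID string) (netip.Addr, bool) {
        for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
            if _, taken := b.allocated[a]; !taken {
                b.allocated[a] = handleID // "Writing block in order to claim IPs"
                return a, true
            }
        }
        return netip.Addr{}, false // block exhausted; IPAM would try another block
    }

    func main() {
        b := &block{cidr: netip.MustParsePrefix("192.168.88.128/26"), allocated: map[netip.Addr]string{}}
        for i := 0; i < 5; i++ { // .128-.132 already claimed by earlier pods
            b.assign(fmt.Sprintf("existing-%d", i))
        }
        next, _ := b.assign("k8s-pod-network.8ae7f98b…")
        fmt.Println(next) // 192.168.88.133, matching the log
    }
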
May 8 00:39:22.789795 containerd[1549]: 2025-05-08 00:39:22.715 [INFO][5918] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c" HandleID="k8s-pod-network.8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c" Workload="localhost-k8s-csi--node--driver--gvqtb-eth0" May 8 00:39:22.803555 containerd[1549]: 2025-05-08 00:39:22.750 [INFO][5895] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c" Namespace="calico-system" Pod="csi-node-driver-gvqtb" WorkloadEndpoint="localhost-k8s-csi--node--driver--gvqtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gvqtb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06105806-56a1-4100-9953-11ff7427bd13", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gvqtb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5d79671ee61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:22.803555 containerd[1549]: 2025-05-08 00:39:22.751 [INFO][5895] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c" Namespace="calico-system" Pod="csi-node-driver-gvqtb" WorkloadEndpoint="localhost-k8s-csi--node--driver--gvqtb-eth0" May 8 00:39:22.803555 containerd[1549]: 2025-05-08 00:39:22.751 [INFO][5895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d79671ee61 ContainerID="8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c" Namespace="calico-system" Pod="csi-node-driver-gvqtb" WorkloadEndpoint="localhost-k8s-csi--node--driver--gvqtb-eth0" May 8 00:39:22.803555 containerd[1549]: 2025-05-08 00:39:22.754 [INFO][5895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c" Namespace="calico-system" Pod="csi-node-driver-gvqtb" WorkloadEndpoint="localhost-k8s-csi--node--driver--gvqtb-eth0" May 8 00:39:22.803555 containerd[1549]: 2025-05-08 00:39:22.759 [INFO][5895] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c" Namespace="calico-system" Pod="csi-node-driver-gvqtb" WorkloadEndpoint="localhost-k8s-csi--node--driver--gvqtb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gvqtb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06105806-56a1-4100-9953-11ff7427bd13", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c", Pod:"csi-node-driver-gvqtb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5d79671ee61", MAC:"3a:6d:52:db:ac:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:22.803555 containerd[1549]: 2025-05-08 00:39:22.784 [INFO][5895] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c" Namespace="calico-system" Pod="csi-node-driver-gvqtb" WorkloadEndpoint="localhost-k8s-csi--node--driver--gvqtb-eth0" May 8 00:39:22.800317 systemd-networkd[1452]: cali2417d36f95d: Link UP May 8 00:39:22.800745 systemd-networkd[1452]: cali2417d36f95d: Gained carrier May 8 00:39:22.820650 containerd[1549]: 2025-05-08 00:39:22.622 [INFO][5907] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0 calico-apiserver-577f55d98d- calico-apiserver 0ec7b516-7af4-4ea9-8c59-9667bc29c59e 1137 0 2025-05-08 00:38:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:577f55d98d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-577f55d98d-45j6l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2417d36f95d [] []}} ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Namespace="calico-apiserver" Pod="calico-apiserver-577f55d98d-45j6l" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--45j6l-" May 8 00:39:22.820650 containerd[1549]: 2025-05-08 00:39:22.622 [INFO][5907] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Namespace="calico-apiserver" Pod="calico-apiserver-577f55d98d-45j6l" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:22.820650 containerd[1549]: 2025-05-08 00:39:22.675 [INFO][5929] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" HandleID="k8s-pod-network.13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:22.820650 containerd[1549]: 2025-05-08 00:39:22.739 [INFO][5929] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" HandleID="k8s-pod-network.13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002902c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-577f55d98d-45j6l", "timestamp":"2025-05-08 00:39:22.675049662 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:39:22.820650 containerd[1549]: 2025-05-08 00:39:22.740 [INFO][5929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:22.820650 containerd[1549]: 2025-05-08 00:39:22.740 [INFO][5929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:22.820650 containerd[1549]: 2025-05-08 00:39:22.740 [INFO][5929] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:39:22.820650 containerd[1549]: 2025-05-08 00:39:22.751 [INFO][5929] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" host="localhost" May 8 00:39:22.820650 containerd[1549]: 2025-05-08 00:39:22.760 [INFO][5929] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:39:22.820650 containerd[1549]: 2025-05-08 00:39:22.763 [INFO][5929] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:39:22.820650 containerd[1549]: 2025-05-08 00:39:22.765 [INFO][5929] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:39:22.820650 containerd[1549]: 2025-05-08 00:39:22.766 [INFO][5929] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:39:22.820650 containerd[1549]: 2025-05-08 00:39:22.766 [INFO][5929] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" host="localhost" May 8 00:39:22.820650 containerd[1549]: 2025-05-08 00:39:22.768 [INFO][5929] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec May 8 00:39:22.820650 containerd[1549]: 2025-05-08 00:39:22.773 [INFO][5929] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" host="localhost" May 8 00:39:22.820650 containerd[1549]: 2025-05-08 00:39:22.791 [INFO][5929] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" host="localhost" May 8 00:39:22.820650 containerd[1549]: 2025-05-08 00:39:22.792 [INFO][5929] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] 
handle="k8s-pod-network.13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" host="localhost" May 8 00:39:22.820650 containerd[1549]: 2025-05-08 00:39:22.792 [INFO][5929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:22.820650 containerd[1549]: 2025-05-08 00:39:22.793 [INFO][5929] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" HandleID="k8s-pod-network.13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:22.846426 containerd[1549]: 2025-05-08 00:39:22.797 [INFO][5907] cni-plugin/k8s.go 386: Populated endpoint ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Namespace="calico-apiserver" Pod="calico-apiserver-577f55d98d-45j6l" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0", GenerateName:"calico-apiserver-577f55d98d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ec7b516-7af4-4ea9-8c59-9667bc29c59e", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"577f55d98d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-577f55d98d-45j6l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2417d36f95d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:22.846426 containerd[1549]: 2025-05-08 00:39:22.797 [INFO][5907] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Namespace="calico-apiserver" Pod="calico-apiserver-577f55d98d-45j6l" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:22.846426 containerd[1549]: 2025-05-08 00:39:22.797 [INFO][5907] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2417d36f95d ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Namespace="calico-apiserver" Pod="calico-apiserver-577f55d98d-45j6l" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:22.846426 containerd[1549]: 2025-05-08 00:39:22.802 [INFO][5907] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Namespace="calico-apiserver" Pod="calico-apiserver-577f55d98d-45j6l" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" 
May 8 00:39:22.846426 containerd[1549]: 2025-05-08 00:39:22.803 [INFO][5907] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Namespace="calico-apiserver" Pod="calico-apiserver-577f55d98d-45j6l" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0", GenerateName:"calico-apiserver-577f55d98d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ec7b516-7af4-4ea9-8c59-9667bc29c59e", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"577f55d98d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec", Pod:"calico-apiserver-577f55d98d-45j6l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2417d36f95d", MAC:"da:2f:96:a1:a4:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:22.846426 containerd[1549]: 2025-05-08 00:39:22.818 [INFO][5907] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Namespace="calico-apiserver" Pod="calico-apiserver-577f55d98d-45j6l" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:22.866305 containerd[1549]: time="2025-05-08T00:39:22.866163216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:22.866305 containerd[1549]: time="2025-05-08T00:39:22.866209841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:22.866305 containerd[1549]: time="2025-05-08T00:39:22.866220081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:22.867378 containerd[1549]: time="2025-05-08T00:39:22.866273054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:22.868937 containerd[1549]: time="2025-05-08T00:39:22.868818900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:22.868937 containerd[1549]: time="2025-05-08T00:39:22.868862118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:22.868937 containerd[1549]: time="2025-05-08T00:39:22.868873295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:22.869034 containerd[1549]: time="2025-05-08T00:39:22.868940771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:22.934528 systemd[1]: Started cri-containerd-8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c.scope - libcontainer container 8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c. May 8 00:39:22.940280 systemd[1]: Started cri-containerd-13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec.scope - libcontainer container 13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec. May 8 00:39:22.958845 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:39:22.960822 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:39:22.976662 containerd[1549]: time="2025-05-08T00:39:22.976631066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gvqtb,Uid:06105806-56a1-4100-9953-11ff7427bd13,Namespace:calico-system,Attempt:1,} returns sandbox id \"8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c\"" May 8 00:39:23.006753 containerd[1549]: time="2025-05-08T00:39:23.006727773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577f55d98d-45j6l,Uid:0ec7b516-7af4-4ea9-8c59-9667bc29c59e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec\"" May 8 00:39:23.015939 sshd[5800]: pam_unix(sshd:session): session closed for user core May 8 00:39:23.023687 systemd[1]: sshd@10-139.178.70.103:22-139.178.68.195:54022.service: Deactivated successfully. May 8 00:39:23.025409 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:39:23.026112 systemd-logind[1523]: Session 12 logged out. Waiting for processes to exit. May 8 00:39:23.033285 systemd[1]: Started sshd@11-139.178.70.103:22-139.178.68.195:54034.service - OpenSSH per-connection server daemon (139.178.68.195:54034). May 8 00:39:23.037135 systemd-logind[1523]: Removed session 12. May 8 00:39:23.065277 systemd[1]: Removed slice kubepods-besteffort-pod5260c901_675e_42fb_a9e6_84eb23b95893.slice - libcontainer container kubepods-besteffort-pod5260c901_675e_42fb_a9e6_84eb23b95893.slice. May 8 00:39:23.068036 sshd[6048]: Accepted publickey for core from 139.178.68.195 port 54034 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:39:23.069251 sshd[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:23.075429 systemd-logind[1523]: New session 13 of user core. May 8 00:39:23.080048 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:39:23.302335 sshd[6048]: pam_unix(sshd:session): session closed for user core May 8 00:39:23.311046 systemd[1]: sshd@11-139.178.70.103:22-139.178.68.195:54034.service: Deactivated successfully. May 8 00:39:23.314270 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:39:23.314815 systemd-logind[1523]: Session 13 logged out. Waiting for processes to exit. 
May 8 00:39:23.321208 systemd[1]: Started sshd@12-139.178.70.103:22-139.178.68.195:54040.service - OpenSSH per-connection server daemon (139.178.68.195:54040). May 8 00:39:23.325731 systemd-logind[1523]: Removed session 13. May 8 00:39:23.438882 sshd[6058]: Accepted publickey for core from 139.178.68.195 port 54040 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:39:23.439807 sshd[6058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:23.442295 systemd-logind[1523]: New session 14 of user core. May 8 00:39:23.450020 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:39:23.468334 systemd[1]: var-lib-kubelet-pods-5260c901\x2d675e\x2d42fb\x2da9e6\x2d84eb23b95893-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbfgkn.mount: Deactivated successfully. May 8 00:39:23.547394 containerd[1549]: time="2025-05-08T00:39:23.524519944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 8 00:39:23.600142 containerd[1549]: time="2025-05-08T00:39:23.599698368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:23.601930 containerd[1549]: time="2025-05-08T00:39:23.601703258Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:23.609083 containerd[1549]: time="2025-05-08T00:39:23.609040562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:23.610985 containerd[1549]: time="2025-05-08T00:39:23.610958608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 4.820625126s" May 8 00:39:23.611052 containerd[1549]: time="2025-05-08T00:39:23.610987117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:39:23.615322 containerd[1549]: time="2025-05-08T00:39:23.615281514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:39:23.641596 sshd[6058]: pam_unix(sshd:session): session closed for user core May 8 00:39:23.643708 systemd[1]: sshd@12-139.178.70.103:22-139.178.68.195:54040.service: Deactivated successfully. May 8 00:39:23.644720 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:39:23.645287 systemd-logind[1523]: Session 14 logged out. Waiting for processes to exit. May 8 00:39:23.645768 systemd-logind[1523]: Removed session 14. May 8 00:39:23.661771 containerd[1549]: time="2025-05-08T00:39:23.661732922Z" level=info msg="CreateContainer within sandbox \"3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:39:23.674378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2988901453.mount: Deactivated successfully. 
May 8 00:39:23.675244 containerd[1549]: time="2025-05-08T00:39:23.675145108Z" level=info msg="CreateContainer within sandbox \"3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6f11189852d538fbdc37013b59b3801383f4cc173d7f6f311dd50fb9e0efcd4c\"" May 8 00:39:23.675952 containerd[1549]: time="2025-05-08T00:39:23.675651855Z" level=info msg="StartContainer for \"6f11189852d538fbdc37013b59b3801383f4cc173d7f6f311dd50fb9e0efcd4c\"" May 8 00:39:23.704213 systemd[1]: Started cri-containerd-6f11189852d538fbdc37013b59b3801383f4cc173d7f6f311dd50fb9e0efcd4c.scope - libcontainer container 6f11189852d538fbdc37013b59b3801383f4cc173d7f6f311dd50fb9e0efcd4c. May 8 00:39:23.736211 containerd[1549]: time="2025-05-08T00:39:23.736145582Z" level=info msg="StartContainer for \"6f11189852d538fbdc37013b59b3801383f4cc173d7f6f311dd50fb9e0efcd4c\" returns successfully" May 8 00:39:24.012374 containerd[1549]: time="2025-05-08T00:39:24.012344075Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:24.012796 containerd[1549]: time="2025-05-08T00:39:24.012775050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 8 00:39:24.014043 containerd[1549]: time="2025-05-08T00:39:24.014026367Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 398.721455ms" May 8 00:39:24.014078 containerd[1549]: time="2025-05-08T00:39:24.014045044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:39:24.014762 containerd[1549]: time="2025-05-08T00:39:24.014660915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 00:39:24.016960 containerd[1549]: time="2025-05-08T00:39:24.016940371Z" level=info msg="CreateContainer within sandbox \"0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:39:24.032140 containerd[1549]: time="2025-05-08T00:39:24.032109766Z" level=info msg="CreateContainer within sandbox \"0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c\"" May 8 00:39:24.032595 containerd[1549]: time="2025-05-08T00:39:24.032544227Z" level=info msg="StartContainer for \"894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c\"" May 8 00:39:24.058094 systemd[1]: Started cri-containerd-894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c.scope - libcontainer container 894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c. 
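
A detail worth reading out of the pull timings here: the first pull of ghcr.io/flatcar/calico/apiserver:v3.29.3 took 4.82s with ~43 MB read, while the second pull of the same tag completed in ~399 ms with only 77 bytes read — the layers were already in the content store, so only the manifest had to be re-resolved. The same resolve-and-unpack pull can be reproduced with the public containerd Go client; a plausible sketch, not code recovered from this host:

    package main

    import (
        "context"
        "fmt"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        start := time.Now()
        // Layers already present are skipped, so a re-pull fetches little more
        // than the manifest — the "bytes read=77" case in the log.
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.29.3",
            containerd.WithPullUnpack)
        if err != nil {
            panic(err)
        }
        fmt.Printf("pulled %s in %s\n", img.Name(), time.Since(start))
    }
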
May 8 00:39:24.111533 containerd[1549]: time="2025-05-08T00:39:24.111465043Z" level=info msg="StartContainer for \"894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c\" returns successfully" May 8 00:39:24.278663 kubelet[2711]: I0508 00:39:24.278493 2711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5260c901-675e-42fb-a9e6-84eb23b95893" path="/var/lib/kubelet/pods/5260c901-675e-42fb-a9e6-84eb23b95893/volumes" May 8 00:39:24.353139 systemd-networkd[1452]: cali2417d36f95d: Gained IPv6LL May 8 00:39:24.417081 systemd-networkd[1452]: cali5d79671ee61: Gained IPv6LL May 8 00:39:24.706053 kubelet[2711]: I0508 00:39:24.706012 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d465466c4-h88jf" podStartSLOduration=73.879007676 podStartE2EDuration="1m18.705996812s" podCreationTimestamp="2025-05-08 00:38:06 +0000 UTC" firstStartedPulling="2025-05-08 00:39:18.788193972 +0000 UTC m=+86.751102060" lastFinishedPulling="2025-05-08 00:39:23.615183109 +0000 UTC m=+91.578091196" observedRunningTime="2025-05-08 00:39:24.12442861 +0000 UTC m=+92.087336707" watchObservedRunningTime="2025-05-08 00:39:24.705996812 +0000 UTC m=+92.668904909" May 8 00:39:25.213156 kubelet[2711]: I0508 00:39:25.213090 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-577f55d98d-bjswh" podStartSLOduration=75.831983616 podStartE2EDuration="1m20.21306191s" podCreationTimestamp="2025-05-08 00:38:05 +0000 UTC" firstStartedPulling="2025-05-08 00:39:19.633442423 +0000 UTC m=+87.596350512" lastFinishedPulling="2025-05-08 00:39:24.014520718 +0000 UTC m=+91.977428806" observedRunningTime="2025-05-08 00:39:25.192352461 +0000 UTC m=+93.155260561" watchObservedRunningTime="2025-05-08 00:39:25.21306191 +0000 UTC m=+93.175970002" May 8 00:39:25.741944 containerd[1549]: time="2025-05-08T00:39:25.741645923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:25.749758 containerd[1549]: time="2025-05-08T00:39:25.749500526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 8 00:39:25.758552 containerd[1549]: time="2025-05-08T00:39:25.758509523Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:25.761611 containerd[1549]: time="2025-05-08T00:39:25.761580031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:25.761955 containerd[1549]: time="2025-05-08T00:39:25.761935584Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.747258523s" May 8 00:39:25.762431 containerd[1549]: time="2025-05-08T00:39:25.761957168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 8 00:39:25.762905 containerd[1549]: 
time="2025-05-08T00:39:25.762882102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:39:25.777752 containerd[1549]: time="2025-05-08T00:39:25.777729394Z" level=info msg="CreateContainer within sandbox \"8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 8 00:39:26.209023 containerd[1549]: time="2025-05-08T00:39:26.208988164Z" level=info msg="CreateContainer within sandbox \"8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2860a2f4da1d6673569cd6d60dee8a4fad86cb377508a457cf28565a85e4d50a\"" May 8 00:39:26.209950 containerd[1549]: time="2025-05-08T00:39:26.209616597Z" level=info msg="StartContainer for \"2860a2f4da1d6673569cd6d60dee8a4fad86cb377508a457cf28565a85e4d50a\"" May 8 00:39:26.241138 systemd[1]: Started cri-containerd-2860a2f4da1d6673569cd6d60dee8a4fad86cb377508a457cf28565a85e4d50a.scope - libcontainer container 2860a2f4da1d6673569cd6d60dee8a4fad86cb377508a457cf28565a85e4d50a. May 8 00:39:26.270935 containerd[1549]: time="2025-05-08T00:39:26.270495821Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:26.286715 containerd[1549]: time="2025-05-08T00:39:26.286694108Z" level=info msg="StartContainer for \"2860a2f4da1d6673569cd6d60dee8a4fad86cb377508a457cf28565a85e4d50a\" returns successfully" May 8 00:39:26.295508 containerd[1549]: time="2025-05-08T00:39:26.295426283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 8 00:39:26.302308 containerd[1549]: time="2025-05-08T00:39:26.302228265Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 539.119402ms" May 8 00:39:26.302308 containerd[1549]: time="2025-05-08T00:39:26.302253662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:39:26.318112 containerd[1549]: time="2025-05-08T00:39:26.318085567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 8 00:39:26.321132 containerd[1549]: time="2025-05-08T00:39:26.321108202Z" level=info msg="CreateContainer within sandbox \"13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:39:26.426578 containerd[1549]: time="2025-05-08T00:39:26.426473711Z" level=info msg="CreateContainer within sandbox \"13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9bce52f4ccbfdb5eaef9a4cd8fe749af117d66480e7cf540524e009a126ed982\"" May 8 00:39:26.426551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1223718593.mount: Deactivated successfully. 
May 8 00:39:26.428408 containerd[1549]: time="2025-05-08T00:39:26.428386003Z" level=info msg="StartContainer for \"9bce52f4ccbfdb5eaef9a4cd8fe749af117d66480e7cf540524e009a126ed982\"" May 8 00:39:26.446082 systemd[1]: Started cri-containerd-9bce52f4ccbfdb5eaef9a4cd8fe749af117d66480e7cf540524e009a126ed982.scope - libcontainer container 9bce52f4ccbfdb5eaef9a4cd8fe749af117d66480e7cf540524e009a126ed982. May 8 00:39:26.477815 containerd[1549]: time="2025-05-08T00:39:26.477746481Z" level=info msg="StartContainer for \"9bce52f4ccbfdb5eaef9a4cd8fe749af117d66480e7cf540524e009a126ed982\" returns successfully" May 8 00:39:27.156743 kubelet[2711]: I0508 00:39:27.155188 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-577f55d98d-45j6l" podStartSLOduration=78.87016196 podStartE2EDuration="1m22.155168674s" podCreationTimestamp="2025-05-08 00:38:05 +0000 UTC" firstStartedPulling="2025-05-08 00:39:23.017643218 +0000 UTC m=+90.980551306" lastFinishedPulling="2025-05-08 00:39:26.302649931 +0000 UTC m=+94.265558020" observedRunningTime="2025-05-08 00:39:27.141104206 +0000 UTC m=+95.104012298" watchObservedRunningTime="2025-05-08 00:39:27.155168674 +0000 UTC m=+95.118076770" May 8 00:39:27.791788 containerd[1549]: time="2025-05-08T00:39:27.791629551Z" level=info msg="StopContainer for \"9bce52f4ccbfdb5eaef9a4cd8fe749af117d66480e7cf540524e009a126ed982\" with timeout 30 (s)" May 8 00:39:27.793787 containerd[1549]: time="2025-05-08T00:39:27.793763850Z" level=info msg="Stop container \"9bce52f4ccbfdb5eaef9a4cd8fe749af117d66480e7cf540524e009a126ed982\" with signal terminated" May 8 00:39:27.813253 systemd[1]: cri-containerd-9bce52f4ccbfdb5eaef9a4cd8fe749af117d66480e7cf540524e009a126ed982.scope: Deactivated successfully. May 8 00:39:27.858071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bce52f4ccbfdb5eaef9a4cd8fe749af117d66480e7cf540524e009a126ed982-rootfs.mount: Deactivated successfully. 
May 8 00:39:27.879879 containerd[1549]: time="2025-05-08T00:39:27.858988338Z" level=info msg="shim disconnected" id=9bce52f4ccbfdb5eaef9a4cd8fe749af117d66480e7cf540524e009a126ed982 namespace=k8s.io May 8 00:39:27.880372 containerd[1549]: time="2025-05-08T00:39:27.879859849Z" level=warning msg="cleaning up after shim disconnected" id=9bce52f4ccbfdb5eaef9a4cd8fe749af117d66480e7cf540524e009a126ed982 namespace=k8s.io May 8 00:39:27.880372 containerd[1549]: time="2025-05-08T00:39:27.880248205Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:39:27.900378 containerd[1549]: time="2025-05-08T00:39:27.900316665Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:39:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:39:27.902428 containerd[1549]: time="2025-05-08T00:39:27.901950647Z" level=info msg="StopContainer for \"9bce52f4ccbfdb5eaef9a4cd8fe749af117d66480e7cf540524e009a126ed982\" returns successfully" May 8 00:39:27.902428 containerd[1549]: time="2025-05-08T00:39:27.902339992Z" level=info msg="StopPodSandbox for \"13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec\"" May 8 00:39:27.902428 containerd[1549]: time="2025-05-08T00:39:27.902361827Z" level=info msg="Container to stop \"9bce52f4ccbfdb5eaef9a4cd8fe749af117d66480e7cf540524e009a126ed982\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:39:27.910406 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec-shm.mount: Deactivated successfully. May 8 00:39:27.915380 systemd[1]: cri-containerd-13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec.scope: Deactivated successfully. May 8 00:39:27.964137 containerd[1549]: time="2025-05-08T00:39:27.963992042Z" level=info msg="shim disconnected" id=13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec namespace=k8s.io May 8 00:39:27.964137 containerd[1549]: time="2025-05-08T00:39:27.964027831Z" level=warning msg="cleaning up after shim disconnected" id=13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec namespace=k8s.io May 8 00:39:27.964137 containerd[1549]: time="2025-05-08T00:39:27.964034996Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:39:27.965323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec-rootfs.mount: Deactivated successfully. May 8 00:39:28.159547 systemd-networkd[1452]: cali2417d36f95d: Link DOWN May 8 00:39:28.159832 systemd-networkd[1452]: cali2417d36f95d: Lost carrier May 8 00:39:28.264536 containerd[1549]: 2025-05-08 00:39:28.152 [INFO][6327] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" May 8 00:39:28.264536 containerd[1549]: 2025-05-08 00:39:28.153 [INFO][6327] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" iface="eth0" netns="/var/run/netns/cni-2708a0fe-f6da-3a08-843c-c281298425fe" May 8 00:39:28.264536 containerd[1549]: 2025-05-08 00:39:28.157 [INFO][6327] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" iface="eth0" netns="/var/run/netns/cni-2708a0fe-f6da-3a08-843c-c281298425fe" May 8 00:39:28.264536 containerd[1549]: 2025-05-08 00:39:28.170 [INFO][6327] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" after=16.068397ms iface="eth0" netns="/var/run/netns/cni-2708a0fe-f6da-3a08-843c-c281298425fe" May 8 00:39:28.264536 containerd[1549]: 2025-05-08 00:39:28.170 [INFO][6327] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" May 8 00:39:28.264536 containerd[1549]: 2025-05-08 00:39:28.170 [INFO][6327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" May 8 00:39:28.264536 containerd[1549]: 2025-05-08 00:39:28.207 [INFO][6342] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" HandleID="k8s-pod-network.13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:28.264536 containerd[1549]: 2025-05-08 00:39:28.208 [INFO][6342] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:28.264536 containerd[1549]: 2025-05-08 00:39:28.208 [INFO][6342] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:28.264536 containerd[1549]: 2025-05-08 00:39:28.256 [INFO][6342] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" HandleID="k8s-pod-network.13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:28.264536 containerd[1549]: 2025-05-08 00:39:28.256 [INFO][6342] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" HandleID="k8s-pod-network.13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:28.264536 containerd[1549]: 2025-05-08 00:39:28.258 [INFO][6342] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:28.264536 containerd[1549]: 2025-05-08 00:39:28.260 [INFO][6327] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" May 8 00:39:28.267058 containerd[1549]: time="2025-05-08T00:39:28.264890845Z" level=info msg="TearDown network for sandbox \"13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec\" successfully" May 8 00:39:28.267058 containerd[1549]: time="2025-05-08T00:39:28.264924264Z" level=info msg="StopPodSandbox for \"13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec\" returns successfully" May 8 00:39:28.267058 containerd[1549]: time="2025-05-08T00:39:28.265978460Z" level=info msg="StopPodSandbox for \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\"" May 8 00:39:28.267720 systemd[1]: run-netns-cni\x2d2708a0fe\x2df6da\x2d3a08\x2d843c\x2dc281298425fe.mount: Deactivated successfully. 
May 8 00:39:28.277523 kubelet[2711]: I0508 00:39:28.277470 2711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" May 8 00:39:28.294945 containerd[1549]: time="2025-05-08T00:39:28.294558546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:28.295700 containerd[1549]: time="2025-05-08T00:39:28.295681212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 8 00:39:28.296204 containerd[1549]: time="2025-05-08T00:39:28.296191289Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:28.297664 containerd[1549]: time="2025-05-08T00:39:28.297651201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:28.298499 containerd[1549]: time="2025-05-08T00:39:28.298485248Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.980367049s" May 8 00:39:28.298553 containerd[1549]: time="2025-05-08T00:39:28.298543967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 8 00:39:28.332191 containerd[1549]: time="2025-05-08T00:39:28.332167912Z" level=info msg="CreateContainer within sandbox \"8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 8 00:39:28.334880 containerd[1549]: 2025-05-08 00:39:28.305 [WARNING][6364] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0", GenerateName:"calico-apiserver-577f55d98d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ec7b516-7af4-4ea9-8c59-9667bc29c59e", ResourceVersion:"1259", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"577f55d98d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec", Pod:"calico-apiserver-577f55d98d-45j6l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2417d36f95d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:28.334880 containerd[1549]: 2025-05-08 00:39:28.306 [INFO][6364] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:39:28.334880 containerd[1549]: 2025-05-08 00:39:28.306 [INFO][6364] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" iface="eth0" netns="" May 8 00:39:28.334880 containerd[1549]: 2025-05-08 00:39:28.306 [INFO][6364] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:39:28.334880 containerd[1549]: 2025-05-08 00:39:28.306 [INFO][6364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:39:28.334880 containerd[1549]: 2025-05-08 00:39:28.327 [INFO][6371] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" HandleID="k8s-pod-network.02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:28.334880 containerd[1549]: 2025-05-08 00:39:28.327 [INFO][6371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:28.334880 containerd[1549]: 2025-05-08 00:39:28.327 [INFO][6371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:28.334880 containerd[1549]: 2025-05-08 00:39:28.331 [WARNING][6371] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" HandleID="k8s-pod-network.02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:28.334880 containerd[1549]: 2025-05-08 00:39:28.331 [INFO][6371] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" HandleID="k8s-pod-network.02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:28.334880 containerd[1549]: 2025-05-08 00:39:28.332 [INFO][6371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:28.334880 containerd[1549]: 2025-05-08 00:39:28.333 [INFO][6364] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:39:28.335486 containerd[1549]: time="2025-05-08T00:39:28.335233875Z" level=info msg="TearDown network for sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\" successfully" May 8 00:39:28.335486 containerd[1549]: time="2025-05-08T00:39:28.335247047Z" level=info msg="StopPodSandbox for \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\" returns successfully" May 8 00:39:28.340443 containerd[1549]: time="2025-05-08T00:39:28.340371536Z" level=info msg="CreateContainer within sandbox \"8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e7c1a90d2672c0b4687265f7d7342b7b952ee10ef32988ab642b631e6470cba9\"" May 8 00:39:28.343743 containerd[1549]: time="2025-05-08T00:39:28.343729618Z" level=info msg="StartContainer for \"e7c1a90d2672c0b4687265f7d7342b7b952ee10ef32988ab642b631e6470cba9\"" May 8 00:39:28.374109 systemd[1]: Started cri-containerd-e7c1a90d2672c0b4687265f7d7342b7b952ee10ef32988ab642b631e6470cba9.scope - libcontainer container e7c1a90d2672c0b4687265f7d7342b7b952ee10ef32988ab642b631e6470cba9. May 8 00:39:28.398209 containerd[1549]: time="2025-05-08T00:39:28.398175107Z" level=info msg="StartContainer for \"e7c1a90d2672c0b4687265f7d7342b7b952ee10ef32988ab642b631e6470cba9\" returns successfully" May 8 00:39:28.478770 kubelet[2711]: I0508 00:39:28.478586 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljs4f\" (UniqueName: \"kubernetes.io/projected/0ec7b516-7af4-4ea9-8c59-9667bc29c59e-kube-api-access-ljs4f\") pod \"0ec7b516-7af4-4ea9-8c59-9667bc29c59e\" (UID: \"0ec7b516-7af4-4ea9-8c59-9667bc29c59e\") " May 8 00:39:28.478770 kubelet[2711]: I0508 00:39:28.478615 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0ec7b516-7af4-4ea9-8c59-9667bc29c59e-calico-apiserver-certs\") pod \"0ec7b516-7af4-4ea9-8c59-9667bc29c59e\" (UID: \"0ec7b516-7af4-4ea9-8c59-9667bc29c59e\") " May 8 00:39:28.502341 kubelet[2711]: I0508 00:39:28.502305 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec7b516-7af4-4ea9-8c59-9667bc29c59e-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "0ec7b516-7af4-4ea9-8c59-9667bc29c59e" (UID: "0ec7b516-7af4-4ea9-8c59-9667bc29c59e"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:39:28.503437 kubelet[2711]: I0508 00:39:28.503420 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec7b516-7af4-4ea9-8c59-9667bc29c59e-kube-api-access-ljs4f" (OuterVolumeSpecName: "kube-api-access-ljs4f") pod "0ec7b516-7af4-4ea9-8c59-9667bc29c59e" (UID: "0ec7b516-7af4-4ea9-8c59-9667bc29c59e"). InnerVolumeSpecName "kube-api-access-ljs4f". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:39:28.579862 kubelet[2711]: I0508 00:39:28.579822 2711 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ljs4f\" (UniqueName: \"kubernetes.io/projected/0ec7b516-7af4-4ea9-8c59-9667bc29c59e-kube-api-access-ljs4f\") on node \"localhost\" DevicePath \"\"" May 8 00:39:28.579862 kubelet[2711]: I0508 00:39:28.579842 2711 reconciler_common.go:288] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0ec7b516-7af4-4ea9-8c59-9667bc29c59e-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" May 8 00:39:28.651409 systemd[1]: Started sshd@13-139.178.70.103:22-139.178.68.195:33924.service - OpenSSH per-connection server daemon (139.178.68.195:33924). May 8 00:39:28.750313 sshd[6416]: Accepted publickey for core from 139.178.68.195 port 33924 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:39:28.753218 sshd[6416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:28.757188 systemd-logind[1523]: New session 15 of user core. May 8 00:39:28.763209 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 00:39:28.798944 kubelet[2711]: I0508 00:39:28.798880 2711 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 8 00:39:28.798944 kubelet[2711]: I0508 00:39:28.798928 2711 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 8 00:39:29.267106 systemd[1]: run-containerd-runc-k8s.io-e7c1a90d2672c0b4687265f7d7342b7b952ee10ef32988ab642b631e6470cba9-runc.8DfUeQ.mount: Deactivated successfully. May 8 00:39:29.267177 systemd[1]: var-lib-kubelet-pods-0ec7b516\x2d7af4\x2d4ea9\x2d8c59\x2d9667bc29c59e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dljs4f.mount: Deactivated successfully. May 8 00:39:29.267217 systemd[1]: var-lib-kubelet-pods-0ec7b516\x2d7af4\x2d4ea9\x2d8c59\x2d9667bc29c59e-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 8 00:39:29.351870 systemd[1]: Removed slice kubepods-besteffort-pod0ec7b516_7af4_4ea9_8c59_9667bc29c59e.slice - libcontainer container kubepods-besteffort-pod0ec7b516_7af4_4ea9_8c59_9667bc29c59e.slice. 
May 8 00:39:29.389822 kubelet[2711]: I0508 00:39:29.373357 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gvqtb" podStartSLOduration=79.044002515 podStartE2EDuration="1m24.373340072s" podCreationTimestamp="2025-05-08 00:38:05 +0000 UTC" firstStartedPulling="2025-05-08 00:39:22.979921625 +0000 UTC m=+90.942829713" lastFinishedPulling="2025-05-08 00:39:28.309259181 +0000 UTC m=+96.272167270" observedRunningTime="2025-05-08 00:39:29.340663438 +0000 UTC m=+97.303571535" watchObservedRunningTime="2025-05-08 00:39:29.373340072 +0000 UTC m=+97.336248164" May 8 00:39:29.430637 sshd[6416]: pam_unix(sshd:session): session closed for user core May 8 00:39:29.434107 systemd[1]: sshd@13-139.178.70.103:22-139.178.68.195:33924.service: Deactivated successfully. May 8 00:39:29.435656 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:39:29.436896 systemd-logind[1523]: Session 15 logged out. Waiting for processes to exit. May 8 00:39:29.437814 systemd-logind[1523]: Removed session 15. May 8 00:39:30.281622 kubelet[2711]: I0508 00:39:30.281471 2711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ec7b516-7af4-4ea9-8c59-9667bc29c59e" path="/var/lib/kubelet/pods/0ec7b516-7af4-4ea9-8c59-9667bc29c59e/volumes" May 8 00:39:34.452102 systemd[1]: Started sshd@14-139.178.70.103:22-139.178.68.195:33938.service - OpenSSH per-connection server daemon (139.178.68.195:33938). May 8 00:39:34.576130 sshd[6438]: Accepted publickey for core from 139.178.68.195 port 33938 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:39:34.577147 sshd[6438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:34.580621 systemd-logind[1523]: New session 16 of user core. May 8 00:39:34.586022 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 00:39:34.792044 sshd[6438]: pam_unix(sshd:session): session closed for user core May 8 00:39:34.807878 systemd[1]: sshd@14-139.178.70.103:22-139.178.68.195:33938.service: Deactivated successfully. May 8 00:39:34.809403 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:39:34.809888 systemd-logind[1523]: Session 16 logged out. Waiting for processes to exit. May 8 00:39:34.810836 systemd-logind[1523]: Removed session 16. May 8 00:39:39.795289 systemd[1]: Started sshd@15-139.178.70.103:22-139.178.68.195:57030.service - OpenSSH per-connection server daemon (139.178.68.195:57030). May 8 00:39:40.201129 sshd[6482]: Accepted publickey for core from 139.178.68.195 port 57030 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:39:40.202853 sshd[6482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:40.206688 systemd-logind[1523]: New session 17 of user core. May 8 00:39:40.209071 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:39:40.431972 sshd[6482]: pam_unix(sshd:session): session closed for user core May 8 00:39:40.433899 systemd-logind[1523]: Session 17 logged out. Waiting for processes to exit. May 8 00:39:40.434106 systemd[1]: sshd@15-139.178.70.103:22-139.178.68.195:57030.service: Deactivated successfully. May 8 00:39:40.435222 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:39:40.436561 systemd-logind[1523]: Removed session 17. 
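The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration (1m24.373340072s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (79.044002515s) is that same span minus the image pull window (firstStartedPulling to lastFinishedPulling), to within a nanosecond of rounding. A quick Go check of the arithmetic, using the timestamps from the entry:

package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			log.Fatal(err)
		}
		return t
	}
	created := parse("2025-05-08 00:38:05 +0000 UTC")
	firstPull := parse("2025-05-08 00:39:22.979921625 +0000 UTC")
	lastPull := parse("2025-05-08 00:39:28.309259181 +0000 UTC")
	running := parse("2025-05-08 00:39:29.373340072 +0000 UTC")

	pull := lastPull.Sub(firstPull)
	e2e := running.Sub(created)
	fmt.Println("image pull:", pull)     // 5.329337556s
	fmt.Println("e2e:       ", e2e)      // 1m24.373340072s
	fmt.Println("slo:       ", e2e-pull) // ~1m19.044002516s
}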
May 8 00:39:43.016653 containerd[1549]: time="2025-05-08T00:39:43.016559784Z" level=info msg="StopContainer for \"894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c\" with timeout 30 (s)" May 8 00:39:43.017023 containerd[1549]: time="2025-05-08T00:39:43.016979022Z" level=info msg="Stop container \"894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c\" with signal terminated" May 8 00:39:43.030554 systemd[1]: cri-containerd-894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c.scope: Deactivated successfully. May 8 00:39:43.047263 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c-rootfs.mount: Deactivated successfully. May 8 00:39:43.047487 containerd[1549]: time="2025-05-08T00:39:43.047436402Z" level=info msg="shim disconnected" id=894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c namespace=k8s.io May 8 00:39:43.047487 containerd[1549]: time="2025-05-08T00:39:43.047481816Z" level=warning msg="cleaning up after shim disconnected" id=894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c namespace=k8s.io May 8 00:39:43.047562 containerd[1549]: time="2025-05-08T00:39:43.047487762Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:39:43.070855 containerd[1549]: time="2025-05-08T00:39:43.070772253Z" level=info msg="StopContainer for \"894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c\" returns successfully" May 8 00:39:43.074069 containerd[1549]: time="2025-05-08T00:39:43.073963603Z" level=info msg="StopPodSandbox for \"0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97\"" May 8 00:39:43.074069 containerd[1549]: time="2025-05-08T00:39:43.074004329Z" level=info msg="Container to stop \"894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:39:43.075588 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97-shm.mount: Deactivated successfully. May 8 00:39:43.080199 systemd[1]: cri-containerd-0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97.scope: Deactivated successfully. May 8 00:39:43.094179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97-rootfs.mount: Deactivated successfully. 
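"StopContainer ... with timeout 30 (s)" followed by "with signal terminated" above is the standard CRI stop sequence: SIGTERM first, wait out the grace period, SIGKILL on expiry. Below is a minimal sketch of that sequence against the public containerd Go client, assuming the default socket path; it mirrors the flow but is not the CRI plugin's actual code.

package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the k8s.io namespace, matching the
	// namespace=k8s.io field on the shim log lines above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	c, err := client.LoadContainer(ctx, "894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c")
	if err != nil {
		log.Fatal(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	exitCh, err := task.Wait(ctx) // subscribe before killing to avoid a race
	if err != nil {
		log.Fatal(err)
	}

	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}
	select {
	case status := <-exitCh:
		log.Printf("exited with status %d", status.ExitCode())
	case <-time.After(30 * time.Second): // the "timeout 30 (s)" grace period
		_ = task.Kill(ctx, syscall.SIGKILL)
	}
}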
May 8 00:39:43.096167 containerd[1549]: time="2025-05-08T00:39:43.094849757Z" level=info msg="shim disconnected" id=0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97 namespace=k8s.io May 8 00:39:43.096167 containerd[1549]: time="2025-05-08T00:39:43.094882192Z" level=warning msg="cleaning up after shim disconnected" id=0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97 namespace=k8s.io May 8 00:39:43.096167 containerd[1549]: time="2025-05-08T00:39:43.094887590Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:39:43.105556 containerd[1549]: time="2025-05-08T00:39:43.105503786Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:39:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:39:43.147761 systemd-networkd[1452]: califc3aa0a806f: Link DOWN May 8 00:39:43.147766 systemd-networkd[1452]: califc3aa0a806f: Lost carrier May 8 00:39:43.204944 containerd[1549]: 2025-05-08 00:39:43.146 [INFO][6572] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" May 8 00:39:43.204944 containerd[1549]: 2025-05-08 00:39:43.146 [INFO][6572] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" iface="eth0" netns="/var/run/netns/cni-b1184c9b-7c89-cd17-d9c7-e48d2ebdc46d" May 8 00:39:43.204944 containerd[1549]: 2025-05-08 00:39:43.147 [INFO][6572] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" iface="eth0" netns="/var/run/netns/cni-b1184c9b-7c89-cd17-d9c7-e48d2ebdc46d" May 8 00:39:43.204944 containerd[1549]: 2025-05-08 00:39:43.162 [INFO][6572] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" after=15.159109ms iface="eth0" netns="/var/run/netns/cni-b1184c9b-7c89-cd17-d9c7-e48d2ebdc46d" May 8 00:39:43.204944 containerd[1549]: 2025-05-08 00:39:43.162 [INFO][6572] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" May 8 00:39:43.204944 containerd[1549]: 2025-05-08 00:39:43.162 [INFO][6572] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" May 8 00:39:43.204944 containerd[1549]: 2025-05-08 00:39:43.182 [INFO][6583] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" HandleID="k8s-pod-network.0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:43.204944 containerd[1549]: 2025-05-08 00:39:43.182 [INFO][6583] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:43.204944 containerd[1549]: 2025-05-08 00:39:43.182 [INFO][6583] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:39:43.204944 containerd[1549]: 2025-05-08 00:39:43.200 [INFO][6583] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" HandleID="k8s-pod-network.0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:43.204944 containerd[1549]: 2025-05-08 00:39:43.200 [INFO][6583] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" HandleID="k8s-pod-network.0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:43.204944 containerd[1549]: 2025-05-08 00:39:43.201 [INFO][6583] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:43.204944 containerd[1549]: 2025-05-08 00:39:43.203 [INFO][6572] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" May 8 00:39:43.207549 systemd[1]: run-netns-cni\x2db1184c9b\x2d7c89\x2dcd17\x2dd9c7\x2de48d2ebdc46d.mount: Deactivated successfully. May 8 00:39:43.208069 containerd[1549]: time="2025-05-08T00:39:43.207984873Z" level=info msg="TearDown network for sandbox \"0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97\" successfully" May 8 00:39:43.208069 containerd[1549]: time="2025-05-08T00:39:43.208010489Z" level=info msg="StopPodSandbox for \"0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97\" returns successfully" May 8 00:39:43.208493 containerd[1549]: time="2025-05-08T00:39:43.208323440Z" level=info msg="StopPodSandbox for \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\"" May 8 00:39:43.251065 containerd[1549]: 2025-05-08 00:39:43.231 [WARNING][6604] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0", GenerateName:"calico-apiserver-577f55d98d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ee8fd8b-a4bc-43c5-bff4-0631d474067b", ResourceVersion:"1371", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"577f55d98d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97", Pod:"calico-apiserver-577f55d98d-bjswh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc3aa0a806f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:43.251065 containerd[1549]: 2025-05-08 00:39:43.231 [INFO][6604] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:39:43.251065 containerd[1549]: 2025-05-08 00:39:43.231 [INFO][6604] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" iface="eth0" netns="" May 8 00:39:43.251065 containerd[1549]: 2025-05-08 00:39:43.231 [INFO][6604] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:39:43.251065 containerd[1549]: 2025-05-08 00:39:43.231 [INFO][6604] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:39:43.251065 containerd[1549]: 2025-05-08 00:39:43.244 [INFO][6611] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" HandleID="k8s-pod-network.a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:43.251065 containerd[1549]: 2025-05-08 00:39:43.244 [INFO][6611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:43.251065 containerd[1549]: 2025-05-08 00:39:43.244 [INFO][6611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:43.251065 containerd[1549]: 2025-05-08 00:39:43.247 [WARNING][6611] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" HandleID="k8s-pod-network.a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:43.251065 containerd[1549]: 2025-05-08 00:39:43.247 [INFO][6611] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" HandleID="k8s-pod-network.a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:43.251065 containerd[1549]: 2025-05-08 00:39:43.248 [INFO][6611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:43.251065 containerd[1549]: 2025-05-08 00:39:43.249 [INFO][6604] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:39:43.252077 containerd[1549]: time="2025-05-08T00:39:43.251102272Z" level=info msg="TearDown network for sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\" successfully" May 8 00:39:43.252077 containerd[1549]: time="2025-05-08T00:39:43.251117433Z" level=info msg="StopPodSandbox for \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\" returns successfully" May 8 00:39:43.396244 kubelet[2711]: I0508 00:39:43.396179 2711 scope.go:117] "RemoveContainer" containerID="894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c" May 8 00:39:43.398590 containerd[1549]: time="2025-05-08T00:39:43.398569530Z" level=info msg="RemoveContainer for \"894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c\"" May 8 00:39:43.399897 containerd[1549]: time="2025-05-08T00:39:43.399882176Z" level=info msg="RemoveContainer for \"894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c\" returns successfully" May 8 00:39:43.400040 kubelet[2711]: I0508 00:39:43.399993 2711 scope.go:117] "RemoveContainer" containerID="894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c" May 8 00:39:43.400115 containerd[1549]: time="2025-05-08T00:39:43.400088785Z" level=error msg="ContainerStatus for \"894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c\": not found" May 8 00:39:43.417335 kubelet[2711]: E0508 00:39:43.417282 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c\": not found" containerID="894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c" May 8 00:39:43.417335 kubelet[2711]: I0508 00:39:43.417314 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c"} err="failed to get container status \"894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c\": rpc error: code = NotFound desc = an error occurred when try to find container \"894cf5dddf66dd52947d7424075502d3a56f4980c1a2382c0da7349b339eb06c\": not found" May 8 00:39:43.446478 kubelet[2711]: I0508 00:39:43.446464 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr8h8\" (UniqueName: 
\"kubernetes.io/projected/2ee8fd8b-a4bc-43c5-bff4-0631d474067b-kube-api-access-qr8h8\") pod \"2ee8fd8b-a4bc-43c5-bff4-0631d474067b\" (UID: \"2ee8fd8b-a4bc-43c5-bff4-0631d474067b\") " May 8 00:39:43.446533 kubelet[2711]: I0508 00:39:43.446485 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2ee8fd8b-a4bc-43c5-bff4-0631d474067b-calico-apiserver-certs\") pod \"2ee8fd8b-a4bc-43c5-bff4-0631d474067b\" (UID: \"2ee8fd8b-a4bc-43c5-bff4-0631d474067b\") " May 8 00:39:43.453894 systemd[1]: var-lib-kubelet-pods-2ee8fd8b\x2da4bc\x2d43c5\x2dbff4\x2d0631d474067b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqr8h8.mount: Deactivated successfully. May 8 00:39:43.460063 kubelet[2711]: I0508 00:39:43.458338 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ee8fd8b-a4bc-43c5-bff4-0631d474067b-kube-api-access-qr8h8" (OuterVolumeSpecName: "kube-api-access-qr8h8") pod "2ee8fd8b-a4bc-43c5-bff4-0631d474067b" (UID: "2ee8fd8b-a4bc-43c5-bff4-0631d474067b"). InnerVolumeSpecName "kube-api-access-qr8h8". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:39:43.460154 kubelet[2711]: I0508 00:39:43.459578 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ee8fd8b-a4bc-43c5-bff4-0631d474067b-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "2ee8fd8b-a4bc-43c5-bff4-0631d474067b" (UID: "2ee8fd8b-a4bc-43c5-bff4-0631d474067b"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:39:43.547278 kubelet[2711]: I0508 00:39:43.547255 2711 reconciler_common.go:288] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2ee8fd8b-a4bc-43c5-bff4-0631d474067b-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" May 8 00:39:43.547395 kubelet[2711]: I0508 00:39:43.547383 2711 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qr8h8\" (UniqueName: \"kubernetes.io/projected/2ee8fd8b-a4bc-43c5-bff4-0631d474067b-kube-api-access-qr8h8\") on node \"localhost\" DevicePath \"\"" May 8 00:39:43.684617 systemd[1]: Removed slice kubepods-besteffort-pod2ee8fd8b_a4bc_43c5_bff4_0631d474067b.slice - libcontainer container kubepods-besteffort-pod2ee8fd8b_a4bc_43c5_bff4_0631d474067b.slice. May 8 00:39:44.048283 systemd[1]: var-lib-kubelet-pods-2ee8fd8b\x2da4bc\x2d43c5\x2dbff4\x2d0631d474067b-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 8 00:39:44.277893 kubelet[2711]: I0508 00:39:44.277688 2711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ee8fd8b-a4bc-43c5-bff4-0631d474067b" path="/var/lib/kubelet/pods/2ee8fd8b-a4bc-43c5-bff4-0631d474067b/volumes" May 8 00:39:45.444692 systemd[1]: Started sshd@16-139.178.70.103:22-139.178.68.195:40388.service - OpenSSH per-connection server daemon (139.178.68.195:40388). May 8 00:39:45.490690 sshd[6622]: Accepted publickey for core from 139.178.68.195 port 40388 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:39:45.491817 sshd[6622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:45.494949 systemd-logind[1523]: New session 18 of user core. May 8 00:39:45.503031 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 8 00:39:45.594306 sshd[6622]: pam_unix(sshd:session): session closed for user core May 8 00:39:45.599509 systemd[1]: sshd@16-139.178.70.103:22-139.178.68.195:40388.service: Deactivated successfully. May 8 00:39:45.600611 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:39:45.601520 systemd-logind[1523]: Session 18 logged out. Waiting for processes to exit. May 8 00:39:45.612161 systemd[1]: Started sshd@17-139.178.70.103:22-139.178.68.195:40390.service - OpenSSH per-connection server daemon (139.178.68.195:40390). May 8 00:39:45.613229 systemd-logind[1523]: Removed session 18. May 8 00:39:45.641341 sshd[6634]: Accepted publickey for core from 139.178.68.195 port 40390 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:39:45.642299 sshd[6634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:45.645423 systemd-logind[1523]: New session 19 of user core. May 8 00:39:45.648988 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 00:39:46.786112 sshd[6634]: pam_unix(sshd:session): session closed for user core May 8 00:39:46.792749 systemd[1]: sshd@17-139.178.70.103:22-139.178.68.195:40390.service: Deactivated successfully. May 8 00:39:46.794270 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:39:46.795133 systemd-logind[1523]: Session 19 logged out. Waiting for processes to exit. May 8 00:39:46.796925 systemd[1]: Started sshd@18-139.178.70.103:22-139.178.68.195:40392.service - OpenSSH per-connection server daemon (139.178.68.195:40392). May 8 00:39:46.797764 systemd-logind[1523]: Removed session 19. May 8 00:39:46.830504 sshd[6645]: Accepted publickey for core from 139.178.68.195 port 40392 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:39:46.831392 sshd[6645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:46.833828 systemd-logind[1523]: New session 20 of user core. May 8 00:39:46.838013 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 00:39:49.316967 sshd[6645]: pam_unix(sshd:session): session closed for user core May 8 00:39:49.323266 systemd[1]: sshd@18-139.178.70.103:22-139.178.68.195:40392.service: Deactivated successfully. May 8 00:39:49.324711 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:39:49.326694 systemd-logind[1523]: Session 20 logged out. Waiting for processes to exit. May 8 00:39:49.330418 systemd[1]: Started sshd@19-139.178.70.103:22-139.178.68.195:40398.service - OpenSSH per-connection server daemon (139.178.68.195:40398). May 8 00:39:49.331429 systemd-logind[1523]: Removed session 20. May 8 00:39:49.569720 sshd[6688]: Accepted publickey for core from 139.178.68.195 port 40398 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:39:49.570280 sshd[6688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:49.572643 systemd-logind[1523]: New session 21 of user core. May 8 00:39:49.580032 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 00:39:50.136514 sshd[6688]: pam_unix(sshd:session): session closed for user core May 8 00:39:50.142478 systemd[1]: sshd@19-139.178.70.103:22-139.178.68.195:40398.service: Deactivated successfully. May 8 00:39:50.143832 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:39:50.144695 systemd-logind[1523]: Session 21 logged out. Waiting for processes to exit. 
May 8 00:39:50.145537 systemd[1]: Started sshd@20-139.178.70.103:22-139.178.68.195:40402.service - OpenSSH per-connection server daemon (139.178.68.195:40402). May 8 00:39:50.146341 systemd-logind[1523]: Removed session 21. May 8 00:39:50.223839 sshd[6700]: Accepted publickey for core from 139.178.68.195 port 40402 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:39:50.225194 sshd[6700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:50.228671 systemd-logind[1523]: New session 22 of user core. May 8 00:39:50.234113 systemd[1]: Started session-22.scope - Session 22 of User core. May 8 00:39:50.346852 sshd[6700]: pam_unix(sshd:session): session closed for user core May 8 00:39:50.349788 systemd[1]: sshd@20-139.178.70.103:22-139.178.68.195:40402.service: Deactivated successfully. May 8 00:39:50.351209 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:39:50.351839 systemd-logind[1523]: Session 22 logged out. Waiting for processes to exit. May 8 00:39:50.352987 systemd-logind[1523]: Removed session 22. May 8 00:39:52.867588 kubelet[2711]: I0508 00:39:52.867155 2711 scope.go:117] "RemoveContainer" containerID="9bce52f4ccbfdb5eaef9a4cd8fe749af117d66480e7cf540524e009a126ed982" May 8 00:39:52.916821 containerd[1549]: time="2025-05-08T00:39:52.916793279Z" level=info msg="RemoveContainer for \"9bce52f4ccbfdb5eaef9a4cd8fe749af117d66480e7cf540524e009a126ed982\"" May 8 00:39:52.961240 containerd[1549]: time="2025-05-08T00:39:52.961034883Z" level=info msg="RemoveContainer for \"9bce52f4ccbfdb5eaef9a4cd8fe749af117d66480e7cf540524e009a126ed982\" returns successfully" May 8 00:39:53.000647 containerd[1549]: time="2025-05-08T00:39:53.000468508Z" level=info msg="StopPodSandbox for \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\"" May 8 00:39:53.669052 containerd[1549]: 2025-05-08 00:39:53.124 [WARNING][6727] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--gljcw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bcac0e1d-a484-4def-954d-e37294951ec1", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706", Pod:"coredns-6f6b679f8f-gljcw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebeadd467c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:53.669052 containerd[1549]: 2025-05-08 00:39:53.124 [INFO][6727] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" May 8 00:39:53.669052 containerd[1549]: 2025-05-08 00:39:53.124 [INFO][6727] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" iface="eth0" netns="" May 8 00:39:53.669052 containerd[1549]: 2025-05-08 00:39:53.124 [INFO][6727] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" May 8 00:39:53.669052 containerd[1549]: 2025-05-08 00:39:53.124 [INFO][6727] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" May 8 00:39:53.669052 containerd[1549]: 2025-05-08 00:39:53.657 [INFO][6735] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" HandleID="k8s-pod-network.ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" Workload="localhost-k8s-coredns--6f6b679f8f--gljcw-eth0" May 8 00:39:53.669052 containerd[1549]: 2025-05-08 00:39:53.658 [INFO][6735] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:53.669052 containerd[1549]: 2025-05-08 00:39:53.658 [INFO][6735] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:53.669052 containerd[1549]: 2025-05-08 00:39:53.665 [WARNING][6735] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" HandleID="k8s-pod-network.ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" Workload="localhost-k8s-coredns--6f6b679f8f--gljcw-eth0" May 8 00:39:53.669052 containerd[1549]: 2025-05-08 00:39:53.665 [INFO][6735] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" HandleID="k8s-pod-network.ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" Workload="localhost-k8s-coredns--6f6b679f8f--gljcw-eth0" May 8 00:39:53.669052 containerd[1549]: 2025-05-08 00:39:53.666 [INFO][6735] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:53.669052 containerd[1549]: 2025-05-08 00:39:53.667 [INFO][6727] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" May 8 00:39:53.669684 containerd[1549]: time="2025-05-08T00:39:53.669045986Z" level=info msg="TearDown network for sandbox \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\" successfully" May 8 00:39:53.669684 containerd[1549]: time="2025-05-08T00:39:53.669071459Z" level=info msg="StopPodSandbox for \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\" returns successfully" May 8 00:39:54.004260 containerd[1549]: time="2025-05-08T00:39:54.004170956Z" level=info msg="RemovePodSandbox for \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\"" May 8 00:39:54.013358 containerd[1549]: time="2025-05-08T00:39:54.012453748Z" level=info msg="Forcibly stopping sandbox \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\"" May 8 00:39:54.095250 containerd[1549]: 2025-05-08 00:39:54.052 [WARNING][6755] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--gljcw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bcac0e1d-a484-4def-954d-e37294951ec1", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b17d88e3ae911616a18e0430acb10a9c6d9a5911ab5fd1b39b327d132ef9b706", Pod:"coredns-6f6b679f8f-gljcw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebeadd467c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:54.095250 containerd[1549]: 2025-05-08 00:39:54.052 [INFO][6755] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" May 8 00:39:54.095250 containerd[1549]: 2025-05-08 00:39:54.052 [INFO][6755] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" iface="eth0" netns="" May 8 00:39:54.095250 containerd[1549]: 2025-05-08 00:39:54.052 [INFO][6755] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" May 8 00:39:54.095250 containerd[1549]: 2025-05-08 00:39:54.052 [INFO][6755] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" May 8 00:39:54.095250 containerd[1549]: 2025-05-08 00:39:54.089 [INFO][6762] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" HandleID="k8s-pod-network.ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" Workload="localhost-k8s-coredns--6f6b679f8f--gljcw-eth0" May 8 00:39:54.095250 containerd[1549]: 2025-05-08 00:39:54.089 [INFO][6762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:54.095250 containerd[1549]: 2025-05-08 00:39:54.089 [INFO][6762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:54.095250 containerd[1549]: 2025-05-08 00:39:54.092 [WARNING][6762] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" HandleID="k8s-pod-network.ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" Workload="localhost-k8s-coredns--6f6b679f8f--gljcw-eth0" May 8 00:39:54.095250 containerd[1549]: 2025-05-08 00:39:54.092 [INFO][6762] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" HandleID="k8s-pod-network.ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" Workload="localhost-k8s-coredns--6f6b679f8f--gljcw-eth0" May 8 00:39:54.095250 containerd[1549]: 2025-05-08 00:39:54.093 [INFO][6762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:54.095250 containerd[1549]: 2025-05-08 00:39:54.094 [INFO][6755] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762" May 8 00:39:54.109076 containerd[1549]: time="2025-05-08T00:39:54.095272843Z" level=info msg="TearDown network for sandbox \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\" successfully" May 8 00:39:54.133836 containerd[1549]: time="2025-05-08T00:39:54.133703806Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:39:54.173767 containerd[1549]: time="2025-05-08T00:39:54.173676894Z" level=info msg="RemovePodSandbox \"ea6fdc4d30b52fe236d41eb9b05caaa0c0f42177777cd5e3a87c421991fcd762\" returns successfully" May 8 00:39:54.175742 containerd[1549]: time="2025-05-08T00:39:54.175723719Z" level=info msg="StopPodSandbox for \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\"" May 8 00:39:54.279543 containerd[1549]: 2025-05-08 00:39:54.214 [WARNING][6780] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gvqtb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06105806-56a1-4100-9953-11ff7427bd13", ResourceVersion:"1276", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c", Pod:"csi-node-driver-gvqtb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5d79671ee61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:54.279543 containerd[1549]: 2025-05-08 00:39:54.214 [INFO][6780] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" May 8 00:39:54.279543 containerd[1549]: 2025-05-08 00:39:54.214 [INFO][6780] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" iface="eth0" netns="" May 8 00:39:54.279543 containerd[1549]: 2025-05-08 00:39:54.214 [INFO][6780] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" May 8 00:39:54.279543 containerd[1549]: 2025-05-08 00:39:54.214 [INFO][6780] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" May 8 00:39:54.279543 containerd[1549]: 2025-05-08 00:39:54.271 [INFO][6787] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" HandleID="k8s-pod-network.12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" Workload="localhost-k8s-csi--node--driver--gvqtb-eth0" May 8 00:39:54.279543 containerd[1549]: 2025-05-08 00:39:54.272 [INFO][6787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:54.279543 containerd[1549]: 2025-05-08 00:39:54.272 [INFO][6787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:54.279543 containerd[1549]: 2025-05-08 00:39:54.275 [WARNING][6787] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" HandleID="k8s-pod-network.12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" Workload="localhost-k8s-csi--node--driver--gvqtb-eth0" May 8 00:39:54.279543 containerd[1549]: 2025-05-08 00:39:54.275 [INFO][6787] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" HandleID="k8s-pod-network.12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" Workload="localhost-k8s-csi--node--driver--gvqtb-eth0" May 8 00:39:54.279543 containerd[1549]: 2025-05-08 00:39:54.277 [INFO][6787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:54.279543 containerd[1549]: 2025-05-08 00:39:54.278 [INFO][6780] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" May 8 00:39:54.279543 containerd[1549]: time="2025-05-08T00:39:54.279455509Z" level=info msg="TearDown network for sandbox \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\" successfully" May 8 00:39:54.279543 containerd[1549]: time="2025-05-08T00:39:54.279467773Z" level=info msg="StopPodSandbox for \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\" returns successfully" May 8 00:39:54.280455 containerd[1549]: time="2025-05-08T00:39:54.280303460Z" level=info msg="RemovePodSandbox for \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\"" May 8 00:39:54.280455 containerd[1549]: time="2025-05-08T00:39:54.280322221Z" level=info msg="Forcibly stopping sandbox \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\"" May 8 00:39:54.361805 containerd[1549]: 2025-05-08 00:39:54.343 [WARNING][6805] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gvqtb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06105806-56a1-4100-9953-11ff7427bd13", ResourceVersion:"1276", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ae7f98b82e7c7b9a5ae83d03fba697e333407c99d5bb693cde080979879a83c", Pod:"csi-node-driver-gvqtb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5d79671ee61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:54.361805 containerd[1549]: 2025-05-08 00:39:54.343 [INFO][6805] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" May 8 00:39:54.361805 containerd[1549]: 2025-05-08 00:39:54.343 [INFO][6805] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" iface="eth0" netns="" May 8 00:39:54.361805 containerd[1549]: 2025-05-08 00:39:54.343 [INFO][6805] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" May 8 00:39:54.361805 containerd[1549]: 2025-05-08 00:39:54.343 [INFO][6805] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" May 8 00:39:54.361805 containerd[1549]: 2025-05-08 00:39:54.354 [INFO][6812] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" HandleID="k8s-pod-network.12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" Workload="localhost-k8s-csi--node--driver--gvqtb-eth0" May 8 00:39:54.361805 containerd[1549]: 2025-05-08 00:39:54.354 [INFO][6812] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:54.361805 containerd[1549]: 2025-05-08 00:39:54.354 [INFO][6812] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:54.361805 containerd[1549]: 2025-05-08 00:39:54.358 [WARNING][6812] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" HandleID="k8s-pod-network.12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" Workload="localhost-k8s-csi--node--driver--gvqtb-eth0" May 8 00:39:54.361805 containerd[1549]: 2025-05-08 00:39:54.358 [INFO][6812] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" HandleID="k8s-pod-network.12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" Workload="localhost-k8s-csi--node--driver--gvqtb-eth0" May 8 00:39:54.361805 containerd[1549]: 2025-05-08 00:39:54.359 [INFO][6812] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:54.361805 containerd[1549]: 2025-05-08 00:39:54.360 [INFO][6805] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f" May 8 00:39:54.361805 containerd[1549]: time="2025-05-08T00:39:54.361179798Z" level=info msg="TearDown network for sandbox \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\" successfully" May 8 00:39:54.981880 containerd[1549]: time="2025-05-08T00:39:54.981846677Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:39:54.982119 containerd[1549]: time="2025-05-08T00:39:54.981891642Z" level=info msg="RemovePodSandbox \"12fb69a58605d525d4274518d180f680e63090aced2d6b8e9561a14c7c894f0f\" returns successfully" May 8 00:39:54.982192 containerd[1549]: time="2025-05-08T00:39:54.982175618Z" level=info msg="StopPodSandbox for \"99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363\"" May 8 00:39:54.982240 containerd[1549]: time="2025-05-08T00:39:54.982223388Z" level=info msg="TearDown network for sandbox \"99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363\" successfully" May 8 00:39:54.982240 containerd[1549]: time="2025-05-08T00:39:54.982231637Z" level=info msg="StopPodSandbox for \"99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363\" returns successfully" May 8 00:39:54.982378 containerd[1549]: time="2025-05-08T00:39:54.982364488Z" level=info msg="RemovePodSandbox for \"99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363\"" May 8 00:39:54.989635 containerd[1549]: time="2025-05-08T00:39:54.982378332Z" level=info msg="Forcibly stopping sandbox \"99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363\"" May 8 00:39:54.989635 containerd[1549]: time="2025-05-08T00:39:54.982402747Z" level=info msg="TearDown network for sandbox \"99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363\" successfully" May 8 00:39:55.015556 containerd[1549]: time="2025-05-08T00:39:55.015524587Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:39:55.015857 containerd[1549]: time="2025-05-08T00:39:55.015572344Z" level=info msg="RemovePodSandbox \"99041687496a20f2a794d955933b8708c04816e8fb3a2dc0a6cf8c4ab0a92363\" returns successfully" May 8 00:39:55.015857 containerd[1549]: time="2025-05-08T00:39:55.015815667Z" level=info msg="StopPodSandbox for \"13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec\"" May 8 00:39:55.114101 containerd[1549]: 2025-05-08 00:39:55.049 [WARNING][6833] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:55.114101 containerd[1549]: 2025-05-08 00:39:55.049 [INFO][6833] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" May 8 00:39:55.114101 containerd[1549]: 2025-05-08 00:39:55.049 [INFO][6833] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" iface="eth0" netns="" May 8 00:39:55.114101 containerd[1549]: 2025-05-08 00:39:55.049 [INFO][6833] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" May 8 00:39:55.114101 containerd[1549]: 2025-05-08 00:39:55.049 [INFO][6833] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" May 8 00:39:55.114101 containerd[1549]: 2025-05-08 00:39:55.090 [INFO][6840] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" HandleID="k8s-pod-network.13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:55.114101 containerd[1549]: 2025-05-08 00:39:55.091 [INFO][6840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:55.114101 containerd[1549]: 2025-05-08 00:39:55.091 [INFO][6840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:55.114101 containerd[1549]: 2025-05-08 00:39:55.111 [WARNING][6840] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" HandleID="k8s-pod-network.13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:55.114101 containerd[1549]: 2025-05-08 00:39:55.111 [INFO][6840] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" HandleID="k8s-pod-network.13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:55.114101 containerd[1549]: 2025-05-08 00:39:55.112 [INFO][6840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:55.114101 containerd[1549]: 2025-05-08 00:39:55.113 [INFO][6833] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" May 8 00:39:55.114101 containerd[1549]: time="2025-05-08T00:39:55.114039806Z" level=info msg="TearDown network for sandbox \"13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec\" successfully" May 8 00:39:55.114101 containerd[1549]: time="2025-05-08T00:39:55.114057096Z" level=info msg="StopPodSandbox for \"13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec\" returns successfully" May 8 00:39:55.132886 containerd[1549]: time="2025-05-08T00:39:55.114606683Z" level=info msg="RemovePodSandbox for \"13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec\"" May 8 00:39:55.132886 containerd[1549]: time="2025-05-08T00:39:55.114624534Z" level=info msg="Forcibly stopping sandbox \"13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec\"" May 8 00:39:55.172617 containerd[1549]: 2025-05-08 00:39:55.143 [WARNING][6859] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:55.172617 containerd[1549]: 2025-05-08 00:39:55.143 [INFO][6859] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" May 8 00:39:55.172617 containerd[1549]: 2025-05-08 00:39:55.143 [INFO][6859] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" iface="eth0" netns="" May 8 00:39:55.172617 containerd[1549]: 2025-05-08 00:39:55.143 [INFO][6859] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" May 8 00:39:55.172617 containerd[1549]: 2025-05-08 00:39:55.143 [INFO][6859] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" May 8 00:39:55.172617 containerd[1549]: 2025-05-08 00:39:55.165 [INFO][6866] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" HandleID="k8s-pod-network.13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:55.172617 containerd[1549]: 2025-05-08 00:39:55.165 [INFO][6866] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:55.172617 containerd[1549]: 2025-05-08 00:39:55.165 [INFO][6866] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:55.172617 containerd[1549]: 2025-05-08 00:39:55.170 [WARNING][6866] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" HandleID="k8s-pod-network.13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:55.172617 containerd[1549]: 2025-05-08 00:39:55.170 [INFO][6866] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" HandleID="k8s-pod-network.13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:55.172617 containerd[1549]: 2025-05-08 00:39:55.170 [INFO][6866] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:55.172617 containerd[1549]: 2025-05-08 00:39:55.171 [INFO][6859] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec" May 8 00:39:55.172617 containerd[1549]: time="2025-05-08T00:39:55.172597882Z" level=info msg="TearDown network for sandbox \"13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec\" successfully" May 8 00:39:55.206846 containerd[1549]: time="2025-05-08T00:39:55.206817340Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:39:55.206971 containerd[1549]: time="2025-05-08T00:39:55.206865283Z" level=info msg="RemovePodSandbox \"13f84e4e95917e9d14ab08dbf8454a408b279e48ae5bf73fa7f98e8523c8c1ec\" returns successfully" May 8 00:39:55.207339 containerd[1549]: time="2025-05-08T00:39:55.207177641Z" level=info msg="StopPodSandbox for \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\"" May 8 00:39:55.320710 containerd[1549]: 2025-05-08 00:39:55.249 [WARNING][6884] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:55.320710 containerd[1549]: 2025-05-08 00:39:55.249 [INFO][6884] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:39:55.320710 containerd[1549]: 2025-05-08 00:39:55.249 [INFO][6884] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" iface="eth0" netns="" May 8 00:39:55.320710 containerd[1549]: 2025-05-08 00:39:55.249 [INFO][6884] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:39:55.320710 containerd[1549]: 2025-05-08 00:39:55.249 [INFO][6884] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:39:55.320710 containerd[1549]: 2025-05-08 00:39:55.298 [INFO][6891] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" HandleID="k8s-pod-network.02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:55.320710 containerd[1549]: 2025-05-08 00:39:55.298 [INFO][6891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:55.320710 containerd[1549]: 2025-05-08 00:39:55.298 [INFO][6891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:55.320710 containerd[1549]: 2025-05-08 00:39:55.317 [WARNING][6891] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" HandleID="k8s-pod-network.02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:55.320710 containerd[1549]: 2025-05-08 00:39:55.317 [INFO][6891] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" HandleID="k8s-pod-network.02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:55.320710 containerd[1549]: 2025-05-08 00:39:55.318 [INFO][6891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:55.320710 containerd[1549]: 2025-05-08 00:39:55.319 [INFO][6884] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:39:55.321515 containerd[1549]: time="2025-05-08T00:39:55.321233365Z" level=info msg="TearDown network for sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\" successfully" May 8 00:39:55.321515 containerd[1549]: time="2025-05-08T00:39:55.321445240Z" level=info msg="StopPodSandbox for \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\" returns successfully" May 8 00:39:55.327953 containerd[1549]: time="2025-05-08T00:39:55.321727552Z" level=info msg="RemovePodSandbox for \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\"" May 8 00:39:55.327953 containerd[1549]: time="2025-05-08T00:39:55.321744207Z" level=info msg="Forcibly stopping sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\"" May 8 00:39:55.361222 systemd[1]: Started sshd@21-139.178.70.103:22-139.178.68.195:55980.service - OpenSSH per-connection server daemon (139.178.68.195:55980). 
May 8 00:39:55.366004 containerd[1549]: 2025-05-08 00:39:55.342 [WARNING][6909] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:55.366004 containerd[1549]: 2025-05-08 00:39:55.342 [INFO][6909] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:39:55.366004 containerd[1549]: 2025-05-08 00:39:55.342 [INFO][6909] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" iface="eth0" netns="" May 8 00:39:55.366004 containerd[1549]: 2025-05-08 00:39:55.342 [INFO][6909] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:39:55.366004 containerd[1549]: 2025-05-08 00:39:55.342 [INFO][6909] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:39:55.366004 containerd[1549]: 2025-05-08 00:39:55.357 [INFO][6917] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" HandleID="k8s-pod-network.02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:55.366004 containerd[1549]: 2025-05-08 00:39:55.357 [INFO][6917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:55.366004 containerd[1549]: 2025-05-08 00:39:55.357 [INFO][6917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:55.366004 containerd[1549]: 2025-05-08 00:39:55.360 [WARNING][6917] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" HandleID="k8s-pod-network.02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:55.366004 containerd[1549]: 2025-05-08 00:39:55.360 [INFO][6917] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" HandleID="k8s-pod-network.02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" Workload="localhost-k8s-calico--apiserver--577f55d98d--45j6l-eth0" May 8 00:39:55.366004 containerd[1549]: 2025-05-08 00:39:55.361 [INFO][6917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:55.366004 containerd[1549]: 2025-05-08 00:39:55.365 [INFO][6909] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6" May 8 00:39:55.366280 containerd[1549]: time="2025-05-08T00:39:55.365996820Z" level=info msg="TearDown network for sandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\" successfully" May 8 00:39:55.387390 containerd[1549]: time="2025-05-08T00:39:55.387359726Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
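
The warning above and the "RemovePodSandbox ... returns successfully" record just below form a deliberate pair: when the status lookup fails with "not found", containerd sends the lifecycle event with a nil podSandboxStatus and still reports success, so the kubelet's retries converge instead of looping on a sandbox that no longer exists. A hedged Go sketch of that shape, using hypothetical helpers rather than containerd's actual functions:

package main

import (
	"errors"
	"fmt"
)

var errSandboxNotFound = errors.New("sandbox not found")

// PodSandboxStatus is a placeholder for the CRI status type; only its
// presence or absence matters for this sketch.
type PodSandboxStatus struct{ ID string }

// publishEvent stands in for the sandbox lifecycle event; the log shows
// it being sent with a nil status when the sandbox is already gone.
func publishEvent(id string, status *PodSandboxStatus) {
	fmt.Printf("event for %s (status=%v)\n", id, status)
}

// removePodSandbox sketches the idempotent removal visible in the log:
// a missing sandbox is logged as a warning, the event is still sent,
// and the call returns success so repeated removals converge.
func removePodSandbox(lookup func(string) (*PodSandboxStatus, error), id string) error {
	status, err := lookup(id)
	if err != nil {
		if errors.Is(err, errSandboxNotFound) {
			// "Failed to get podSandbox status ... Sending the event
			// with nil podSandboxStatus."
			publishEvent(id, nil)
			return nil // "RemovePodSandbox ... returns successfully"
		}
		return err
	}
	publishEvent(id, status)
	return nil
}

func main() {
	gone := func(string) (*PodSandboxStatus, error) { return nil, errSandboxNotFound }
	fmt.Println(removePodSandbox(gone, "02d61678")) // <nil>: removal succeeds anyway
}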
May 8 00:39:55.387694 containerd[1549]: time="2025-05-08T00:39:55.387404450Z" level=info msg="RemovePodSandbox \"02d6167890da161fc34a5326f4c294b4f8b3fa5a9ad61f528c14dd1362d1b2b6\" returns successfully" May 8 00:39:55.387694 containerd[1549]: time="2025-05-08T00:39:55.387660556Z" level=info msg="StopPodSandbox for \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\"" May 8 00:39:55.387773 containerd[1549]: time="2025-05-08T00:39:55.387702892Z" level=info msg="TearDown network for sandbox \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\" successfully" May 8 00:39:55.387773 containerd[1549]: time="2025-05-08T00:39:55.387709581Z" level=info msg="StopPodSandbox for \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\" returns successfully" May 8 00:39:55.388396 containerd[1549]: time="2025-05-08T00:39:55.387984176Z" level=info msg="RemovePodSandbox for \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\"" May 8 00:39:55.388396 containerd[1549]: time="2025-05-08T00:39:55.387999566Z" level=info msg="Forcibly stopping sandbox \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\"" May 8 00:39:55.388396 containerd[1549]: time="2025-05-08T00:39:55.388025653Z" level=info msg="TearDown network for sandbox \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\" successfully" May 8 00:39:55.413619 containerd[1549]: time="2025-05-08T00:39:55.413597321Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:39:55.413730 containerd[1549]: time="2025-05-08T00:39:55.413719535Z" level=info msg="RemovePodSandbox \"f8c30ac93b514c38288d3d9e290d44b8b2e34665bbac79bb445525f22dc98161\" returns successfully" May 8 00:39:55.414122 containerd[1549]: time="2025-05-08T00:39:55.414103507Z" level=info msg="StopPodSandbox for \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\"" May 8 00:39:55.480146 containerd[1549]: 2025-05-08 00:39:55.445 [WARNING][6937] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"882fb80e-90cb-449a-87c0-4bbb4fd3e432", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef", Pod:"coredns-6f6b679f8f-bfnqq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid52169a72ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:55.480146 containerd[1549]: 2025-05-08 00:39:55.446 [INFO][6937] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" May 8 00:39:55.480146 containerd[1549]: 2025-05-08 00:39:55.446 [INFO][6937] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" iface="eth0" netns="" May 8 00:39:55.480146 containerd[1549]: 2025-05-08 00:39:55.446 [INFO][6937] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" May 8 00:39:55.480146 containerd[1549]: 2025-05-08 00:39:55.446 [INFO][6937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" May 8 00:39:55.480146 containerd[1549]: 2025-05-08 00:39:55.472 [INFO][6944] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" HandleID="k8s-pod-network.be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" Workload="localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0" May 8 00:39:55.480146 containerd[1549]: 2025-05-08 00:39:55.472 [INFO][6944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:55.480146 containerd[1549]: 2025-05-08 00:39:55.472 [INFO][6944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:55.480146 containerd[1549]: 2025-05-08 00:39:55.477 [WARNING][6944] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" HandleID="k8s-pod-network.be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" Workload="localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0" May 8 00:39:55.480146 containerd[1549]: 2025-05-08 00:39:55.477 [INFO][6944] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" HandleID="k8s-pod-network.be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" Workload="localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0" May 8 00:39:55.480146 containerd[1549]: 2025-05-08 00:39:55.477 [INFO][6944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:55.480146 containerd[1549]: 2025-05-08 00:39:55.478 [INFO][6937] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" May 8 00:39:55.488245 containerd[1549]: time="2025-05-08T00:39:55.480185905Z" level=info msg="TearDown network for sandbox \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\" successfully" May 8 00:39:55.488245 containerd[1549]: time="2025-05-08T00:39:55.480203137Z" level=info msg="StopPodSandbox for \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\" returns successfully" May 8 00:39:55.488245 containerd[1549]: time="2025-05-08T00:39:55.480825313Z" level=info msg="RemovePodSandbox for \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\"" May 8 00:39:55.488245 containerd[1549]: time="2025-05-08T00:39:55.480839918Z" level=info msg="Forcibly stopping sandbox \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\"" May 8 00:39:55.531585 containerd[1549]: 2025-05-08 00:39:55.505 [WARNING][6962] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"882fb80e-90cb-449a-87c0-4bbb4fd3e432", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a376fd11e6e362c158d16f8834469c86c663d880135f15dad558609c4ed241ef", Pod:"coredns-6f6b679f8f-bfnqq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid52169a72ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:55.531585 containerd[1549]: 2025-05-08 00:39:55.505 [INFO][6962] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" May 8 00:39:55.531585 containerd[1549]: 2025-05-08 00:39:55.505 [INFO][6962] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" iface="eth0" netns="" May 8 00:39:55.531585 containerd[1549]: 2025-05-08 00:39:55.505 [INFO][6962] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" May 8 00:39:55.531585 containerd[1549]: 2025-05-08 00:39:55.505 [INFO][6962] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" May 8 00:39:55.531585 containerd[1549]: 2025-05-08 00:39:55.522 [INFO][6969] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" HandleID="k8s-pod-network.be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" Workload="localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0" May 8 00:39:55.531585 containerd[1549]: 2025-05-08 00:39:55.523 [INFO][6969] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:55.531585 containerd[1549]: 2025-05-08 00:39:55.523 [INFO][6969] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:55.531585 containerd[1549]: 2025-05-08 00:39:55.528 [WARNING][6969] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" HandleID="k8s-pod-network.be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" Workload="localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0" May 8 00:39:55.531585 containerd[1549]: 2025-05-08 00:39:55.528 [INFO][6969] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" HandleID="k8s-pod-network.be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" Workload="localhost-k8s-coredns--6f6b679f8f--bfnqq-eth0" May 8 00:39:55.531585 containerd[1549]: 2025-05-08 00:39:55.529 [INFO][6969] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:55.531585 containerd[1549]: 2025-05-08 00:39:55.530 [INFO][6962] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23" May 8 00:39:55.532290 containerd[1549]: time="2025-05-08T00:39:55.531566380Z" level=info msg="TearDown network for sandbox \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\" successfully" May 8 00:39:55.546063 containerd[1549]: time="2025-05-08T00:39:55.546029643Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:39:55.549017 containerd[1549]: time="2025-05-08T00:39:55.546075492Z" level=info msg="RemovePodSandbox \"be3842fb4d20549f7b54176b4da74e61e7d300a592e6ce71763e19a9e3c17b23\" returns successfully" May 8 00:39:55.549017 containerd[1549]: time="2025-05-08T00:39:55.546490806Z" level=info msg="StopPodSandbox for \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\"" May 8 00:39:55.604875 containerd[1549]: 2025-05-08 00:39:55.574 [WARNING][6993] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0", GenerateName:"calico-apiserver-5d465466c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"9353a2d8-021f-4963-930a-ab008f3fd909", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d465466c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232", Pod:"calico-apiserver-5d465466c4-h88jf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5bf0f6f32c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:55.604875 containerd[1549]: 2025-05-08 00:39:55.574 [INFO][6993] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" May 8 00:39:55.604875 containerd[1549]: 2025-05-08 00:39:55.574 [INFO][6993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" iface="eth0" netns="" May 8 00:39:55.604875 containerd[1549]: 2025-05-08 00:39:55.574 [INFO][6993] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" May 8 00:39:55.604875 containerd[1549]: 2025-05-08 00:39:55.574 [INFO][6993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" May 8 00:39:55.604875 containerd[1549]: 2025-05-08 00:39:55.588 [INFO][7000] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" HandleID="k8s-pod-network.fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" Workload="localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0" May 8 00:39:55.604875 containerd[1549]: 2025-05-08 00:39:55.588 [INFO][7000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:55.604875 containerd[1549]: 2025-05-08 00:39:55.588 [INFO][7000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:55.604875 containerd[1549]: 2025-05-08 00:39:55.602 [WARNING][7000] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" HandleID="k8s-pod-network.fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" Workload="localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0" May 8 00:39:55.604875 containerd[1549]: 2025-05-08 00:39:55.602 [INFO][7000] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" HandleID="k8s-pod-network.fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" Workload="localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0" May 8 00:39:55.604875 containerd[1549]: 2025-05-08 00:39:55.602 [INFO][7000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:55.604875 containerd[1549]: 2025-05-08 00:39:55.603 [INFO][6993] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" May 8 00:39:55.605489 containerd[1549]: time="2025-05-08T00:39:55.605409527Z" level=info msg="TearDown network for sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\" successfully" May 8 00:39:55.605489 containerd[1549]: time="2025-05-08T00:39:55.605430160Z" level=info msg="StopPodSandbox for \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\" returns successfully" May 8 00:39:55.606110 containerd[1549]: time="2025-05-08T00:39:55.605883176Z" level=info msg="RemovePodSandbox for \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\"" May 8 00:39:55.606110 containerd[1549]: time="2025-05-08T00:39:55.605900300Z" level=info msg="Forcibly stopping sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\"" May 8 00:39:55.726185 containerd[1549]: 2025-05-08 00:39:55.661 [WARNING][7018] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0", GenerateName:"calico-apiserver-5d465466c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"9353a2d8-021f-4963-930a-ab008f3fd909", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d465466c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3007f5c556b725763f2f88c36ed6b5beac87979c009522b4605b096847445232", Pod:"calico-apiserver-5d465466c4-h88jf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5bf0f6f32c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:39:55.726185 containerd[1549]: 2025-05-08 00:39:55.661 [INFO][7018] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" May 8 00:39:55.726185 containerd[1549]: 2025-05-08 00:39:55.661 [INFO][7018] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" iface="eth0" netns="" May 8 00:39:55.726185 containerd[1549]: 2025-05-08 00:39:55.661 [INFO][7018] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" May 8 00:39:55.726185 containerd[1549]: 2025-05-08 00:39:55.661 [INFO][7018] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" May 8 00:39:55.726185 containerd[1549]: 2025-05-08 00:39:55.719 [INFO][7027] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" HandleID="k8s-pod-network.fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" Workload="localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0" May 8 00:39:55.726185 containerd[1549]: 2025-05-08 00:39:55.719 [INFO][7027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:55.726185 containerd[1549]: 2025-05-08 00:39:55.719 [INFO][7027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:55.726185 containerd[1549]: 2025-05-08 00:39:55.723 [WARNING][7027] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" HandleID="k8s-pod-network.fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" Workload="localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0" May 8 00:39:55.726185 containerd[1549]: 2025-05-08 00:39:55.723 [INFO][7027] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" HandleID="k8s-pod-network.fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" Workload="localhost-k8s-calico--apiserver--5d465466c4--h88jf-eth0" May 8 00:39:55.726185 containerd[1549]: 2025-05-08 00:39:55.724 [INFO][7027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:55.726185 containerd[1549]: 2025-05-08 00:39:55.725 [INFO][7018] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b" May 8 00:39:55.726185 containerd[1549]: time="2025-05-08T00:39:55.726157161Z" level=info msg="TearDown network for sandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\" successfully" May 8 00:39:55.732365 containerd[1549]: time="2025-05-08T00:39:55.732308079Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:39:55.732365 containerd[1549]: time="2025-05-08T00:39:55.732352640Z" level=info msg="RemovePodSandbox \"fc54ed6c79787e00a7340057cc839317b65a1a98031397dd263f7c64fe94379b\" returns successfully" May 8 00:39:55.732703 containerd[1549]: time="2025-05-08T00:39:55.732666179Z" level=info msg="StopPodSandbox for \"0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97\"" May 8 00:39:55.777050 containerd[1549]: 2025-05-08 00:39:55.755 [WARNING][7046] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:55.777050 containerd[1549]: 2025-05-08 00:39:55.756 [INFO][7046] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" May 8 00:39:55.777050 containerd[1549]: 2025-05-08 00:39:55.756 [INFO][7046] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" iface="eth0" netns="" May 8 00:39:55.777050 containerd[1549]: 2025-05-08 00:39:55.756 [INFO][7046] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" May 8 00:39:55.777050 containerd[1549]: 2025-05-08 00:39:55.756 [INFO][7046] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" May 8 00:39:55.777050 containerd[1549]: 2025-05-08 00:39:55.769 [INFO][7053] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" HandleID="k8s-pod-network.0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:55.777050 containerd[1549]: 2025-05-08 00:39:55.769 [INFO][7053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:55.777050 containerd[1549]: 2025-05-08 00:39:55.769 [INFO][7053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:55.777050 containerd[1549]: 2025-05-08 00:39:55.774 [WARNING][7053] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" HandleID="k8s-pod-network.0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:55.777050 containerd[1549]: 2025-05-08 00:39:55.774 [INFO][7053] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" HandleID="k8s-pod-network.0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:55.777050 containerd[1549]: 2025-05-08 00:39:55.774 [INFO][7053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:55.777050 containerd[1549]: 2025-05-08 00:39:55.776 [INFO][7046] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" May 8 00:39:55.780396 containerd[1549]: time="2025-05-08T00:39:55.777074785Z" level=info msg="TearDown network for sandbox \"0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97\" successfully" May 8 00:39:55.780396 containerd[1549]: time="2025-05-08T00:39:55.777108593Z" level=info msg="StopPodSandbox for \"0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97\" returns successfully" May 8 00:39:55.780396 containerd[1549]: time="2025-05-08T00:39:55.777453853Z" level=info msg="RemovePodSandbox for \"0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97\"" May 8 00:39:55.780396 containerd[1549]: time="2025-05-08T00:39:55.777470848Z" level=info msg="Forcibly stopping sandbox \"0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97\"" May 8 00:39:55.835542 containerd[1549]: 2025-05-08 00:39:55.809 [WARNING][7071] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:55.835542 containerd[1549]: 2025-05-08 00:39:55.809 [INFO][7071] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" May 8 00:39:55.835542 containerd[1549]: 2025-05-08 00:39:55.809 [INFO][7071] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" iface="eth0" netns="" May 8 00:39:55.835542 containerd[1549]: 2025-05-08 00:39:55.809 [INFO][7071] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" May 8 00:39:55.835542 containerd[1549]: 2025-05-08 00:39:55.809 [INFO][7071] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" May 8 00:39:55.835542 containerd[1549]: 2025-05-08 00:39:55.827 [INFO][7078] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" HandleID="k8s-pod-network.0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:55.835542 containerd[1549]: 2025-05-08 00:39:55.828 [INFO][7078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:55.835542 containerd[1549]: 2025-05-08 00:39:55.828 [INFO][7078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:55.835542 containerd[1549]: 2025-05-08 00:39:55.832 [WARNING][7078] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" HandleID="k8s-pod-network.0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:55.835542 containerd[1549]: 2025-05-08 00:39:55.832 [INFO][7078] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" HandleID="k8s-pod-network.0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:55.835542 containerd[1549]: 2025-05-08 00:39:55.833 [INFO][7078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:55.835542 containerd[1549]: 2025-05-08 00:39:55.834 [INFO][7071] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97" May 8 00:39:55.839547 containerd[1549]: time="2025-05-08T00:39:55.835587864Z" level=info msg="TearDown network for sandbox \"0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97\" successfully" May 8 00:39:55.849384 containerd[1549]: time="2025-05-08T00:39:55.849362446Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:39:55.849432 containerd[1549]: time="2025-05-08T00:39:55.849411492Z" level=info msg="RemovePodSandbox \"0b3b5aefc74e9820bc37cc7569cb391c5c49d5e7115c16cd3421099375dc6d97\" returns successfully" May 8 00:39:55.852436 containerd[1549]: time="2025-05-08T00:39:55.849749479Z" level=info msg="StopPodSandbox for \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\"" May 8 00:39:55.891197 containerd[1549]: 2025-05-08 00:39:55.870 [WARNING][7096] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:55.891197 containerd[1549]: 2025-05-08 00:39:55.870 [INFO][7096] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:39:55.891197 containerd[1549]: 2025-05-08 00:39:55.870 [INFO][7096] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" iface="eth0" netns="" May 8 00:39:55.891197 containerd[1549]: 2025-05-08 00:39:55.870 [INFO][7096] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:39:55.891197 containerd[1549]: 2025-05-08 00:39:55.870 [INFO][7096] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:39:55.891197 containerd[1549]: 2025-05-08 00:39:55.883 [INFO][7104] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" HandleID="k8s-pod-network.a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:55.891197 containerd[1549]: 2025-05-08 00:39:55.883 [INFO][7104] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:55.891197 containerd[1549]: 2025-05-08 00:39:55.883 [INFO][7104] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:55.891197 containerd[1549]: 2025-05-08 00:39:55.888 [WARNING][7104] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" HandleID="k8s-pod-network.a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:55.891197 containerd[1549]: 2025-05-08 00:39:55.888 [INFO][7104] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" HandleID="k8s-pod-network.a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:55.891197 containerd[1549]: 2025-05-08 00:39:55.888 [INFO][7104] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:55.891197 containerd[1549]: 2025-05-08 00:39:55.890 [INFO][7096] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:39:55.891521 containerd[1549]: time="2025-05-08T00:39:55.891218941Z" level=info msg="TearDown network for sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\" successfully" May 8 00:39:55.891521 containerd[1549]: time="2025-05-08T00:39:55.891235213Z" level=info msg="StopPodSandbox for \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\" returns successfully" May 8 00:39:55.891863 containerd[1549]: time="2025-05-08T00:39:55.891810319Z" level=info msg="RemovePodSandbox for \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\"" May 8 00:39:55.891863 containerd[1549]: time="2025-05-08T00:39:55.891826969Z" level=info msg="Forcibly stopping sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\"" May 8 00:39:55.945870 sshd[6924]: Accepted publickey for core from 139.178.68.195 port 55980 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:39:55.955840 sshd[6924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:55.958941 containerd[1549]: 2025-05-08 00:39:55.912 [WARNING][7122] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:55.958941 containerd[1549]: 2025-05-08 00:39:55.913 [INFO][7122] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:39:55.958941 containerd[1549]: 2025-05-08 00:39:55.913 [INFO][7122] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" iface="eth0" netns="" May 8 00:39:55.958941 containerd[1549]: 2025-05-08 00:39:55.913 [INFO][7122] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:39:55.958941 containerd[1549]: 2025-05-08 00:39:55.913 [INFO][7122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:39:55.958941 containerd[1549]: 2025-05-08 00:39:55.951 [INFO][7129] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" HandleID="k8s-pod-network.a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:55.958941 containerd[1549]: 2025-05-08 00:39:55.951 [INFO][7129] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:55.958941 containerd[1549]: 2025-05-08 00:39:55.951 [INFO][7129] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:55.958941 containerd[1549]: 2025-05-08 00:39:55.955 [WARNING][7129] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" HandleID="k8s-pod-network.a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:55.958941 containerd[1549]: 2025-05-08 00:39:55.955 [INFO][7129] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" HandleID="k8s-pod-network.a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" Workload="localhost-k8s-calico--apiserver--577f55d98d--bjswh-eth0" May 8 00:39:55.958941 containerd[1549]: 2025-05-08 00:39:55.955 [INFO][7129] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:55.958941 containerd[1549]: 2025-05-08 00:39:55.957 [INFO][7122] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460" May 8 00:39:55.958941 containerd[1549]: time="2025-05-08T00:39:55.958796580Z" level=info msg="TearDown network for sandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\" successfully" May 8 00:39:55.960987 containerd[1549]: time="2025-05-08T00:39:55.960889579Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:39:55.960987 containerd[1549]: time="2025-05-08T00:39:55.960945867Z" level=info msg="RemovePodSandbox \"a7785d18d354b4f6b2a7d71127afe7810319cafa9deab28b9d6a882afd83c460\" returns successfully" May 8 00:39:55.962214 containerd[1549]: time="2025-05-08T00:39:55.962191375Z" level=info msg="StopPodSandbox for \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\"" May 8 00:39:55.963358 systemd-logind[1523]: New session 23 of user core. May 8 00:39:55.968064 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 00:39:56.012660 containerd[1549]: 2025-05-08 00:39:55.989 [WARNING][7147] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79589f44bf--xp2wm-eth0" May 8 00:39:56.012660 containerd[1549]: 2025-05-08 00:39:55.989 [INFO][7147] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" May 8 00:39:56.012660 containerd[1549]: 2025-05-08 00:39:55.989 [INFO][7147] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" iface="eth0" netns="" May 8 00:39:56.012660 containerd[1549]: 2025-05-08 00:39:55.989 [INFO][7147] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" May 8 00:39:56.012660 containerd[1549]: 2025-05-08 00:39:55.989 [INFO][7147] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" May 8 00:39:56.012660 containerd[1549]: 2025-05-08 00:39:56.005 [INFO][7155] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" HandleID="k8s-pod-network.806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" Workload="localhost-k8s-calico--kube--controllers--79589f44bf--xp2wm-eth0" May 8 00:39:56.012660 containerd[1549]: 2025-05-08 00:39:56.005 [INFO][7155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:56.012660 containerd[1549]: 2025-05-08 00:39:56.005 [INFO][7155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:56.012660 containerd[1549]: 2025-05-08 00:39:56.009 [WARNING][7155] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" HandleID="k8s-pod-network.806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" Workload="localhost-k8s-calico--kube--controllers--79589f44bf--xp2wm-eth0" May 8 00:39:56.012660 containerd[1549]: 2025-05-08 00:39:56.009 [INFO][7155] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" HandleID="k8s-pod-network.806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" Workload="localhost-k8s-calico--kube--controllers--79589f44bf--xp2wm-eth0" May 8 00:39:56.012660 containerd[1549]: 2025-05-08 00:39:56.010 [INFO][7155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:56.012660 containerd[1549]: 2025-05-08 00:39:56.011 [INFO][7147] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" May 8 00:39:56.013546 containerd[1549]: time="2025-05-08T00:39:56.012687944Z" level=info msg="TearDown network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\" successfully" May 8 00:39:56.013546 containerd[1549]: time="2025-05-08T00:39:56.012718934Z" level=info msg="StopPodSandbox for \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\" returns successfully" May 8 00:39:56.013546 containerd[1549]: time="2025-05-08T00:39:56.013468101Z" level=info msg="RemovePodSandbox for \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\"" May 8 00:39:56.013546 containerd[1549]: time="2025-05-08T00:39:56.013491348Z" level=info msg="Forcibly stopping sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\"" May 8 00:39:56.062241 containerd[1549]: 2025-05-08 00:39:56.037 [WARNING][7174] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79589f44bf--xp2wm-eth0" May 8 00:39:56.062241 containerd[1549]: 2025-05-08 00:39:56.038 [INFO][7174] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" May 8 00:39:56.062241 containerd[1549]: 2025-05-08 00:39:56.038 [INFO][7174] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" iface="eth0" netns="" May 8 00:39:56.062241 containerd[1549]: 2025-05-08 00:39:56.038 [INFO][7174] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" May 8 00:39:56.062241 containerd[1549]: 2025-05-08 00:39:56.038 [INFO][7174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" May 8 00:39:56.062241 containerd[1549]: 2025-05-08 00:39:56.055 [INFO][7181] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" HandleID="k8s-pod-network.806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" Workload="localhost-k8s-calico--kube--controllers--79589f44bf--xp2wm-eth0" May 8 00:39:56.062241 containerd[1549]: 2025-05-08 00:39:56.055 [INFO][7181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:39:56.062241 containerd[1549]: 2025-05-08 00:39:56.055 [INFO][7181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:39:56.062241 containerd[1549]: 2025-05-08 00:39:56.059 [WARNING][7181] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" HandleID="k8s-pod-network.806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" Workload="localhost-k8s-calico--kube--controllers--79589f44bf--xp2wm-eth0" May 8 00:39:56.062241 containerd[1549]: 2025-05-08 00:39:56.059 [INFO][7181] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" HandleID="k8s-pod-network.806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" Workload="localhost-k8s-calico--kube--controllers--79589f44bf--xp2wm-eth0" May 8 00:39:56.062241 containerd[1549]: 2025-05-08 00:39:56.060 [INFO][7181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:39:56.062241 containerd[1549]: 2025-05-08 00:39:56.061 [INFO][7174] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724" May 8 00:39:56.063046 containerd[1549]: time="2025-05-08T00:39:56.062265638Z" level=info msg="TearDown network for sandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\" successfully" May 8 00:39:56.077776 containerd[1549]: time="2025-05-08T00:39:56.077748944Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:39:56.077843 containerd[1549]: time="2025-05-08T00:39:56.077793623Z" level=info msg="RemovePodSandbox \"806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724\" returns successfully" May 8 00:39:56.455012 sshd[6924]: pam_unix(sshd:session): session closed for user core May 8 00:39:56.457109 systemd[1]: sshd@21-139.178.70.103:22-139.178.68.195:55980.service: Deactivated successfully. May 8 00:39:56.458179 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:39:56.458635 systemd-logind[1523]: Session 23 logged out. Waiting for processes to exit. May 8 00:39:56.459508 systemd-logind[1523]: Removed session 23. May 8 00:40:01.466692 systemd[1]: Started sshd@22-139.178.70.103:22-139.178.68.195:55990.service - OpenSSH per-connection server daemon (139.178.68.195:55990). May 8 00:40:01.512996 sshd[7199]: Accepted publickey for core from 139.178.68.195 port 55990 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:40:01.514064 sshd[7199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:01.517261 systemd-logind[1523]: New session 24 of user core. May 8 00:40:01.526057 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 00:40:01.672692 sshd[7199]: pam_unix(sshd:session): session closed for user core May 8 00:40:01.675038 systemd[1]: sshd@22-139.178.70.103:22-139.178.68.195:55990.service: Deactivated successfully. May 8 00:40:01.676278 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:40:01.676738 systemd-logind[1523]: Session 24 logged out. Waiting for processes to exit. May 8 00:40:01.677563 systemd-logind[1523]: Removed session 24. May 8 00:40:06.682443 systemd[1]: Started sshd@23-139.178.70.103:22-139.178.68.195:56040.service - OpenSSH per-connection server daemon (139.178.68.195:56040). 
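The StopPodSandbox/RemovePodSandbox pairs above are CRI calls into containerd: the kubelet stops a sandbox (idempotently, which is why the "Forcibly stopping" retry is harmless) and then removes it. A sketch of the same two calls against containerd's default CRI socket using the published k8s.io/cri-api client; the socket path and timeout are assumptions, the sandbox ID is copied from this log, and a real caller would do more than log.Fatalf on error:

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI endpoint over the local unix socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI endpoint: %v", err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	const sandboxID = "806a3e44f1c09a9a219a4a8258c24578de4b6e7321e00478ee8a332b8aee5724"

	// StopPodSandbox succeeds even if the sandbox is already stopped or its
	// network state is gone, which is exactly what the forced retry relies on.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		log.Fatalf("StopPodSandbox: %v", err)
	}
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		log.Fatalf("RemovePodSandbox: %v", err)
	}
	log.Printf("sandbox %s stopped and removed", sandboxID)
}

From a shell, crictl stopp <id> followed by crictl rmp <id> issues the same pair of calls.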
May 8 00:39:56.455012 sshd[6924]: pam_unix(sshd:session): session closed for user core
May 8 00:39:56.457109 systemd[1]: sshd@21-139.178.70.103:22-139.178.68.195:55980.service: Deactivated successfully.
May 8 00:39:56.458179 systemd[1]: session-23.scope: Deactivated successfully.
May 8 00:39:56.458635 systemd-logind[1523]: Session 23 logged out. Waiting for processes to exit.
May 8 00:39:56.459508 systemd-logind[1523]: Removed session 23.
May 8 00:40:01.466692 systemd[1]: Started sshd@22-139.178.70.103:22-139.178.68.195:55990.service - OpenSSH per-connection server daemon (139.178.68.195:55990).
May 8 00:40:01.512996 sshd[7199]: Accepted publickey for core from 139.178.68.195 port 55990 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg
May 8 00:40:01.514064 sshd[7199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:40:01.517261 systemd-logind[1523]: New session 24 of user core.
May 8 00:40:01.526057 systemd[1]: Started session-24.scope - Session 24 of User core.
May 8 00:40:01.672692 sshd[7199]: pam_unix(sshd:session): session closed for user core
May 8 00:40:01.675038 systemd[1]: sshd@22-139.178.70.103:22-139.178.68.195:55990.service: Deactivated successfully.
May 8 00:40:01.676278 systemd[1]: session-24.scope: Deactivated successfully.
May 8 00:40:01.676738 systemd-logind[1523]: Session 24 logged out. Waiting for processes to exit.
May 8 00:40:01.677563 systemd-logind[1523]: Removed session 24.
May 8 00:40:06.682443 systemd[1]: Started sshd@23-139.178.70.103:22-139.178.68.195:56040.service - OpenSSH per-connection server daemon (139.178.68.195:56040).
May 8 00:40:06.777359 sshd[7214]: Accepted publickey for core from 139.178.68.195 port 56040 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg
May 8 00:40:06.778635 sshd[7214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:40:06.781565 systemd-logind[1523]: New session 25 of user core.
May 8 00:40:06.786031 systemd[1]: Started session-25.scope - Session 25 of User core.
May 8 00:40:07.328813 sshd[7214]: pam_unix(sshd:session): session closed for user core
May 8 00:40:07.337189 systemd[1]: sshd@23-139.178.70.103:22-139.178.68.195:56040.service: Deactivated successfully.
May 8 00:40:07.339811 systemd[1]: session-25.scope: Deactivated successfully.
May 8 00:40:07.341828 systemd-logind[1523]: Session 25 logged out. Waiting for processes to exit.
May 8 00:40:07.342631 systemd-logind[1523]: Removed session 25.
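Sessions 23 through 25 above all follow the same pattern: sshd accepts a publickey, pam_unix opens the session, systemd-logind registers it, and the close unwinds in reverse. To audit session lifetimes from a dump like this, the pam_unix open and close lines pair up on the sshd PID; a small Go sketch over two lines copied from this log (the regular expression and timestamp layout are tuned to exactly this journal format, not to syslog in general):

package main

import (
	"fmt"
	"regexp"
	"time"
)

// Matches the journal timestamp, the sshd PID, and whether the pam_unix
// session line is an open or a close.
var re = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)

func main() {
	lines := []string{
		"May 8 00:40:06.778635 sshd[7214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)",
		"May 8 00:40:07.328813 sshd[7214]: pam_unix(sshd:session): session closed for user core",
	}
	opened := map[string]time.Time{} // sshd PID -> open time

	for _, l := range lines {
		m := re.FindStringSubmatch(l)
		if m == nil {
			continue
		}
		// The journal timestamp carries no year, so both times parse into
		// year 0; that is fine for computing a duration between them.
		ts, err := time.Parse("Jan 2 15:04:05.000000", m[1])
		if err != nil {
			continue
		}
		pid, event := m[2], m[3]
		switch event {
		case "opened":
			opened[pid] = ts
		case "closed":
			if start, ok := opened[pid]; ok {
				fmt.Printf("sshd[%s]: session lasted %v\n", pid, ts.Sub(start))
				delete(opened, pid)
			}
		}
	}
}

Keying on the PID rather than the username is what keeps overlapping sessions for the same user (here, repeated logins as core) from being paired incorrectly.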