Dec 13 13:31:13.735287 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024
Dec 13 13:31:13.735304 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:31:13.735311 kernel: Disabled fast string operations
Dec 13 13:31:13.735315 kernel: BIOS-provided physical RAM map:
Dec 13 13:31:13.735319 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Dec 13 13:31:13.735323 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Dec 13 13:31:13.735330 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Dec 13 13:31:13.735334 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Dec 13 13:31:13.735338 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Dec 13 13:31:13.735342 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Dec 13 13:31:13.735347 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Dec 13 13:31:13.735351 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Dec 13 13:31:13.735355 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Dec 13 13:31:13.735360 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Dec 13 13:31:13.735366 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Dec 13 13:31:13.735371 kernel: NX (Execute Disable) protection: active
Dec 13 13:31:13.735376 kernel: APIC: Static calls initialized
Dec 13 13:31:13.735380 kernel: SMBIOS 2.7 present.
Dec 13 13:31:13.735386 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Dec 13 13:31:13.735390 kernel: vmware: hypercall mode: 0x00
Dec 13 13:31:13.735395 kernel: Hypervisor detected: VMware
Dec 13 13:31:13.735400 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Dec 13 13:31:13.735405 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Dec 13 13:31:13.735410 kernel: vmware: using clock offset of 2605382387 ns
Dec 13 13:31:13.735415 kernel: tsc: Detected 3408.000 MHz processor
Dec 13 13:31:13.735420 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 13:31:13.735425 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 13:31:13.735430 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Dec 13 13:31:13.735435 kernel: total RAM covered: 3072M
Dec 13 13:31:13.735440 kernel: Found optimal setting for mtrr clean up
Dec 13 13:31:13.735447 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Dec 13 13:31:13.735452 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Dec 13 13:31:13.735458 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 13:31:13.735463 kernel: Using GB pages for direct mapping
Dec 13 13:31:13.735468 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:31:13.735473 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Dec 13 13:31:13.735478 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Dec 13 13:31:13.735483 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Dec 13 13:31:13.735487 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Dec 13 13:31:13.735492 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Dec 13 13:31:13.735511 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Dec 13 13:31:13.735516 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Dec 13 13:31:13.735522 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Dec 13 13:31:13.735527 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Dec 13 13:31:13.735532 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Dec 13 13:31:13.735537 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Dec 13 13:31:13.735544 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Dec 13 13:31:13.735549 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Dec 13 13:31:13.735554 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Dec 13 13:31:13.735560 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Dec 13 13:31:13.735565 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Dec 13 13:31:13.735570 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Dec 13 13:31:13.735575 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Dec 13 13:31:13.735580 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Dec 13 13:31:13.735585 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Dec 13 13:31:13.735591 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Dec 13 13:31:13.735596 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Dec 13 13:31:13.735601 kernel: system APIC only can use physical flat
Dec 13 13:31:13.735606 kernel: APIC: Switched APIC routing to: physical flat
Dec 13 13:31:13.735611 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 13:31:13.735617 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 13:31:13.735622 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 13:31:13.735627 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 13:31:13.735632 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 13:31:13.735637 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 13:31:13.735643 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 13:31:13.735648 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 13:31:13.735653 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Dec 13 13:31:13.735658 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Dec 13 13:31:13.735663 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Dec 13 13:31:13.735668 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Dec 13 13:31:13.735673 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Dec 13 13:31:13.735678 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Dec 13 13:31:13.735683 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Dec 13 13:31:13.735688 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Dec 13 13:31:13.735694 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Dec 13 13:31:13.735699 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Dec 13 13:31:13.735704 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Dec 13 13:31:13.735709 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Dec 13 13:31:13.735714 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Dec 13 13:31:13.735719 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Dec 13 13:31:13.735724 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Dec 13 13:31:13.735729 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Dec 13 13:31:13.735733 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Dec 13 13:31:13.735739 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Dec 13 13:31:13.735744 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Dec 13 13:31:13.735750 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Dec 13 13:31:13.735754 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Dec 13 13:31:13.735759 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Dec 13 13:31:13.735765 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Dec 13 13:31:13.735770 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Dec 13 13:31:13.735775 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Dec 13 13:31:13.735780 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Dec 13 13:31:13.735785 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Dec 13 13:31:13.735789 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Dec 13 13:31:13.735796 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Dec 13 13:31:13.735801 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Dec 13 13:31:13.735805 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Dec 13 13:31:13.735811 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Dec 13 13:31:13.735815 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Dec 13 13:31:13.735821 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Dec 13 13:31:13.735826 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Dec 13 13:31:13.735831 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Dec 13 13:31:13.735835 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Dec 13 13:31:13.735840 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Dec 13 13:31:13.735846 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Dec 13 13:31:13.735851 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Dec 13 13:31:13.735857 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Dec 13 13:31:13.735861 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Dec 13 13:31:13.735867 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Dec 13 13:31:13.735871 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Dec 13 13:31:13.735877 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Dec 13 13:31:13.735882 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Dec 13 13:31:13.735887 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Dec 13 13:31:13.735892 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Dec 13 13:31:13.735898 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Dec 13 13:31:13.735903 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Dec 13 13:31:13.735908 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Dec 13 13:31:13.735916 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Dec 13 13:31:13.735923 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Dec 13 13:31:13.735928 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Dec 13 13:31:13.735933 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Dec 13 13:31:13.735939 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Dec 13 13:31:13.735944 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Dec 13 13:31:13.735950 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Dec 13 13:31:13.735956 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Dec 13 13:31:13.735961 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Dec 13 13:31:13.735966 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Dec 13 13:31:13.735971 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Dec 13 13:31:13.735977 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Dec 13 13:31:13.735982 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Dec 13 13:31:13.735987 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Dec 13 13:31:13.735993 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Dec 13 13:31:13.735998 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Dec 13 13:31:13.736004 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Dec 13 13:31:13.736009 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Dec 13 13:31:13.736015 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Dec 13 13:31:13.736020 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Dec 13 13:31:13.736025 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Dec 13 13:31:13.736031 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Dec 13 13:31:13.736036 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Dec 13 13:31:13.736041 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Dec 13 13:31:13.736047 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Dec 13 13:31:13.736052 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Dec 13 13:31:13.736058 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Dec 13 13:31:13.736064 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Dec 13 13:31:13.736069 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Dec 13 13:31:13.736074 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Dec 13 13:31:13.736080 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Dec 13 13:31:13.736085 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Dec 13 13:31:13.736090 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Dec 13 13:31:13.736095 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Dec 13 13:31:13.736101 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Dec 13 13:31:13.736106 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Dec 13 13:31:13.736112 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Dec 13 13:31:13.736118 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Dec 13 13:31:13.736123 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Dec 13 13:31:13.736128 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Dec 13 13:31:13.736134 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Dec 13 13:31:13.736139 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Dec 13 13:31:13.736144 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Dec 13 13:31:13.736150 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Dec 13 13:31:13.736155 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Dec 13 13:31:13.736160 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Dec 13 13:31:13.736167 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Dec 13 13:31:13.736172 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Dec 13 13:31:13.736177 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Dec 13 13:31:13.736183 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Dec 13 13:31:13.736188 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Dec 13 13:31:13.736193 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Dec 13 13:31:13.736199 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Dec 13 13:31:13.736204 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Dec 13 13:31:13.736209 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Dec 13 13:31:13.736214 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Dec 13 13:31:13.736220 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Dec 13 13:31:13.736226 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Dec 13 13:31:13.736231 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Dec 13 13:31:13.736237 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Dec 13 13:31:13.736242 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Dec 13 13:31:13.736247 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Dec 13 13:31:13.736252 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Dec 13 13:31:13.736258 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Dec 13 13:31:13.736263 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Dec 13 13:31:13.736268 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Dec 13 13:31:13.736274 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Dec 13 13:31:13.736280 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Dec 13 13:31:13.736286 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Dec 13 13:31:13.736291 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 13:31:13.736297 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 13:31:13.736302 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Dec 13 13:31:13.736308 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Dec 13 13:31:13.736313 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Dec 13 13:31:13.736319 kernel: Zone ranges:
Dec 13 13:31:13.736324 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 13:31:13.736331 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Dec 13 13:31:13.736336 kernel: Normal empty
Dec 13 13:31:13.736342 kernel: Movable zone start for each node
Dec 13 13:31:13.736347 kernel: Early memory node ranges
Dec 13 13:31:13.736353 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Dec 13 13:31:13.736358 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Dec 13 13:31:13.736364 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Dec 13 13:31:13.736369 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Dec 13 13:31:13.736375 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 13:31:13.736380 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Dec 13 13:31:13.736387 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Dec 13 13:31:13.736392 kernel: ACPI: PM-Timer IO Port: 0x1008
Dec 13 13:31:13.736398 kernel: system APIC only can use physical flat
Dec 13 13:31:13.736403 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Dec 13 13:31:13.736409 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Dec 13 13:31:13.736414 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Dec 13 13:31:13.736419 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Dec 13 13:31:13.736425 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Dec 13 13:31:13.736430 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Dec 13 13:31:13.736437 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Dec 13 13:31:13.736442 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Dec 13 13:31:13.736448 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Dec 13 13:31:13.736453 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Dec 13 13:31:13.736459 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Dec 13 13:31:13.736464 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Dec 13 13:31:13.736469 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Dec 13 13:31:13.736475 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Dec 13 13:31:13.736480 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Dec 13 13:31:13.736485 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Dec 13 13:31:13.736491 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Dec 13 13:31:13.736497 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Dec 13 13:31:13.736510 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Dec 13 13:31:13.736516 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Dec 13 13:31:13.736522 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Dec 13 13:31:13.736527 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Dec 13 13:31:13.736532 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Dec 13 13:31:13.736538 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Dec 13 13:31:13.736543 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Dec 13 13:31:13.736551 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Dec 13 13:31:13.736556 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Dec 13 13:31:13.736561 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Dec 13 13:31:13.736567 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Dec 13 13:31:13.736572 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Dec 13 13:31:13.736578 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Dec 13 13:31:13.736583 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Dec 13 13:31:13.736588 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Dec 13 13:31:13.736594 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Dec 13 13:31:13.736599 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Dec 13 13:31:13.736606 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Dec 13 13:31:13.736611 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Dec 13 13:31:13.736616 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Dec 13 13:31:13.736622 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Dec 13 13:31:13.736627 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Dec 13 13:31:13.736633 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Dec 13 13:31:13.736638 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Dec 13 13:31:13.736643 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Dec 13 13:31:13.736649 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Dec 13 13:31:13.736654 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Dec 13 13:31:13.736660 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Dec 13 13:31:13.736666 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Dec 13 13:31:13.736671 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Dec 13 13:31:13.736677 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Dec 13 13:31:13.736682 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Dec 13 13:31:13.736687 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Dec 13 13:31:13.736693 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Dec 13 13:31:13.736698 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Dec 13 13:31:13.736703 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Dec 13 13:31:13.736709 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Dec 13 13:31:13.736715 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Dec 13 13:31:13.736721 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Dec 13 13:31:13.736726 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Dec 13 13:31:13.736732 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Dec 13 13:31:13.736737 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Dec 13 13:31:13.736742 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Dec 13 13:31:13.736748 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Dec 13 13:31:13.736753 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Dec 13 13:31:13.736758 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Dec 13 13:31:13.736765 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Dec 13 13:31:13.736770 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Dec 13 13:31:13.736775 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Dec 13 13:31:13.736781 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Dec 13 13:31:13.736786 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Dec 13 13:31:13.736791 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Dec 13 13:31:13.736797 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Dec 13 13:31:13.736802 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Dec 13 13:31:13.736808 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Dec 13 13:31:13.736813 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Dec 13 13:31:13.736819 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Dec 13 13:31:13.736825 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Dec 13 13:31:13.736830 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Dec 13 13:31:13.736836 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Dec 13 13:31:13.736841 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Dec 13 13:31:13.736846 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Dec 13 13:31:13.736852 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Dec 13 13:31:13.736857 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Dec 13 13:31:13.736862 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Dec 13 13:31:13.736868 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Dec 13 13:31:13.736874 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Dec 13 13:31:13.736879 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Dec 13 13:31:13.736885 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Dec 13 13:31:13.736890 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Dec 13 13:31:13.736896 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Dec 13 13:31:13.736901 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Dec 13 13:31:13.736906 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Dec 13 13:31:13.736912 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Dec 13 13:31:13.736917 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Dec 13 13:31:13.736922 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Dec 13 13:31:13.736929 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Dec 13 13:31:13.736934 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Dec 13 13:31:13.736939 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Dec 13 13:31:13.736945 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Dec 13 13:31:13.736950 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Dec 13 13:31:13.736955 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Dec 13 13:31:13.736961 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Dec 13 13:31:13.736966 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Dec 13 13:31:13.736971 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Dec 13 13:31:13.736978 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Dec 13 13:31:13.736983 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Dec 13 13:31:13.736988 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Dec 13 13:31:13.736994 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Dec 13 13:31:13.736999 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Dec 13 13:31:13.737005 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Dec 13 13:31:13.737010 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Dec 13 13:31:13.737016 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Dec 13 13:31:13.737021 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Dec 13 13:31:13.737026 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Dec 13 13:31:13.737033 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Dec 13 13:31:13.737038 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Dec 13 13:31:13.737043 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Dec 13 13:31:13.737049 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Dec 13 13:31:13.737054 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Dec 13 13:31:13.737059 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Dec 13 13:31:13.737065 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Dec 13 13:31:13.737070 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Dec 13 13:31:13.737076 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Dec 13 13:31:13.737081 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Dec 13 13:31:13.737087 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Dec 13 13:31:13.737093 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Dec 13 13:31:13.737098 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Dec 13 13:31:13.737103 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Dec 13 13:31:13.737109 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Dec 13 13:31:13.737114 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Dec 13 13:31:13.737120 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Dec 13 13:31:13.737125 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 13:31:13.737131 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Dec 13 13:31:13.737137 kernel: TSC deadline timer available
Dec 13 13:31:13.737143 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Dec 13 13:31:13.737148 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Dec 13 13:31:13.737154 kernel: Booting paravirtualized kernel on VMware hypervisor
Dec 13 13:31:13.737159 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 13:31:13.737165 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Dec 13 13:31:13.737171 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Dec 13 13:31:13.737176 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Dec 13 13:31:13.737182 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Dec 13 13:31:13.737188 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Dec 13 13:31:13.737194 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Dec 13 13:31:13.737199 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Dec 13 13:31:13.737205 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Dec 13 13:31:13.737217 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Dec 13 13:31:13.737224 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Dec 13 13:31:13.737229 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Dec 13 13:31:13.737235 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Dec 13 13:31:13.737241 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Dec 13 13:31:13.737247 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Dec 13 13:31:13.737253 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Dec 13 13:31:13.737259 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Dec 13 13:31:13.737264 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Dec 13 13:31:13.737270 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Dec 13 13:31:13.737276 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Dec 13 13:31:13.737282 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:31:13.737288 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:31:13.737295 kernel: random: crng init done
Dec 13 13:31:13.737301 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Dec 13 13:31:13.737307 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Dec 13 13:31:13.737312 kernel: printk: log_buf_len min size: 262144 bytes
Dec 13 13:31:13.737318 kernel: printk: log_buf_len: 1048576 bytes
Dec 13 13:31:13.737324 kernel: printk: early log buf free: 239648(91%)
Dec 13 13:31:13.737330 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:31:13.737336 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 13:31:13.737342 kernel: Fallback order for Node 0: 0
Dec 13 13:31:13.737348 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Dec 13 13:31:13.737354 kernel: Policy zone: DMA32
Dec 13 13:31:13.737360 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:31:13.737366 kernel: Memory: 1934284K/2096628K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 162084K reserved, 0K cma-reserved)
Dec 13 13:31:13.737373 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Dec 13 13:31:13.737380 kernel: ftrace: allocating 37874 entries in 148 pages
Dec 13 13:31:13.737386 kernel: ftrace: allocated 148 pages with 3 groups
Dec 13 13:31:13.737392 kernel: Dynamic Preempt: voluntary
Dec 13 13:31:13.737398 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:31:13.737404 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:31:13.737409 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Dec 13 13:31:13.737415 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:31:13.737421 kernel: Rude variant of Tasks RCU enabled.
Dec 13 13:31:13.737427 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:31:13.737433 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:31:13.737440 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Dec 13 13:31:13.737446 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Dec 13 13:31:13.737451 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Dec 13 13:31:13.737457 kernel: Console: colour VGA+ 80x25
Dec 13 13:31:13.737463 kernel: printk: console [tty0] enabled
Dec 13 13:31:13.737469 kernel: printk: console [ttyS0] enabled
Dec 13 13:31:13.737474 kernel: ACPI: Core revision 20230628
Dec 13 13:31:13.737480 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Dec 13 13:31:13.737486 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 13:31:13.737493 kernel: x2apic enabled
Dec 13 13:31:13.737499 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 13:31:13.737513 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 13:31:13.737519 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Dec 13 13:31:13.737525 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Dec 13 13:31:13.737531 kernel: Disabled fast string operations
Dec 13 13:31:13.737537 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 13:31:13.737542 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 13:31:13.737548 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 13:31:13.737556 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 13:31:13.737562 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 13:31:13.737568 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Dec 13 13:31:13.737575 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 13:31:13.737581 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Dec 13 13:31:13.737587 kernel: RETBleed: Mitigation: Enhanced IBRS
Dec 13 13:31:13.737592 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 13:31:13.737598 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 13:31:13.737604 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 13:31:13.737611 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 13:31:13.737617 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 13:31:13.737623 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 13:31:13.737629 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 13:31:13.737635 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 13:31:13.737640 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 13:31:13.737646 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 13:31:13.737652 kernel: Freeing SMP alternatives memory: 32K Dec 13 13:31:13.737658 kernel: pid_max: default: 131072 minimum: 1024 Dec 13 13:31:13.737665 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 13:31:13.737671 kernel: landlock: Up and running. Dec 13 13:31:13.737677 kernel: SELinux: Initializing. Dec 13 13:31:13.737683 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 13:31:13.737689 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 13:31:13.737695 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Dec 13 13:31:13.737701 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Dec 13 13:31:13.737707 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Dec 13 13:31:13.737714 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Dec 13 13:31:13.737720 kernel: Performance Events: Skylake events, core PMU driver. Dec 13 13:31:13.737726 kernel: core: CPUID marked event: 'cpu cycles' unavailable Dec 13 13:31:13.737732 kernel: core: CPUID marked event: 'instructions' unavailable Dec 13 13:31:13.737737 kernel: core: CPUID marked event: 'bus cycles' unavailable Dec 13 13:31:13.737743 kernel: core: CPUID marked event: 'cache references' unavailable Dec 13 13:31:13.737748 kernel: core: CPUID marked event: 'cache misses' unavailable Dec 13 13:31:13.737754 kernel: core: CPUID marked event: 'branch instructions' unavailable Dec 13 13:31:13.737760 kernel: core: CPUID marked event: 'branch misses' unavailable Dec 13 13:31:13.737766 kernel: ... version: 1 Dec 13 13:31:13.737772 kernel: ... bit width: 48 Dec 13 13:31:13.737778 kernel: ... generic registers: 4 Dec 13 13:31:13.737784 kernel: ... value mask: 0000ffffffffffff Dec 13 13:31:13.737790 kernel: ... 
max period: 000000007fffffff Dec 13 13:31:13.737795 kernel: ... fixed-purpose events: 0 Dec 13 13:31:13.737801 kernel: ... event mask: 000000000000000f Dec 13 13:31:13.737807 kernel: signal: max sigframe size: 1776 Dec 13 13:31:13.737813 kernel: rcu: Hierarchical SRCU implementation. Dec 13 13:31:13.737820 kernel: rcu: Max phase no-delay instances is 400. Dec 13 13:31:13.737826 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 13:31:13.737831 kernel: smp: Bringing up secondary CPUs ... Dec 13 13:31:13.737837 kernel: smpboot: x86: Booting SMP configuration: Dec 13 13:31:13.737843 kernel: .... node #0, CPUs: #1 Dec 13 13:31:13.737849 kernel: Disabled fast string operations Dec 13 13:31:13.737855 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Dec 13 13:31:13.737860 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Dec 13 13:31:13.737866 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 13:31:13.737872 kernel: smpboot: Max logical packages: 128 Dec 13 13:31:13.737878 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Dec 13 13:31:13.737884 kernel: devtmpfs: initialized Dec 13 13:31:13.737890 kernel: x86/mm: Memory block size: 128MB Dec 13 13:31:13.737896 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Dec 13 13:31:13.737902 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 13:31:13.737908 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Dec 13 13:31:13.737914 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 13:31:13.737919 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 13:31:13.737927 kernel: audit: initializing netlink subsys (disabled) Dec 13 13:31:13.737934 kernel: audit: type=2000 audit(1734096671.066:1): state=initialized audit_enabled=0 res=1 Dec 13 13:31:13.737939 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 13:31:13.737945 
kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 13:31:13.737951 kernel: cpuidle: using governor menu Dec 13 13:31:13.737957 kernel: Simple Boot Flag at 0x36 set to 0x80 Dec 13 13:31:13.737962 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 13:31:13.737968 kernel: dca service started, version 1.12.1 Dec 13 13:31:13.737974 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Dec 13 13:31:13.737980 kernel: PCI: Using configuration type 1 for base access Dec 13 13:31:13.737987 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 13 13:31:13.737993 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 13:31:13.737999 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 13:31:13.738005 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 13:31:13.738010 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 13:31:13.738016 kernel: ACPI: Added _OSI(Module Device) Dec 13 13:31:13.738022 kernel: ACPI: Added _OSI(Processor Device) Dec 13 13:31:13.738028 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 13:31:13.738033 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 13:31:13.738040 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 13:31:13.738046 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Dec 13 13:31:13.738052 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 13:31:13.738058 kernel: ACPI: Interpreter enabled Dec 13 13:31:13.738071 kernel: ACPI: PM: (supports S0 S1 S5) Dec 13 13:31:13.738077 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 13:31:13.738083 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 13:31:13.738088 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 13:31:13.738094 kernel: ACPI: Enabled 4 
GPEs in block 00 to 0F Dec 13 13:31:13.738102 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Dec 13 13:31:13.738178 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 13:31:13.738234 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Dec 13 13:31:13.738288 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Dec 13 13:31:13.738297 kernel: PCI host bridge to bus 0000:00 Dec 13 13:31:13.738347 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 13:31:13.738395 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Dec 13 13:31:13.738438 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 13:31:13.738480 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 13:31:13.738547 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Dec 13 13:31:13.738591 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Dec 13 13:31:13.738649 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Dec 13 13:31:13.738703 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Dec 13 13:31:13.738760 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Dec 13 13:31:13.738813 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Dec 13 13:31:13.738863 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Dec 13 13:31:13.738912 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Dec 13 13:31:13.738960 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Dec 13 13:31:13.739009 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Dec 13 13:31:13.739060 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Dec 13 13:31:13.739117 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Dec 13 13:31:13.739167 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed 
by PIIX4 ACPI Dec 13 13:31:13.739215 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Dec 13 13:31:13.739267 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Dec 13 13:31:13.739316 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Dec 13 13:31:13.739367 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Dec 13 13:31:13.739421 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Dec 13 13:31:13.739469 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Dec 13 13:31:13.739533 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Dec 13 13:31:13.739584 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Dec 13 13:31:13.739632 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Dec 13 13:31:13.739681 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 13:31:13.739735 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Dec 13 13:31:13.739792 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.739842 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.739896 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.739945 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.739998 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.740047 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.740104 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.740153 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.740209 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.740258 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.740311 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.740360 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot 
D3cold Dec 13 13:31:13.740416 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.740465 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.740898 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.740957 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741013 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.741064 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741120 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.741172 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741226 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.741281 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741335 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.741387 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741440 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.741489 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741551 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.741600 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741653 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.741703 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741758 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.741808 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741861 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.741910 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741962 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.742011 kernel: pci 0000:00:17.1: 
PME# supported from D0 D3hot D3cold Dec 13 13:31:13.742066 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.742116 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.742168 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.742218 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.742270 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.742320 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.742377 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.742427 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.742479 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.744487 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.744571 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.744626 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.744685 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.744736 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.744789 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.744839 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.744893 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.744943 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.744996 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.745049 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.745102 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.745152 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.745209 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Dec 13 
13:31:13.745264 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.745319 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.745372 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.745426 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.745475 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.745543 kernel: pci_bus 0000:01: extended config space not accessible Dec 13 13:31:13.745596 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 13:31:13.745646 kernel: pci_bus 0000:02: extended config space not accessible Dec 13 13:31:13.745657 kernel: acpiphp: Slot [32] registered Dec 13 13:31:13.745663 kernel: acpiphp: Slot [33] registered Dec 13 13:31:13.745669 kernel: acpiphp: Slot [34] registered Dec 13 13:31:13.745675 kernel: acpiphp: Slot [35] registered Dec 13 13:31:13.745681 kernel: acpiphp: Slot [36] registered Dec 13 13:31:13.745687 kernel: acpiphp: Slot [37] registered Dec 13 13:31:13.745693 kernel: acpiphp: Slot [38] registered Dec 13 13:31:13.745698 kernel: acpiphp: Slot [39] registered Dec 13 13:31:13.745704 kernel: acpiphp: Slot [40] registered Dec 13 13:31:13.745711 kernel: acpiphp: Slot [41] registered Dec 13 13:31:13.745717 kernel: acpiphp: Slot [42] registered Dec 13 13:31:13.745723 kernel: acpiphp: Slot [43] registered Dec 13 13:31:13.745729 kernel: acpiphp: Slot [44] registered Dec 13 13:31:13.745735 kernel: acpiphp: Slot [45] registered Dec 13 13:31:13.745740 kernel: acpiphp: Slot [46] registered Dec 13 13:31:13.745756 kernel: acpiphp: Slot [47] registered Dec 13 13:31:13.745762 kernel: acpiphp: Slot [48] registered Dec 13 13:31:13.745768 kernel: acpiphp: Slot [49] registered Dec 13 13:31:13.745774 kernel: acpiphp: Slot [50] registered Dec 13 13:31:13.745781 kernel: acpiphp: Slot [51] registered Dec 13 13:31:13.745787 kernel: acpiphp: Slot [52] registered Dec 13 13:31:13.745792 kernel: acpiphp: Slot [53] registered 
Dec 13 13:31:13.745798 kernel: acpiphp: Slot [54] registered Dec 13 13:31:13.745804 kernel: acpiphp: Slot [55] registered Dec 13 13:31:13.745810 kernel: acpiphp: Slot [56] registered Dec 13 13:31:13.745816 kernel: acpiphp: Slot [57] registered Dec 13 13:31:13.745821 kernel: acpiphp: Slot [58] registered Dec 13 13:31:13.745827 kernel: acpiphp: Slot [59] registered Dec 13 13:31:13.745834 kernel: acpiphp: Slot [60] registered Dec 13 13:31:13.745840 kernel: acpiphp: Slot [61] registered Dec 13 13:31:13.745845 kernel: acpiphp: Slot [62] registered Dec 13 13:31:13.745851 kernel: acpiphp: Slot [63] registered Dec 13 13:31:13.745902 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Dec 13 13:31:13.745951 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Dec 13 13:31:13.745998 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Dec 13 13:31:13.746046 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 13 13:31:13.746094 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Dec 13 13:31:13.746145 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Dec 13 13:31:13.746193 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Dec 13 13:31:13.746242 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Dec 13 13:31:13.746290 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Dec 13 13:31:13.746345 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Dec 13 13:31:13.746396 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Dec 13 13:31:13.746446 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Dec 13 13:31:13.746498 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Dec 13 13:31:13.748581 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Dec 13 
13:31:13.748637 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Dec 13 13:31:13.748690 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Dec 13 13:31:13.748740 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Dec 13 13:31:13.748789 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Dec 13 13:31:13.748840 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Dec 13 13:31:13.748893 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Dec 13 13:31:13.749011 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Dec 13 13:31:13.749358 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Dec 13 13:31:13.749426 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Dec 13 13:31:13.749480 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Dec 13 13:31:13.749550 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Dec 13 13:31:13.749601 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Dec 13 13:31:13.749653 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Dec 13 13:31:13.749706 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Dec 13 13:31:13.749755 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Dec 13 13:31:13.749805 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Dec 13 13:31:13.749853 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Dec 13 13:31:13.749903 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Dec 13 13:31:13.750069 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Dec 13 13:31:13.750130 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Dec 13 13:31:13.750180 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Dec 13 13:31:13.750231 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Dec 13 13:31:13.750279 kernel: pci 0000:00:15.6: bridge window [mem 
0xfbd00000-0xfbdfffff] Dec 13 13:31:13.750328 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Dec 13 13:31:13.750377 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Dec 13 13:31:13.750428 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Dec 13 13:31:13.750477 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Dec 13 13:31:13.750543 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Dec 13 13:31:13.750604 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Dec 13 13:31:13.750655 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Dec 13 13:31:13.750704 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Dec 13 13:31:13.750754 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Dec 13 13:31:13.750804 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Dec 13 13:31:13.750857 kernel: pci 0000:0b:00.0: supports D1 D2 Dec 13 13:31:13.750907 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 13:31:13.750956 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Dec 13 13:31:13.751006 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Dec 13 13:31:13.751055 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Dec 13 13:31:13.751103 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Dec 13 13:31:13.751152 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Dec 13 13:31:13.751203 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Dec 13 13:31:13.751252 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Dec 13 13:31:13.751300 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Dec 13 13:31:13.751352 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Dec 13 13:31:13.751401 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Dec 13 13:31:13.751449 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Dec 13 13:31:13.751498 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Dec 13 13:31:13.751736 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Dec 13 13:31:13.751791 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Dec 13 13:31:13.751839 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Dec 13 13:31:13.751890 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Dec 13 13:31:13.751939 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Dec 13 13:31:13.751988 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Dec 13 13:31:13.752038 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Dec 13 13:31:13.752087 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Dec 13 13:31:13.752135 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Dec 13 13:31:13.752189 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Dec 13 13:31:13.752237 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Dec 13 13:31:13.752290 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Dec 13 13:31:13.752340 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Dec 13 13:31:13.752388 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Dec 13 13:31:13.752436 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 13 13:31:13.752487 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Dec 13 13:31:13.752559 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Dec 13 13:31:13.752614 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Dec 13 13:31:13.752662 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 13 13:31:13.752713 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Dec 13 13:31:13.752762 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Dec 13 13:31:13.752810 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Dec 13 13:31:13.752859 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Dec 13 13:31:13.752909 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Dec 13 13:31:13.752961 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Dec 13 13:31:13.753009 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Dec 13 13:31:13.753059 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Dec 13 13:31:13.753110 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Dec 13 13:31:13.753160 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Dec 13 13:31:13.753208 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 13 13:31:13.753259 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Dec 13 13:31:13.753308 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Dec 13 13:31:13.753359 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 13 13:31:13.753409 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Dec 13 13:31:13.753458 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Dec 13 13:31:13.753523 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Dec 13 13:31:13.753577 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Dec 13 13:31:13.753626 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Dec 13 13:31:13.753675 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Dec 13 13:31:13.753726 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Dec 13 13:31:13.753778 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Dec 13 13:31:13.753827 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Dec 13 13:31:13.753876 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Dec 13 13:31:13.753925 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Dec 13 13:31:13.753974 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Dec 13 13:31:13.754023 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Dec 13 13:31:13.754072 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Dec 13 13:31:13.754121 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Dec 13 13:31:13.754173 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Dec 13 13:31:13.754221 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Dec 13 13:31:13.754270 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Dec 13 13:31:13.754319 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Dec 13 13:31:13.754368 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Dec 13 13:31:13.754418 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Dec 13 13:31:13.754467 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Dec 13 13:31:13.755545 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Dec 13 13:31:13.755611 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Dec 13 
13:31:13.755665 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Dec 13 13:31:13.755715 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Dec 13 13:31:13.755766 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Dec 13 13:31:13.755816 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Dec 13 13:31:13.755865 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Dec 13 13:31:13.755916 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Dec 13 13:31:13.755965 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Dec 13 13:31:13.756017 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Dec 13 13:31:13.756068 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Dec 13 13:31:13.756117 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Dec 13 13:31:13.756166 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Dec 13 13:31:13.756175 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Dec 13 13:31:13.756182 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Dec 13 13:31:13.756188 kernel: ACPI: PCI: Interrupt link LNKB disabled Dec 13 13:31:13.756194 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 13:31:13.756202 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Dec 13 13:31:13.756208 kernel: iommu: Default domain type: Translated Dec 13 13:31:13.756214 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 13:31:13.756220 kernel: PCI: Using ACPI for IRQ routing Dec 13 13:31:13.756226 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 13:31:13.756232 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Dec 13 13:31:13.756238 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Dec 13 13:31:13.756286 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Dec 13 13:31:13.756335 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Dec 13 13:31:13.756387 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 13:31:13.756396 kernel: vgaarb: loaded Dec 13 13:31:13.756402 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Dec 13 13:31:13.756408 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Dec 13 13:31:13.756414 kernel: clocksource: Switched to clocksource tsc-early Dec 13 13:31:13.756420 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 13:31:13.756426 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 13:31:13.756432 kernel: pnp: PnP ACPI init Dec 13 13:31:13.756485 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Dec 13 13:31:13.757559 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Dec 13 13:31:13.757610 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Dec 13 13:31:13.757661 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Dec 13 13:31:13.757709 kernel: pnp 00:06: [dma 2] Dec 13 13:31:13.757758 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Dec 13 13:31:13.757802 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Dec 13 13:31:13.757849 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Dec 13 13:31:13.757858 kernel: pnp: PnP ACPI: found 8 devices Dec 13 13:31:13.757864 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 13:31:13.757871 kernel: NET: Registered PF_INET protocol family Dec 13 13:31:13.757877 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 13:31:13.757883 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 13:31:13.757889 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 13:31:13.757895 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 13:31:13.757902 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 13:31:13.757908 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 13:31:13.757914 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 13:31:13.757920 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 13:31:13.757926 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 13:31:13.757932 kernel: NET: Registered PF_XDP protocol family Dec 13 13:31:13.757984 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 13:31:13.758037 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 13 13:31:13.758091 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 13 13:31:13.758141 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 13 13:31:13.758191 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 13 13:31:13.758240 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Dec 13 13:31:13.758295 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Dec 13 13:31:13.758345 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Dec 13 13:31:13.758396 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Dec 13 13:31:13.758447 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Dec 13 13:31:13.758497 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Dec 13 13:31:13.759010 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Dec 13 13:31:13.759065 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Dec 13 
13:31:13.759118 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Dec 13 13:31:13.759172 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Dec 13 13:31:13.759222 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Dec 13 13:31:13.759272 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Dec 13 13:31:13.759322 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Dec 13 13:31:13.759372 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Dec 13 13:31:13.759422 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Dec 13 13:31:13.759474 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Dec 13 13:31:13.759582 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Dec 13 13:31:13.759634 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Dec 13 13:31:13.759683 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Dec 13 13:31:13.759732 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Dec 13 13:31:13.759782 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.759833 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.759882 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.759931 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.759981 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760030 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.760080 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760128 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Dec 
13 13:31:13.760177 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760229 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.760278 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760327 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.760376 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760425 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.760474 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760537 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.760587 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760639 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.760689 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760738 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.760787 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760836 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.760901 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760948 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.760997 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761047 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.761094 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761142 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.761190 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761239 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.761291 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761340 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.761388 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761439 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.761487 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761582 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.761631 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761679 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.761727 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761774 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.761821 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761868 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.761938 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761986 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762034 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.762082 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762130 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.762178 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762227 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.762275 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762322 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.762373 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762422 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Dec 13 13:31:13.762470 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762529 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.762580 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762629 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.762678 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762727 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.762776 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762827 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.762877 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762925 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.762974 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763023 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763072 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763121 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763170 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763219 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763268 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763320 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763368 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763417 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763466 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763529 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763579 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763629 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763677 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763726 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763778 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763827 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763876 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763924 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763973 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.764023 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 13:31:13.764073 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Dec 13 13:31:13.764121 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Dec 13 13:31:13.764169 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Dec 13 13:31:13.764218 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 13 13:31:13.764276 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Dec 13 13:31:13.764326 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Dec 13 13:31:13.764376 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Dec 13 13:31:13.764425 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Dec 13 13:31:13.764474 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Dec 13 13:31:13.764582 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Dec 13 13:31:13.764634 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Dec 13 13:31:13.764683 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Dec 13 13:31:13.764735 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Dec 13 
13:31:13.764784 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Dec 13 13:31:13.764833 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Dec 13 13:31:13.764881 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Dec 13 13:31:13.764930 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Dec 13 13:31:13.764979 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Dec 13 13:31:13.765028 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Dec 13 13:31:13.765076 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Dec 13 13:31:13.765125 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Dec 13 13:31:13.765176 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Dec 13 13:31:13.765225 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Dec 13 13:31:13.765283 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Dec 13 13:31:13.765332 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Dec 13 13:31:13.765382 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Dec 13 13:31:13.765431 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Dec 13 13:31:13.765482 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Dec 13 13:31:13.765547 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Dec 13 13:31:13.765597 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Dec 13 13:31:13.765647 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Dec 13 13:31:13.765696 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Dec 13 13:31:13.765749 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Dec 13 13:31:13.765799 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Dec 13 13:31:13.765848 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Dec 13 13:31:13.765898 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Dec 13 13:31:13.765951 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Dec 13 13:31:13.766002 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Dec 13 13:31:13.766051 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Dec 13 13:31:13.766100 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Dec 13 13:31:13.766149 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Dec 13 13:31:13.766200 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Dec 13 13:31:13.766250 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Dec 13 13:31:13.766299 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Dec 13 13:31:13.766348 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Dec 13 13:31:13.766397 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Dec 13 13:31:13.766449 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Dec 13 13:31:13.766558 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Dec 13 13:31:13.766617 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Dec 13 13:31:13.766666 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Dec 13 13:31:13.766714 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Dec 13 13:31:13.766762 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Dec 13 13:31:13.766811 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Dec 13 13:31:13.766859 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Dec 13 13:31:13.766908 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Dec 13 13:31:13.766959 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Dec 13 13:31:13.767007 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Dec 13 13:31:13.767056 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Dec 13 13:31:13.767105 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Dec 13 13:31:13.767153 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 13 13:31:13.767204 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Dec 13 13:31:13.767253 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Dec 13 13:31:13.767300 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Dec 13 13:31:13.767349 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 13 13:31:13.767400 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Dec 13 13:31:13.767451 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Dec 13 13:31:13.769538 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Dec 13 13:31:13.769601 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Dec 13 13:31:13.769655 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Dec 13 13:31:13.769707 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Dec 13 13:31:13.769757 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Dec 13 13:31:13.769806 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Dec 13 13:31:13.769856 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Dec 13 13:31:13.769906 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Dec 13 13:31:13.769957 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 13 13:31:13.770007 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Dec 13 13:31:13.770056 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Dec 13 13:31:13.770105 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 13 13:31:13.770156 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Dec 13 13:31:13.770204 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Dec 13 13:31:13.770253 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Dec 13 
13:31:13.770303 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Dec 13 13:31:13.770352 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Dec 13 13:31:13.770402 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Dec 13 13:31:13.770454 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Dec 13 13:31:13.770530 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Dec 13 13:31:13.770585 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Dec 13 13:31:13.770636 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Dec 13 13:31:13.770685 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Dec 13 13:31:13.770734 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Dec 13 13:31:13.770783 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Dec 13 13:31:13.770833 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Dec 13 13:31:13.770882 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Dec 13 13:31:13.770934 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Dec 13 13:31:13.770983 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Dec 13 13:31:13.771032 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Dec 13 13:31:13.771081 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Dec 13 13:31:13.771131 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Dec 13 13:31:13.771180 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Dec 13 13:31:13.771230 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Dec 13 13:31:13.771289 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Dec 13 13:31:13.771340 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Dec 13 13:31:13.771389 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Dec 13 13:31:13.771441 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Dec 13 13:31:13.771492 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Dec 13 13:31:13.771555 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Dec 13 13:31:13.771605 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Dec 13 13:31:13.771718 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Dec 13 13:31:13.771976 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Dec 13 13:31:13.772046 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Dec 13 13:31:13.772125 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Dec 13 13:31:13.772178 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Dec 13 13:31:13.772230 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Dec 13 13:31:13.772281 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Dec 13 13:31:13.772326 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Dec 13 13:31:13.772369 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Dec 13 13:31:13.772412 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Dec 13 13:31:13.772456 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Dec 13 13:31:13.772550 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Dec 13 13:31:13.772599 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Dec 13 13:31:13.772646 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 13 13:31:13.772691 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Dec 13 13:31:13.772735 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Dec 13 13:31:13.772779 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Dec 13 13:31:13.772823 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Dec 13 13:31:13.772868 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Dec 13 13:31:13.772917 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Dec 13 13:31:13.772965 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Dec 13 13:31:13.773009 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Dec 13 13:31:13.773058 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Dec 13 13:31:13.773102 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Dec 13 13:31:13.773146 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Dec 13 13:31:13.773195 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Dec 13 13:31:13.773240 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Dec 13 13:31:13.773287 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Dec 13 13:31:13.773336 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Dec 13 13:31:13.773381 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Dec 13 13:31:13.773432 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Dec 13 13:31:13.773478 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Dec 13 13:31:13.773533 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Dec 13 13:31:13.773581 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Dec 13 13:31:13.773630 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Dec 13 13:31:13.773675 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Dec 13 13:31:13.773741 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Dec 13 13:31:13.773799 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Dec 13 13:31:13.773852 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Dec 13 13:31:13.773899 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Dec 13 13:31:13.773945 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Dec 13 13:31:13.773994 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Dec 13 13:31:13.774040 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Dec 13 13:31:13.774085 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Dec 13 13:31:13.774136 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Dec 13 13:31:13.774185 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Dec 13 13:31:13.774233 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Dec 13 13:31:13.774286 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Dec 13 13:31:13.774332 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Dec 13 13:31:13.774383 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Dec 13 13:31:13.774429 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Dec 13 13:31:13.774477 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Dec 13 13:31:13.775060 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Dec 13 13:31:13.775117 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Dec 13 13:31:13.775164 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Dec 13 13:31:13.775214 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Dec 13 13:31:13.775265 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 13 13:31:13.775315 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Dec 13 13:31:13.775363 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Dec 13 13:31:13.775409 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 13 13:31:13.775593 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Dec 13 13:31:13.775647 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Dec 13 13:31:13.775693 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Dec 13 13:31:13.775742 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Dec 13 13:31:13.775788 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Dec 13 13:31:13.775837 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Dec 13 13:31:13.775885 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Dec 13 13:31:13.775931 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 13 13:31:13.775982 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Dec 13 13:31:13.776030 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 13 13:31:13.776079 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Dec 13 13:31:13.776127 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Dec 13 13:31:13.776176 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Dec 13 13:31:13.776221 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Dec 13 13:31:13.776269 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Dec 13 13:31:13.776314 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Dec 13 13:31:13.776364 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Dec 13 13:31:13.776412 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Dec 13 13:31:13.776457 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Dec 13 13:31:13.776557 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Dec 13 13:31:13.776606 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Dec 13 13:31:13.776651 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Dec 13 13:31:13.776699 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Dec 13 13:31:13.776747 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Dec 13 13:31:13.776828 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Dec 13 13:31:13.776876 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Dec 13 13:31:13.776927 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Dec 13 13:31:13.776972 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Dec 13 13:31:13.777021 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Dec 13 13:31:13.777066 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Dec 13 13:31:13.777119 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Dec 13 13:31:13.777164 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Dec 13 13:31:13.777317 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Dec 13 13:31:13.777400 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Dec 13 13:31:13.777457 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 13:31:13.777467 kernel: PCI: CLS 32 bytes, default 64 Dec 13 13:31:13.777477 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 13:31:13.777484 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Dec 13 13:31:13.777490 kernel: clocksource: Switched to clocksource tsc Dec 13 13:31:13.777497 kernel: Initialise system trusted keyrings Dec 13 13:31:13.777510 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 13:31:13.777516 kernel: Key type asymmetric registered Dec 13 13:31:13.777523 kernel: Asymmetric key parser 'x509' registered Dec 13 13:31:13.777529 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 13:31:13.777563 kernel: io scheduler mq-deadline registered Dec 13 13:31:13.777572 kernel: io scheduler kyber registered Dec 13 13:31:13.777578 kernel: io scheduler bfq registered Dec 13 13:31:13.777635 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Dec 13 13:31:13.777688 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.777740 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
Dec 13 13:31:13.777790 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.777841 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
Dec 13 13:31:13.777892 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.777946 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
Dec 13 13:31:13.777997 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.778048 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
Dec 13 13:31:13.778098 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.778148 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
Dec 13 13:31:13.778201 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.778251 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
Dec 13 13:31:13.778302 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.778419 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
Dec 13 13:31:13.778470 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.778545 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
Dec 13 13:31:13.778602 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.778653 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
Dec 13 13:31:13.778705 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.778756 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
Dec 13 13:31:13.778806 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.778856 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
Dec 13 13:31:13.778910 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.778964 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
Dec 13 13:31:13.779014 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.779065 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
Dec 13 13:31:13.779115 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.779166 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
Dec 13 13:31:13.779219 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.779269 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
Dec 13 13:31:13.779320 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.779371 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
Dec 13 13:31:13.779421 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.779471 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
Dec 13 13:31:13.779566 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.779618 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Dec 13 13:31:13.779667 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.779717 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Dec 13 13:31:13.779766 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.779816 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Dec 13 13:31:13.779869 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.779918 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Dec 13 13:31:13.779969 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.780018 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Dec 13 13:31:13.780067 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.780117 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Dec 13 13:31:13.780170 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.780220 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Dec 13 13:31:13.780269 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.780319 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Dec 13 13:31:13.780369 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.780419 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Dec 13 13:31:13.780472 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.782113 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Dec 13 13:31:13.782175 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.782230 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Dec 13 13:31:13.782283 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.782339 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
Dec 13 13:31:13.782390 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.782441 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
Dec 13 13:31:13.782491 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.782551 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
Dec 13 13:31:13.782601 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 13:31:13.782614 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 13:31:13.782621 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 13:31:13.782627 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 13:31:13.782634 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
Dec 13 13:31:13.782640 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 13:31:13.782646 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 13:31:13.782697 kernel: rtc_cmos 00:01: registered as rtc0
Dec 13 13:31:13.782746 kernel: rtc_cmos 00:01: setting system clock to 2024-12-13T13:31:13 UTC (1734096673)
Dec 13 13:31:13.782791 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
Dec 13 13:31:13.782800 kernel: intel_pstate: CPU model not supported
Dec 13 13:31:13.782806 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 13:31:13.782813 kernel: NET: Registered PF_INET6 protocol family
Dec 13 13:31:13.782819 kernel: Segment Routing with IPv6
Dec 13 13:31:13.782826 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 13:31:13.782832 kernel: NET: Registered PF_PACKET protocol family
Dec 13 13:31:13.782838 kernel: Key type dns_resolver registered
Dec 13 13:31:13.782846 kernel: IPI shorthand broadcast: enabled
Dec 13 13:31:13.782852 kernel: sched_clock: Marking stable (955166365, 224609646)->(1195476067, -15700056)
Dec 13 13:31:13.782859 kernel: registered taskstats version 1
Dec 13 13:31:13.782865 kernel: Loading compiled-in X.509 certificates
Dec 13 13:31:13.782871 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162'
Dec 13 13:31:13.782878 kernel: Key type .fscrypt registered
Dec 13 13:31:13.782884 kernel: Key type fscrypt-provisioning registered
Dec 13 13:31:13.782890 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 13:31:13.782897 kernel: ima: Allocated hash algorithm: sha1
Dec 13 13:31:13.782903 kernel: ima: No architecture policies found
Dec 13 13:31:13.782910 kernel: clk: Disabling unused clocks
Dec 13 13:31:13.782916 kernel: Freeing unused kernel image (initmem) memory: 43328K
Dec 13 13:31:13.782922 kernel: Write protecting the kernel read-only data: 38912k
Dec 13 13:31:13.782929 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Dec 13 13:31:13.782935 kernel: Run /init as init process
Dec 13 13:31:13.782942 kernel: with arguments:
Dec 13 13:31:13.782948 kernel: /init
Dec 13 13:31:13.782954 kernel: with environment:
Dec 13 13:31:13.782961 kernel: HOME=/
Dec 13 13:31:13.782967 kernel: TERM=linux
Dec 13 13:31:13.782973 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 13:31:13.782981 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:31:13.782989 systemd[1]: Detected virtualization vmware.
Dec 13 13:31:13.782996 systemd[1]: Detected architecture x86-64.
Dec 13 13:31:13.783002 systemd[1]: Running in initrd.
Dec 13 13:31:13.783008 systemd[1]: No hostname configured, using default hostname.
Dec 13 13:31:13.783016 systemd[1]: Hostname set to .
Dec 13 13:31:13.783023 systemd[1]: Initializing machine ID from random generator.
Dec 13 13:31:13.783029 systemd[1]: Queued start job for default target initrd.target.
Dec 13 13:31:13.783036 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:31:13.783042 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:31:13.783050 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 13:31:13.783056 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:31:13.783064 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 13:31:13.783071 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 13:31:13.783079 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 13:31:13.783086 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 13:31:13.783092 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:31:13.783099 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:31:13.783106 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:31:13.783114 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:31:13.783120 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:31:13.783127 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:31:13.783133 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:31:13.783140 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:31:13.783147 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 13:31:13.783154 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 13:31:13.783160 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:31:13.783167 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:31:13.783175 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:31:13.783181 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:31:13.783188 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 13:31:13.783195 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:31:13.783201 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 13:31:13.783207 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 13:31:13.783214 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:31:13.783221 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:31:13.783229 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:31:13.783235 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 13:31:13.783253 systemd-journald[216]: Collecting audit messages is disabled.
Dec 13 13:31:13.783269 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:31:13.783278 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 13:31:13.783285 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:31:13.783292 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:31:13.783298 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:31:13.783305 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 13:31:13.783313 kernel: Bridge firewalling registered
Dec 13 13:31:13.783319 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:31:13.783326 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:31:13.783333 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:31:13.783340 systemd-journald[216]: Journal started
Dec 13 13:31:13.783355 systemd-journald[216]: Runtime Journal (/run/log/journal/ccff4398c6bf457098231338991eb52a) is 4.8M, max 38.6M, 33.8M free.
Dec 13 13:31:13.748201 systemd-modules-load[217]: Inserted module 'overlay'
Dec 13 13:31:13.775491 systemd-modules-load[217]: Inserted module 'br_netfilter'
Dec 13 13:31:13.790769 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:31:13.790800 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:31:13.791426 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:31:13.792740 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:31:13.793812 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 13:31:13.795589 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:31:13.795809 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:31:13.802225 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:31:13.804062 dracut-cmdline[245]: dracut-dracut-053
Dec 13 13:31:13.806345 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:31:13.807967 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:31:13.823445 systemd-resolved[257]: Positive Trust Anchors:
Dec 13 13:31:13.823451 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:31:13.823474 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:31:13.825052 systemd-resolved[257]: Defaulting to hostname 'linux'.
Dec 13 13:31:13.825864 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:31:13.826121 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:31:13.852519 kernel: SCSI subsystem initialized
Dec 13 13:31:13.859738 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 13:31:13.864514 kernel: iscsi: registered transport (tcp)
Dec 13 13:31:13.877516 kernel: iscsi: registered transport (qla4xxx)
Dec 13 13:31:13.877568 kernel: QLogic iSCSI HBA Driver
Dec 13 13:31:13.897124 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:31:13.900599 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 13:31:13.914808 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 13:31:13.914848 kernel: device-mapper: uevent: version 1.0.3
Dec 13 13:31:13.915972 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 13:31:13.947522 kernel: raid6: avx2x4 gen() 47172 MB/s
Dec 13 13:31:13.963521 kernel: raid6: avx2x2 gen() 52709 MB/s
Dec 13 13:31:13.980707 kernel: raid6: avx2x1 gen() 44354 MB/s
Dec 13 13:31:13.980761 kernel: raid6: using algorithm avx2x2 gen() 52709 MB/s
Dec 13 13:31:13.998719 kernel: raid6: .... xor() 32194 MB/s, rmw enabled
Dec 13 13:31:13.998772 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 13:31:14.011517 kernel: xor: automatically using best checksumming function avx
Dec 13 13:31:14.105526 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 13:31:14.110528 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:31:14.114613 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:31:14.122728 systemd-udevd[434]: Using default interface naming scheme 'v255'.
Dec 13 13:31:14.125153 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:31:14.134631 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 13:31:14.141385 dracut-pre-trigger[436]: rd.md=0: removing MD RAID activation
Dec 13 13:31:14.156780 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:31:14.161619 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:31:14.233078 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:31:14.238606 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 13:31:14.249944 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:31:14.250999 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:31:14.251307 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:31:14.251577 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:31:14.255644 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 13:31:14.266838 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:31:14.304512 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Dec 13 13:31:14.308767 kernel: vmw_pvscsi: using 64bit dma
Dec 13 13:31:14.308797 kernel: vmw_pvscsi: max_id: 16
Dec 13 13:31:14.308806 kernel: vmw_pvscsi: setting ring_pages to 8
Dec 13 13:31:14.319620 kernel: vmw_pvscsi: enabling reqCallThreshold
Dec 13 13:31:14.319655 kernel: vmw_pvscsi: driver-based request coalescing enabled
Dec 13 13:31:14.319664 kernel: vmw_pvscsi: using MSI-X
Dec 13 13:31:14.319671 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Dec 13 13:31:14.320641 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI
Dec 13 13:31:14.322615 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Dec 13 13:31:14.337541 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 13:31:14.337559 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Dec 13 13:31:14.337662 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Dec 13 13:31:14.337759 kernel: libata version 3.00 loaded.
Dec 13 13:31:14.337768 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Dec 13 13:31:14.339713 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:31:14.339794 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:31:14.340414 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:31:14.340563 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:31:14.340633 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:31:14.340778 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:31:14.344522 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Dec 13 13:31:14.344691 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:31:14.347365 kernel: ata_piix 0000:00:07.1: version 2.13
Dec 13 13:31:14.357995 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 13:31:14.358010 kernel: AES CTR mode by8 optimization enabled
Dec 13 13:31:14.358023 kernel: scsi host1: ata_piix
Dec 13 13:31:14.358100 kernel: scsi host2: ata_piix
Dec 13 13:31:14.358159 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Dec 13 13:31:14.358168 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Dec 13 13:31:14.363797 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Dec 13 13:31:14.372903 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 13:31:14.372980 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Dec 13 13:31:14.373041 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Dec 13 13:31:14.373099 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Dec 13 13:31:14.373158 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 13:31:14.373168 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 13:31:14.373453 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:31:14.376594 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:31:14.388139 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:31:14.526536 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Dec 13 13:31:14.533536 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Dec 13 13:31:14.557091 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Dec 13 13:31:14.566286 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 13:31:14.566306 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 13:31:14.571517 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (488)
Dec 13 13:31:14.575082 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Dec 13 13:31:14.578902 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Dec 13 13:31:14.581890 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Dec 13 13:31:14.582539 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (477)
Dec 13 13:31:14.588809 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Dec 13 13:31:14.589097 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Dec 13 13:31:14.598612 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 13:31:14.624522 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 13:31:15.634546 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 13:31:15.634594 disk-uuid[587]: The operation has completed successfully.
Dec 13 13:31:15.783022 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 13:31:15.783091 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 13:31:15.788669 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 13:31:15.791120 sh[604]: Success
Dec 13 13:31:15.801534 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 13:31:15.961918 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 13:31:15.963590 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 13:31:15.963953 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 13:31:15.980520 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52
Dec 13 13:31:15.980562 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:31:15.980574 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 13:31:15.981701 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 13:31:15.982555 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 13:31:16.086530 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 13:31:16.089252 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 13:31:16.099710 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Dec 13 13:31:16.101731 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 13:31:16.175139 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:31:16.175181 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:31:16.175190 kernel: BTRFS info (device sda6): using free space tree
Dec 13 13:31:16.225564 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 13:31:16.235133 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 13:31:16.236528 kernel: BTRFS info (device sda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:31:16.244267 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 13:31:16.247634 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 13:31:16.325610 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Dec 13 13:31:16.331678 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 13:31:16.388181 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:31:16.393661 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:31:16.406388 systemd-networkd[793]: lo: Link UP
Dec 13 13:31:16.406394 systemd-networkd[793]: lo: Gained carrier
Dec 13 13:31:16.407598 systemd-networkd[793]: Enumeration completed
Dec 13 13:31:16.407863 systemd-networkd[793]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Dec 13 13:31:16.411316 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Dec 13 13:31:16.411445 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Dec 13 13:31:16.411431 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:31:16.411768 systemd-networkd[793]: ens192: Link UP
Dec 13 13:31:16.411770 systemd-networkd[793]: ens192: Gained carrier
Dec 13 13:31:16.411954 systemd[1]: Reached target network.target - Network.
Dec 13 13:31:16.443096 ignition[665]: Ignition 2.20.0
Dec 13 13:31:16.443111 ignition[665]: Stage: fetch-offline
Dec 13 13:31:16.443157 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:31:16.443167 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 13:31:16.443250 ignition[665]: parsed url from cmdline: ""
Dec 13 13:31:16.443254 ignition[665]: no config URL provided
Dec 13 13:31:16.443258 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:31:16.443266 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:31:16.443812 ignition[665]: config successfully fetched
Dec 13 13:31:16.443841 ignition[665]: parsing config with SHA512: 5b04d72a530caa1019f1231c71478a7dd01f2f4aa5b74fc8aa345e22055dff7e9f3bca7b4cb56b0510269c72e7201f102eb9ab2ce0af206a7b33496d45651226
Dec 13 13:31:16.447122 unknown[665]: fetched base config from "system"
Dec 13 13:31:16.447134 unknown[665]: fetched user config from "vmware"
Dec 13 13:31:16.447411 ignition[665]: fetch-offline: fetch-offline passed
Dec 13 13:31:16.447491 ignition[665]: Ignition finished successfully
Dec 13 13:31:16.448316 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:31:16.448550 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 13:31:16.452629 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 13:31:16.460948 ignition[801]: Ignition 2.20.0
Dec 13 13:31:16.460955 ignition[801]: Stage: kargs
Dec 13 13:31:16.461069 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:31:16.461076 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 13:31:16.461686 ignition[801]: kargs: kargs passed
Dec 13 13:31:16.461722 ignition[801]: Ignition finished successfully
Dec 13 13:31:16.462989 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 13:31:16.468653 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 13:31:16.477214 ignition[807]: Ignition 2.20.0
Dec 13 13:31:16.477224 ignition[807]: Stage: disks
Dec 13 13:31:16.477344 ignition[807]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:31:16.477350 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 13:31:16.477971 ignition[807]: disks: disks passed
Dec 13 13:31:16.478005 ignition[807]: Ignition finished successfully
Dec 13 13:31:16.478663 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 13:31:16.479083 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 13:31:16.479233 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 13:31:16.479424 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:31:16.479640 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:31:16.479813 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:31:16.483604 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 13:31:16.551604 systemd-fsck[816]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 13:31:16.552894 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 13:31:16.558621 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 13:31:16.667183 kernel: EXT4-fs (sda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none.
Dec 13 13:31:16.666660 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 13:31:16.667107 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:31:16.679593 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:31:16.683895 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 13:31:16.684182 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 13:31:16.684210 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 13:31:16.684226 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:31:16.688954 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 13:31:16.689611 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 13:31:16.708525 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (824)
Dec 13 13:31:16.725412 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:31:16.725468 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:31:16.725483 kernel: BTRFS info (device sda6): using free space tree
Dec 13 13:31:16.860528 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 13:31:16.870574 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:31:17.076857 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 13:31:17.092988 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory
Dec 13 13:31:17.103280 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 13:31:17.105659 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 13:31:17.220754 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 13:31:17.224565 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 13:31:17.227094 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 13:31:17.231172 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 13:31:17.232531 kernel: BTRFS info (device sda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:31:17.256095 ignition[936]: INFO : Ignition 2.20.0 Dec 13 13:31:17.256392 ignition[936]: INFO : Stage: mount Dec 13 13:31:17.256719 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:31:17.256851 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 13:31:17.257570 ignition[936]: INFO : mount: mount passed Dec 13 13:31:17.257717 ignition[936]: INFO : Ignition finished successfully Dec 13 13:31:17.258273 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 13:31:17.263570 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 13:31:17.268035 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:31:17.313734 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (944) Dec 13 13:31:17.313772 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:31:17.316803 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:31:17.316835 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:31:17.321519 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 13:31:17.323691 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 13:31:17.323952 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 13:31:17.338915 ignition[965]: INFO : Ignition 2.20.0 Dec 13 13:31:17.338915 ignition[965]: INFO : Stage: files Dec 13 13:31:17.339614 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:31:17.339614 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 13:31:17.339878 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Dec 13 13:31:17.347258 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 13:31:17.347258 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 13:31:17.368986 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 13:31:17.369199 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 13:31:17.369328 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 13:31:17.369262 unknown[965]: wrote ssh authorized keys file for user: core Dec 13 13:31:17.403947 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:31:17.404264 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 13:31:17.439271 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 13:31:17.524242 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:31:17.525021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 13:31:17.525021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 
13:31:17.525021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:31:17.525021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:31:17.525021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:31:17.525021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:31:17.525021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:31:17.525021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:31:17.525021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:31:17.526563 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:31:17.526563 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 13:31:17.526563 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 13:31:17.526563 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 13:31:17.526563 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 13:31:17.672700 systemd-networkd[793]: ens192: Gained IPv6LL Dec 13 13:31:18.008462 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 13:31:18.331236 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 13:31:18.331236 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Dec 13 13:31:18.331755 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Dec 13 13:31:18.331755 ignition[965]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 13:31:18.332467 ignition[965]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:31:18.332679 ignition[965]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:31:18.332679 ignition[965]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 13:31:18.332679 ignition[965]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 13 13:31:18.332679 ignition[965]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 13:31:18.333269 ignition[965]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 13:31:18.333269 ignition[965]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 13 13:31:18.333269 ignition[965]: INFO : files: op(10): [started] 
setting preset to disabled for "coreos-metadata.service" Dec 13 13:31:18.549666 ignition[965]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 13:31:18.553384 ignition[965]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 13:31:18.553651 ignition[965]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 13:31:18.553651 ignition[965]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 13 13:31:18.553651 ignition[965]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 13:31:18.554171 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:31:18.554171 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:31:18.554171 ignition[965]: INFO : files: files passed Dec 13 13:31:18.554171 ignition[965]: INFO : Ignition finished successfully Dec 13 13:31:18.555056 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 13:31:18.559714 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 13:31:18.562072 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 13:31:18.562776 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 13:31:18.563010 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Dec 13 13:31:18.572653 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:31:18.573049 initrd-setup-root-after-ignition[997]: grep: Dec 13 13:31:18.573431 initrd-setup-root-after-ignition[1001]: grep: Dec 13 13:31:18.573598 initrd-setup-root-after-ignition[997]: /sysroot/usr/share/flatcar/enabled-sysext.conf Dec 13 13:31:18.573766 initrd-setup-root-after-ignition[1001]: /sysroot/etc/flatcar/enabled-sysext.conf Dec 13 13:31:18.573766 initrd-setup-root-after-ignition[997]: : No such file or directory Dec 13 13:31:18.573611 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:31:18.574411 initrd-setup-root-after-ignition[1001]: : No such file or directory Dec 13 13:31:18.574215 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 13:31:18.578672 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 13:31:18.603127 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 13:31:18.603210 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 13:31:18.603752 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 13:31:18.604047 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 13:31:18.604335 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 13:31:18.605061 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 13:31:18.616021 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:31:18.621602 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 13:31:18.628350 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Dec 13 13:31:18.628528 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:31:18.628696 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 13:31:18.628881 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 13:31:18.628950 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:31:18.629157 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 13:31:18.629382 systemd[1]: Stopped target basic.target - Basic System. Dec 13 13:31:18.629581 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 13:31:18.629785 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:31:18.630118 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 13:31:18.630317 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 13:31:18.630520 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:31:18.630731 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 13:31:18.630920 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 13:31:18.631115 systemd[1]: Stopped target swap.target - Swaps. Dec 13 13:31:18.631285 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 13:31:18.631349 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:31:18.631604 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:31:18.631875 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:31:18.632027 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 13:31:18.632073 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:31:18.632233 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Dec 13 13:31:18.632335 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 13:31:18.632550 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 13:31:18.632610 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:31:18.632849 systemd[1]: Stopped target paths.target - Path Units. Dec 13 13:31:18.632983 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 13:31:18.636524 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:31:18.636691 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 13:31:18.636890 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 13:31:18.637068 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 13:31:18.637134 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:31:18.637335 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 13:31:18.637379 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:31:18.637622 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 13:31:18.637681 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:31:18.637928 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 13:31:18.637984 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 13:31:18.646700 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 13:31:18.649538 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 13:31:18.649656 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 13:31:18.649751 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:31:18.650023 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Dec 13 13:31:18.650100 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:31:18.652135 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 13:31:18.652195 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 13:31:18.656247 ignition[1021]: INFO : Ignition 2.20.0 Dec 13 13:31:18.656599 ignition[1021]: INFO : Stage: umount Dec 13 13:31:18.656786 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:31:18.657060 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 13:31:18.657650 ignition[1021]: INFO : umount: umount passed Dec 13 13:31:18.658160 ignition[1021]: INFO : Ignition finished successfully Dec 13 13:31:18.658475 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 13:31:18.658558 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 13:31:18.658807 systemd[1]: Stopped target network.target - Network. Dec 13 13:31:18.658905 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 13:31:18.658931 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 13:31:18.659074 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 13:31:18.659095 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 13:31:18.659233 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 13:31:18.659254 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 13:31:18.659396 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 13:31:18.659415 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 13:31:18.659634 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 13:31:18.659907 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 13:31:18.664241 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Dec 13 13:31:18.664413 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 13:31:18.664821 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 13:31:18.664870 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 13:31:18.666301 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 13:31:18.666324 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:31:18.669573 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 13:31:18.669668 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 13:31:18.669696 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:31:18.669828 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Dec 13 13:31:18.669850 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Dec 13 13:31:18.669968 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:31:18.669990 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:31:18.670096 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 13:31:18.670118 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 13:31:18.670227 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 13:31:18.670246 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:31:18.670400 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:31:18.676434 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 13:31:18.676507 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 13:31:18.682901 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Dec 13 13:31:18.682989 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:31:18.683329 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 13:31:18.683365 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 13:31:18.683580 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 13:31:18.683599 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:31:18.683808 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 13:31:18.683836 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:31:18.684106 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 13:31:18.684131 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 13:31:18.684418 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:31:18.684442 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:31:18.693737 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 13:31:18.693907 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 13:31:18.693951 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:31:18.694097 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 13:31:18.694119 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:31:18.694244 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 13:31:18.694270 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:31:18.694396 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:31:18.694419 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 13 13:31:18.695289 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 13:31:18.698889 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 13:31:18.698979 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 13:31:18.982418 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 13:31:18.982487 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 13:31:18.982838 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 13:31:18.982986 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 13:31:18.983022 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 13:31:18.987629 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 13:31:19.003183 systemd[1]: Switching root. Dec 13 13:31:19.050773 systemd-journald[216]: Journal stopped Dec 13 13:31:13.735516 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Dec 13 13:31:13.735522 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Dec 13 13:31:13.735527 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Dec 13 13:31:13.735532 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Dec 13 13:31:13.735537 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Dec 13 13:31:13.735544 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Dec 13 13:31:13.735549 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Dec 13 13:31:13.735554 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Dec 13 13:31:13.735560 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Dec 13 13:31:13.735565 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Dec 13 13:31:13.735570 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Dec 13 13:31:13.735575 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Dec 13 13:31:13.735580 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Dec 13 13:31:13.735585 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Dec 13 13:31:13.735591 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Dec 13 13:31:13.735596 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Dec 13 13:31:13.735601 kernel: system APIC only can use physical flat Dec 13 13:31:13.735606 kernel: APIC: Switched APIC routing to: physical flat Dec 13 13:31:13.735611 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 13:31:13.735617 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Dec 13 13:31:13.735622 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Dec 13 13:31:13.735627 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Dec 13 13:31:13.735632 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Dec 13 13:31:13.735637 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Dec 13 13:31:13.735643 
kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Dec 13 13:31:13.735648 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Dec 13 13:31:13.735653 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Dec 13 13:31:13.735658 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Dec 13 13:31:13.735663 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Dec 13 13:31:13.735668 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Dec 13 13:31:13.735673 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Dec 13 13:31:13.735678 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Dec 13 13:31:13.735683 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Dec 13 13:31:13.735688 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Dec 13 13:31:13.735694 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Dec 13 13:31:13.735699 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Dec 13 13:31:13.735704 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Dec 13 13:31:13.735709 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Dec 13 13:31:13.735714 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Dec 13 13:31:13.735719 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Dec 13 13:31:13.735724 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Dec 13 13:31:13.735729 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Dec 13 13:31:13.735733 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Dec 13 13:31:13.735739 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Dec 13 13:31:13.735744 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Dec 13 13:31:13.735750 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Dec 13 13:31:13.735754 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Dec 13 13:31:13.735759 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Dec 13 13:31:13.735765 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Dec 13 13:31:13.735770 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Dec 13 13:31:13.735775 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Dec 13 13:31:13.735780 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Dec 13 13:31:13.735785 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Dec 13 13:31:13.735789 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Dec 13 13:31:13.735796 kernel: SRAT: PXM 0 
-> APIC 0x48 -> Node 0 Dec 13 13:31:13.735801 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Dec 13 13:31:13.735805 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Dec 13 13:31:13.735811 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Dec 13 13:31:13.735815 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Dec 13 13:31:13.735821 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Dec 13 13:31:13.735826 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Dec 13 13:31:13.735831 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Dec 13 13:31:13.735835 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Dec 13 13:31:13.735840 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Dec 13 13:31:13.735846 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Dec 13 13:31:13.735851 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Dec 13 13:31:13.735857 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Dec 13 13:31:13.735861 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Dec 13 13:31:13.735867 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Dec 13 13:31:13.735871 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Dec 13 13:31:13.735877 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Dec 13 13:31:13.735882 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Dec 13 13:31:13.735887 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Dec 13 13:31:13.735892 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Dec 13 13:31:13.735898 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Dec 13 13:31:13.735903 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Dec 13 13:31:13.735908 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Dec 13 13:31:13.735916 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Dec 13 13:31:13.735923 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Dec 13 13:31:13.735928 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Dec 13 13:31:13.735933 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Dec 13 13:31:13.735939 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Dec 13 13:31:13.735944 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Dec 13 13:31:13.735950 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Dec 13 13:31:13.735956 kernel: SRAT: PXM 0 -> APIC 0x84 -> 
Node 0 Dec 13 13:31:13.735961 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Dec 13 13:31:13.735966 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Dec 13 13:31:13.735971 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Dec 13 13:31:13.735977 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Dec 13 13:31:13.735982 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Dec 13 13:31:13.735987 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Dec 13 13:31:13.735993 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Dec 13 13:31:13.735998 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Dec 13 13:31:13.736004 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Dec 13 13:31:13.736009 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Dec 13 13:31:13.736015 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Dec 13 13:31:13.736020 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Dec 13 13:31:13.736025 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Dec 13 13:31:13.736031 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Dec 13 13:31:13.736036 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Dec 13 13:31:13.736041 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Dec 13 13:31:13.736047 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Dec 13 13:31:13.736052 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Dec 13 13:31:13.736058 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Dec 13 13:31:13.736064 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Dec 13 13:31:13.736069 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Dec 13 13:31:13.736074 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Dec 13 13:31:13.736080 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Dec 13 13:31:13.736085 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Dec 13 13:31:13.736090 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Dec 13 13:31:13.736095 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Dec 13 13:31:13.736101 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Dec 13 13:31:13.736106 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Dec 13 13:31:13.736112 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Dec 13 13:31:13.736118 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Dec 13 
13:31:13.736123 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Dec 13 13:31:13.736128 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Dec 13 13:31:13.736134 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Dec 13 13:31:13.736139 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Dec 13 13:31:13.736144 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Dec 13 13:31:13.736150 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Dec 13 13:31:13.736155 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Dec 13 13:31:13.736160 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Dec 13 13:31:13.736167 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Dec 13 13:31:13.736172 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Dec 13 13:31:13.736177 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Dec 13 13:31:13.736183 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Dec 13 13:31:13.736188 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Dec 13 13:31:13.736193 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Dec 13 13:31:13.736199 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Dec 13 13:31:13.736204 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Dec 13 13:31:13.736209 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Dec 13 13:31:13.736214 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Dec 13 13:31:13.736220 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Dec 13 13:31:13.736226 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Dec 13 13:31:13.736231 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Dec 13 13:31:13.736237 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Dec 13 13:31:13.736242 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Dec 13 13:31:13.736247 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Dec 13 13:31:13.736252 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Dec 13 13:31:13.736258 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Dec 13 13:31:13.736263 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Dec 13 13:31:13.736268 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Dec 13 13:31:13.736274 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Dec 13 13:31:13.736280 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Dec 13 13:31:13.736286 
kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Dec 13 13:31:13.736291 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 13:31:13.736297 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 13 13:31:13.736302 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Dec 13 13:31:13.736308 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Dec 13 13:31:13.736313 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Dec 13 13:31:13.736319 kernel: Zone ranges: Dec 13 13:31:13.736324 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 13:31:13.736331 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Dec 13 13:31:13.736336 kernel: Normal empty Dec 13 13:31:13.736342 kernel: Movable zone start for each node Dec 13 13:31:13.736347 kernel: Early memory node ranges Dec 13 13:31:13.736353 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Dec 13 13:31:13.736358 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Dec 13 13:31:13.736364 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Dec 13 13:31:13.736369 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Dec 13 13:31:13.736375 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 13:31:13.736380 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Dec 13 13:31:13.736387 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Dec 13 13:31:13.736392 kernel: ACPI: PM-Timer IO Port: 0x1008 Dec 13 13:31:13.736398 kernel: system APIC only can use physical flat Dec 13 13:31:13.736403 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Dec 13 13:31:13.736409 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Dec 13 13:31:13.736414 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Dec 13 13:31:13.736419 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Dec 13 13:31:13.736425 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x04] high edge lint[0x1]) Dec 13 13:31:13.736430 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Dec 13 13:31:13.736437 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Dec 13 13:31:13.736442 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Dec 13 13:31:13.736448 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Dec 13 13:31:13.736453 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Dec 13 13:31:13.736459 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Dec 13 13:31:13.736464 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Dec 13 13:31:13.736469 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Dec 13 13:31:13.736475 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Dec 13 13:31:13.736480 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Dec 13 13:31:13.736485 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Dec 13 13:31:13.736491 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Dec 13 13:31:13.736497 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Dec 13 13:31:13.736510 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Dec 13 13:31:13.736516 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Dec 13 13:31:13.736522 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Dec 13 13:31:13.736527 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Dec 13 13:31:13.736532 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Dec 13 13:31:13.736538 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Dec 13 13:31:13.736543 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Dec 13 13:31:13.736551 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Dec 13 13:31:13.736556 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Dec 13 13:31:13.736561 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Dec 13 13:31:13.736567 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x1c] high edge lint[0x1]) Dec 13 13:31:13.736572 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Dec 13 13:31:13.736578 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Dec 13 13:31:13.736583 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Dec 13 13:31:13.736588 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Dec 13 13:31:13.736594 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Dec 13 13:31:13.736599 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Dec 13 13:31:13.736606 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Dec 13 13:31:13.736611 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Dec 13 13:31:13.736616 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Dec 13 13:31:13.736622 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Dec 13 13:31:13.736627 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Dec 13 13:31:13.736633 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Dec 13 13:31:13.736638 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Dec 13 13:31:13.736643 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Dec 13 13:31:13.736649 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Dec 13 13:31:13.736654 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Dec 13 13:31:13.736660 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Dec 13 13:31:13.736666 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Dec 13 13:31:13.736671 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Dec 13 13:31:13.736677 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Dec 13 13:31:13.736682 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Dec 13 13:31:13.736687 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Dec 13 13:31:13.736693 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Dec 13 13:31:13.736698 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x34] high edge lint[0x1]) Dec 13 13:31:13.736703 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Dec 13 13:31:13.736709 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Dec 13 13:31:13.736715 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Dec 13 13:31:13.736721 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Dec 13 13:31:13.736726 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Dec 13 13:31:13.736732 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Dec 13 13:31:13.736737 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Dec 13 13:31:13.736742 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Dec 13 13:31:13.736748 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Dec 13 13:31:13.736753 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Dec 13 13:31:13.736758 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Dec 13 13:31:13.736765 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Dec 13 13:31:13.736770 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Dec 13 13:31:13.736775 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Dec 13 13:31:13.736781 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Dec 13 13:31:13.736786 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Dec 13 13:31:13.736791 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Dec 13 13:31:13.736797 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Dec 13 13:31:13.736802 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Dec 13 13:31:13.736808 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Dec 13 13:31:13.736813 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Dec 13 13:31:13.736819 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Dec 13 13:31:13.736825 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Dec 13 13:31:13.736830 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x4c] high edge lint[0x1]) Dec 13 13:31:13.736836 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Dec 13 13:31:13.736841 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Dec 13 13:31:13.736846 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Dec 13 13:31:13.736852 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Dec 13 13:31:13.736857 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Dec 13 13:31:13.736862 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Dec 13 13:31:13.736868 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Dec 13 13:31:13.736874 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Dec 13 13:31:13.736879 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Dec 13 13:31:13.736885 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Dec 13 13:31:13.736890 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Dec 13 13:31:13.736896 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Dec 13 13:31:13.736901 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Dec 13 13:31:13.736906 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Dec 13 13:31:13.736912 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Dec 13 13:31:13.736917 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Dec 13 13:31:13.736922 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Dec 13 13:31:13.736929 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Dec 13 13:31:13.736934 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Dec 13 13:31:13.736939 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Dec 13 13:31:13.736945 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Dec 13 13:31:13.736950 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Dec 13 13:31:13.736955 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Dec 13 13:31:13.736961 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x64] high edge lint[0x1]) Dec 13 13:31:13.736966 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Dec 13 13:31:13.736971 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Dec 13 13:31:13.736978 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Dec 13 13:31:13.736983 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Dec 13 13:31:13.736988 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Dec 13 13:31:13.736994 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Dec 13 13:31:13.736999 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Dec 13 13:31:13.737005 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Dec 13 13:31:13.737010 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Dec 13 13:31:13.737016 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Dec 13 13:31:13.737021 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Dec 13 13:31:13.737026 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Dec 13 13:31:13.737033 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Dec 13 13:31:13.737038 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Dec 13 13:31:13.737043 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Dec 13 13:31:13.737049 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Dec 13 13:31:13.737054 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Dec 13 13:31:13.737059 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Dec 13 13:31:13.737065 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Dec 13 13:31:13.737070 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Dec 13 13:31:13.737076 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Dec 13 13:31:13.737081 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Dec 13 13:31:13.737087 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Dec 13 13:31:13.737093 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x7c] high edge lint[0x1]) Dec 13 13:31:13.737098 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Dec 13 13:31:13.737103 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Dec 13 13:31:13.737109 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Dec 13 13:31:13.737114 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Dec 13 13:31:13.737120 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Dec 13 13:31:13.737125 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 13:31:13.737131 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Dec 13 13:31:13.737137 kernel: TSC deadline timer available Dec 13 13:31:13.737143 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Dec 13 13:31:13.737148 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Dec 13 13:31:13.737154 kernel: Booting paravirtualized kernel on VMware hypervisor Dec 13 13:31:13.737159 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 13:31:13.737165 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Dec 13 13:31:13.737171 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Dec 13 13:31:13.737176 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Dec 13 13:31:13.737182 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Dec 13 13:31:13.737188 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Dec 13 13:31:13.737194 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Dec 13 13:31:13.737199 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Dec 13 13:31:13.737205 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Dec 13 13:31:13.737217 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Dec 13 13:31:13.737224 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Dec 13 13:31:13.737229 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Dec 13 
13:31:13.737235 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Dec 13 13:31:13.737241 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Dec 13 13:31:13.737247 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Dec 13 13:31:13.737253 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Dec 13 13:31:13.737259 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Dec 13 13:31:13.737264 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Dec 13 13:31:13.737270 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Dec 13 13:31:13.737276 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Dec 13 13:31:13.737282 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:31:13.737288 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 13:31:13.737295 kernel: random: crng init done Dec 13 13:31:13.737301 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Dec 13 13:31:13.737307 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Dec 13 13:31:13.737312 kernel: printk: log_buf_len min size: 262144 bytes Dec 13 13:31:13.737318 kernel: printk: log_buf_len: 1048576 bytes Dec 13 13:31:13.737324 kernel: printk: early log buf free: 239648(91%) Dec 13 13:31:13.737330 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 13:31:13.737336 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 13:31:13.737342 kernel: Fallback order for Node 0: 0 Dec 13 13:31:13.737348 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 515808 Dec 13 13:31:13.737354 kernel: Policy zone: DMA32 Dec 13 13:31:13.737360 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 13:31:13.737366 kernel: Memory: 1934284K/2096628K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 162084K reserved, 0K cma-reserved) Dec 13 13:31:13.737373 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Dec 13 13:31:13.737380 kernel: ftrace: allocating 37874 entries in 148 pages Dec 13 13:31:13.737386 kernel: ftrace: allocated 148 pages with 3 groups Dec 13 13:31:13.737392 kernel: Dynamic Preempt: voluntary Dec 13 13:31:13.737398 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 13:31:13.737404 kernel: rcu: RCU event tracing is enabled. Dec 13 13:31:13.737409 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Dec 13 13:31:13.737415 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 13:31:13.737421 kernel: Rude variant of Tasks RCU enabled. Dec 13 13:31:13.737427 kernel: Tracing variant of Tasks RCU enabled. Dec 13 13:31:13.737433 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 13:31:13.737440 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Dec 13 13:31:13.737446 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Dec 13 13:31:13.737451 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. 
Dec 13 13:31:13.737457 kernel: Console: colour VGA+ 80x25 Dec 13 13:31:13.737463 kernel: printk: console [tty0] enabled Dec 13 13:31:13.737469 kernel: printk: console [ttyS0] enabled Dec 13 13:31:13.737474 kernel: ACPI: Core revision 20230628 Dec 13 13:31:13.737480 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Dec 13 13:31:13.737486 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 13:31:13.737493 kernel: x2apic enabled Dec 13 13:31:13.737499 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 13:31:13.737513 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 13:31:13.737519 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Dec 13 13:31:13.737525 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000) Dec 13 13:31:13.737531 kernel: Disabled fast string operations Dec 13 13:31:13.737537 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 13:31:13.737542 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 13:31:13.737548 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 13:31:13.737556 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Dec 13 13:31:13.737562 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Dec 13 13:31:13.737568 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Dec 13 13:31:13.737575 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 13:31:13.737581 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Dec 13 13:31:13.737587 kernel: RETBleed: Mitigation: Enhanced IBRS Dec 13 13:31:13.737592 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 13:31:13.737598 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via 
prctl Dec 13 13:31:13.737604 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 13:31:13.737611 kernel: SRBDS: Unknown: Dependent on hypervisor status Dec 13 13:31:13.737617 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 13:31:13.737623 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 13:31:13.737629 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 13:31:13.737635 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 13:31:13.737640 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 13:31:13.737646 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Dec 13 13:31:13.737652 kernel: Freeing SMP alternatives memory: 32K Dec 13 13:31:13.737658 kernel: pid_max: default: 131072 minimum: 1024 Dec 13 13:31:13.737665 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 13:31:13.737671 kernel: landlock: Up and running. Dec 13 13:31:13.737677 kernel: SELinux: Initializing. Dec 13 13:31:13.737683 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 13:31:13.737689 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 13:31:13.737695 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Dec 13 13:31:13.737701 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Dec 13 13:31:13.737707 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Dec 13 13:31:13.737714 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Dec 13 13:31:13.737720 kernel: Performance Events: Skylake events, core PMU driver. 
Dec 13 13:31:13.737726 kernel: core: CPUID marked event: 'cpu cycles' unavailable Dec 13 13:31:13.737732 kernel: core: CPUID marked event: 'instructions' unavailable Dec 13 13:31:13.737737 kernel: core: CPUID marked event: 'bus cycles' unavailable Dec 13 13:31:13.737743 kernel: core: CPUID marked event: 'cache references' unavailable Dec 13 13:31:13.737748 kernel: core: CPUID marked event: 'cache misses' unavailable Dec 13 13:31:13.737754 kernel: core: CPUID marked event: 'branch instructions' unavailable Dec 13 13:31:13.737760 kernel: core: CPUID marked event: 'branch misses' unavailable Dec 13 13:31:13.737766 kernel: ... version: 1 Dec 13 13:31:13.737772 kernel: ... bit width: 48 Dec 13 13:31:13.737778 kernel: ... generic registers: 4 Dec 13 13:31:13.737784 kernel: ... value mask: 0000ffffffffffff Dec 13 13:31:13.737790 kernel: ... max period: 000000007fffffff Dec 13 13:31:13.737795 kernel: ... fixed-purpose events: 0 Dec 13 13:31:13.737801 kernel: ... event mask: 000000000000000f Dec 13 13:31:13.737807 kernel: signal: max sigframe size: 1776 Dec 13 13:31:13.737813 kernel: rcu: Hierarchical SRCU implementation. Dec 13 13:31:13.737820 kernel: rcu: Max phase no-delay instances is 400. Dec 13 13:31:13.737826 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 13:31:13.737831 kernel: smp: Bringing up secondary CPUs ... Dec 13 13:31:13.737837 kernel: smpboot: x86: Booting SMP configuration: Dec 13 13:31:13.737843 kernel: .... 
node #0, CPUs: #1 Dec 13 13:31:13.737849 kernel: Disabled fast string operations Dec 13 13:31:13.737855 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Dec 13 13:31:13.737860 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Dec 13 13:31:13.737866 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 13:31:13.737872 kernel: smpboot: Max logical packages: 128 Dec 13 13:31:13.737878 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Dec 13 13:31:13.737884 kernel: devtmpfs: initialized Dec 13 13:31:13.737890 kernel: x86/mm: Memory block size: 128MB Dec 13 13:31:13.737896 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Dec 13 13:31:13.737902 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 13:31:13.737908 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Dec 13 13:31:13.737914 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 13:31:13.737919 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 13:31:13.737927 kernel: audit: initializing netlink subsys (disabled) Dec 13 13:31:13.737934 kernel: audit: type=2000 audit(1734096671.066:1): state=initialized audit_enabled=0 res=1 Dec 13 13:31:13.737939 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 13:31:13.737945 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 13:31:13.737951 kernel: cpuidle: using governor menu Dec 13 13:31:13.737957 kernel: Simple Boot Flag at 0x36 set to 0x80 Dec 13 13:31:13.737962 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 13:31:13.737968 kernel: dca service started, version 1.12.1 Dec 13 13:31:13.737974 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Dec 13 13:31:13.737980 kernel: PCI: Using configuration type 1 for base access Dec 13 13:31:13.737987 kernel: kprobes: kprobe jump-optimization is 
enabled. All kprobes are optimized if possible. Dec 13 13:31:13.737993 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 13:31:13.737999 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 13:31:13.738005 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 13:31:13.738010 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 13:31:13.738016 kernel: ACPI: Added _OSI(Module Device) Dec 13 13:31:13.738022 kernel: ACPI: Added _OSI(Processor Device) Dec 13 13:31:13.738028 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 13:31:13.738033 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 13:31:13.738040 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 13:31:13.738046 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Dec 13 13:31:13.738052 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 13:31:13.738058 kernel: ACPI: Interpreter enabled Dec 13 13:31:13.738071 kernel: ACPI: PM: (supports S0 S1 S5) Dec 13 13:31:13.738077 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 13:31:13.738083 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 13:31:13.738088 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 13:31:13.738094 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Dec 13 13:31:13.738102 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Dec 13 13:31:13.738178 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 13:31:13.738234 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Dec 13 13:31:13.738288 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Dec 13 13:31:13.738297 kernel: PCI host bridge to bus 0000:00 Dec 13 13:31:13.738347 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 13:31:13.738395 kernel: 
pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Dec 13 13:31:13.738438 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 13:31:13.738480 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 13:31:13.738547 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Dec 13 13:31:13.738591 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Dec 13 13:31:13.738649 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Dec 13 13:31:13.738703 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Dec 13 13:31:13.738760 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Dec 13 13:31:13.738813 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Dec 13 13:31:13.738863 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Dec 13 13:31:13.738912 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Dec 13 13:31:13.738960 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Dec 13 13:31:13.739009 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Dec 13 13:31:13.739060 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Dec 13 13:31:13.739117 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Dec 13 13:31:13.739167 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Dec 13 13:31:13.739215 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Dec 13 13:31:13.739267 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Dec 13 13:31:13.739316 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Dec 13 13:31:13.739367 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Dec 13 13:31:13.739421 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Dec 13 13:31:13.739469 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Dec 13 13:31:13.739533 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff 
pref] Dec 13 13:31:13.739584 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Dec 13 13:31:13.739632 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Dec 13 13:31:13.739681 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 13:31:13.739735 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Dec 13 13:31:13.739792 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.739842 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.739896 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.739945 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.739998 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.740047 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.740104 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.740153 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.740209 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.740258 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.740311 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.740360 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.740416 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.740465 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.740898 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.740957 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741013 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.741064 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741120 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Dec 13 
13:31:13.741172 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741226 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.741281 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741335 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.741387 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741440 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.741489 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741551 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.741600 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741653 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.741703 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741758 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.741808 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741861 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.741910 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.741962 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.742011 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.742066 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.742116 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.742168 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.742218 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.742270 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.742320 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.742377 kernel: pci 0000:00:17.5: [15ad:07a0] 
type 01 class 0x060400 Dec 13 13:31:13.742427 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.742479 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.744487 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.744571 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.744626 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.744685 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.744736 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.744789 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.744839 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.744893 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.744943 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.744996 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.745049 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.745102 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.745152 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.745209 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.745264 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.745319 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.745372 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.745426 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Dec 13 13:31:13.745475 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.745543 kernel: pci_bus 0000:01: extended config space not accessible Dec 13 13:31:13.745596 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 13:31:13.745646 kernel: pci_bus 
0000:02: extended config space not accessible Dec 13 13:31:13.745657 kernel: acpiphp: Slot [32] registered Dec 13 13:31:13.745663 kernel: acpiphp: Slot [33] registered Dec 13 13:31:13.745669 kernel: acpiphp: Slot [34] registered Dec 13 13:31:13.745675 kernel: acpiphp: Slot [35] registered Dec 13 13:31:13.745681 kernel: acpiphp: Slot [36] registered Dec 13 13:31:13.745687 kernel: acpiphp: Slot [37] registered Dec 13 13:31:13.745693 kernel: acpiphp: Slot [38] registered Dec 13 13:31:13.745698 kernel: acpiphp: Slot [39] registered Dec 13 13:31:13.745704 kernel: acpiphp: Slot [40] registered Dec 13 13:31:13.745711 kernel: acpiphp: Slot [41] registered Dec 13 13:31:13.745717 kernel: acpiphp: Slot [42] registered Dec 13 13:31:13.745723 kernel: acpiphp: Slot [43] registered Dec 13 13:31:13.745729 kernel: acpiphp: Slot [44] registered Dec 13 13:31:13.745735 kernel: acpiphp: Slot [45] registered Dec 13 13:31:13.745740 kernel: acpiphp: Slot [46] registered Dec 13 13:31:13.745756 kernel: acpiphp: Slot [47] registered Dec 13 13:31:13.745762 kernel: acpiphp: Slot [48] registered Dec 13 13:31:13.745768 kernel: acpiphp: Slot [49] registered Dec 13 13:31:13.745774 kernel: acpiphp: Slot [50] registered Dec 13 13:31:13.745781 kernel: acpiphp: Slot [51] registered Dec 13 13:31:13.745787 kernel: acpiphp: Slot [52] registered Dec 13 13:31:13.745792 kernel: acpiphp: Slot [53] registered Dec 13 13:31:13.745798 kernel: acpiphp: Slot [54] registered Dec 13 13:31:13.745804 kernel: acpiphp: Slot [55] registered Dec 13 13:31:13.745810 kernel: acpiphp: Slot [56] registered Dec 13 13:31:13.745816 kernel: acpiphp: Slot [57] registered Dec 13 13:31:13.745821 kernel: acpiphp: Slot [58] registered Dec 13 13:31:13.745827 kernel: acpiphp: Slot [59] registered Dec 13 13:31:13.745834 kernel: acpiphp: Slot [60] registered Dec 13 13:31:13.745840 kernel: acpiphp: Slot [61] registered Dec 13 13:31:13.745845 kernel: acpiphp: Slot [62] registered Dec 13 13:31:13.745851 kernel: acpiphp: Slot [63] registered 
Dec 13 13:31:13.745902 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Dec 13 13:31:13.745951 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Dec 13 13:31:13.745998 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Dec 13 13:31:13.746046 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 13 13:31:13.746094 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Dec 13 13:31:13.746145 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Dec 13 13:31:13.746193 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Dec 13 13:31:13.746242 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Dec 13 13:31:13.746290 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Dec 13 13:31:13.746345 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Dec 13 13:31:13.746396 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Dec 13 13:31:13.746446 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Dec 13 13:31:13.746498 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Dec 13 13:31:13.748581 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Dec 13 13:31:13.748637 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Dec 13 13:31:13.748690 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Dec 13 13:31:13.748740 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Dec 13 13:31:13.748789 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Dec 13 13:31:13.748840 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Dec 13 13:31:13.748893 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Dec 13 13:31:13.749011 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Dec 13 13:31:13.749358 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Dec 13 13:31:13.749426 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Dec 13 13:31:13.749480 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Dec 13 13:31:13.749550 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Dec 13 13:31:13.749601 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Dec 13 13:31:13.749653 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Dec 13 13:31:13.749706 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Dec 13 13:31:13.749755 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Dec 13 13:31:13.749805 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Dec 13 13:31:13.749853 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Dec 13 13:31:13.749903 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Dec 13 13:31:13.750069 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Dec 13 13:31:13.750130 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Dec 13 13:31:13.750180 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Dec 13 13:31:13.750231 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Dec 13 13:31:13.750279 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Dec 13 13:31:13.750328 kernel: pci 0000:00:15.6: bridge 
window [mem 0xe6400000-0xe64fffff 64bit pref] Dec 13 13:31:13.750377 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Dec 13 13:31:13.750428 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Dec 13 13:31:13.750477 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Dec 13 13:31:13.750543 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Dec 13 13:31:13.750604 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Dec 13 13:31:13.750655 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Dec 13 13:31:13.750704 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Dec 13 13:31:13.750754 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Dec 13 13:31:13.750804 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Dec 13 13:31:13.750857 kernel: pci 0000:0b:00.0: supports D1 D2 Dec 13 13:31:13.750907 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 13:31:13.750956 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Dec 13 13:31:13.751006 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Dec 13 13:31:13.751055 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Dec 13 13:31:13.751103 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Dec 13 13:31:13.751152 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Dec 13 13:31:13.751203 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Dec 13 13:31:13.751252 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Dec 13 13:31:13.751300 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Dec 13 13:31:13.751352 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Dec 13 13:31:13.751401 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Dec 13 13:31:13.751449 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Dec 13 13:31:13.751498 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Dec 13 13:31:13.751736 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Dec 13 13:31:13.751791 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Dec 13 13:31:13.751839 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Dec 13 13:31:13.751890 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Dec 13 13:31:13.751939 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Dec 13 13:31:13.751988 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Dec 13 13:31:13.752038 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Dec 13 13:31:13.752087 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Dec 13 13:31:13.752135 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Dec 13 13:31:13.752189 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Dec 13 13:31:13.752237 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Dec 13 13:31:13.752290 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Dec 13 13:31:13.752340 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Dec 13 13:31:13.752388 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Dec 13 13:31:13.752436 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 13 13:31:13.752487 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Dec 13 13:31:13.752559 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Dec 13 13:31:13.752614 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Dec 13 13:31:13.752662 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 13 13:31:13.752713 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Dec 13 13:31:13.752762 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Dec 13 13:31:13.752810 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Dec 13 13:31:13.752859 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Dec 13 13:31:13.752909 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Dec 13 13:31:13.752961 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Dec 13 13:31:13.753009 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Dec 13 13:31:13.753059 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Dec 13 13:31:13.753110 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Dec 13 13:31:13.753160 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Dec 13 13:31:13.753208 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 13 13:31:13.753259 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Dec 13 13:31:13.753308 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Dec 13 13:31:13.753359 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 13 13:31:13.753409 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Dec 13 13:31:13.753458 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Dec 13 13:31:13.753523 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Dec 13 13:31:13.753577 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Dec 13 13:31:13.753626 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Dec 13 13:31:13.753675 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Dec 13 13:31:13.753726 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Dec 13 13:31:13.753778 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Dec 13 13:31:13.753827 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Dec 13 13:31:13.753876 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Dec 13 13:31:13.753925 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Dec 13 13:31:13.753974 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Dec 13 13:31:13.754023 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Dec 13 13:31:13.754072 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Dec 13 13:31:13.754121 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Dec 13 13:31:13.754173 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Dec 13 13:31:13.754221 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Dec 13 13:31:13.754270 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Dec 13 13:31:13.754319 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Dec 13 13:31:13.754368 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Dec 13 13:31:13.754418 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Dec 13 13:31:13.754467 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Dec 13 13:31:13.755545 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Dec 13 13:31:13.755611 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Dec 13 
13:31:13.755665 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Dec 13 13:31:13.755715 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Dec 13 13:31:13.755766 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Dec 13 13:31:13.755816 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Dec 13 13:31:13.755865 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Dec 13 13:31:13.755916 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Dec 13 13:31:13.755965 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Dec 13 13:31:13.756017 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Dec 13 13:31:13.756068 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Dec 13 13:31:13.756117 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Dec 13 13:31:13.756166 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Dec 13 13:31:13.756175 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Dec 13 13:31:13.756182 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Dec 13 13:31:13.756188 kernel: ACPI: PCI: Interrupt link LNKB disabled Dec 13 13:31:13.756194 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 13:31:13.756202 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Dec 13 13:31:13.756208 kernel: iommu: Default domain type: Translated Dec 13 13:31:13.756214 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 13:31:13.756220 kernel: PCI: Using ACPI for IRQ routing Dec 13 13:31:13.756226 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 13:31:13.756232 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Dec 13 13:31:13.756238 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Dec 13 13:31:13.756286 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Dec 13 13:31:13.756335 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Dec 13 13:31:13.756387 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 13:31:13.756396 kernel: vgaarb: loaded Dec 13 13:31:13.756402 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Dec 13 13:31:13.756408 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Dec 13 13:31:13.756414 kernel: clocksource: Switched to clocksource tsc-early Dec 13 13:31:13.756420 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 13:31:13.756426 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 13:31:13.756432 kernel: pnp: PnP ACPI init Dec 13 13:31:13.756485 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Dec 13 13:31:13.757559 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Dec 13 13:31:13.757610 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Dec 13 13:31:13.757661 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Dec 13 13:31:13.757709 kernel: pnp 00:06: [dma 2] Dec 13 13:31:13.757758 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Dec 13 13:31:13.757802 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Dec 13 13:31:13.757849 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Dec 13 13:31:13.757858 kernel: pnp: PnP ACPI: found 8 devices Dec 13 13:31:13.757864 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 13:31:13.757871 kernel: NET: Registered PF_INET protocol family Dec 13 13:31:13.757877 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 13:31:13.757883 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 13:31:13.757889 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 13:31:13.757895 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 13:31:13.757902 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 13:31:13.757908 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 13:31:13.757914 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 13:31:13.757920 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 13:31:13.757926 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 13:31:13.757932 kernel: NET: Registered PF_XDP protocol family Dec 13 13:31:13.757984 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 13:31:13.758037 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 13 13:31:13.758091 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 13 13:31:13.758141 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 13 13:31:13.758191 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 13 13:31:13.758240 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Dec 13 13:31:13.758295 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Dec 13 13:31:13.758345 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Dec 13 13:31:13.758396 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Dec 13 13:31:13.758447 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Dec 13 13:31:13.758497 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Dec 13 13:31:13.759010 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Dec 13 13:31:13.759065 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Dec 13 
13:31:13.759118 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Dec 13 13:31:13.759172 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Dec 13 13:31:13.759222 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Dec 13 13:31:13.759272 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Dec 13 13:31:13.759322 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Dec 13 13:31:13.759372 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Dec 13 13:31:13.759422 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Dec 13 13:31:13.759474 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Dec 13 13:31:13.759582 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Dec 13 13:31:13.759634 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Dec 13 13:31:13.759683 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Dec 13 13:31:13.759732 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Dec 13 13:31:13.759782 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.759833 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.759882 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.759931 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.759981 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760030 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.760080 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760128 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Dec 
13 13:31:13.760177 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760229 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.760278 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760327 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.760376 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760425 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.760474 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760537 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.760587 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760639 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.760689 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760738 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.760787 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760836 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.760901 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.760948 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.760997 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761047 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.761094 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761142 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.761190 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761239 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.761291 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761340 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.761388 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761439 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.761487 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761582 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.761631 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761679 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.761727 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761774 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.761821 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761868 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.761938 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.761986 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762034 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.762082 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762130 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.762178 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762227 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.762275 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762322 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.762373 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762422 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Dec 13 13:31:13.762470 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762529 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.762580 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762629 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.762678 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762727 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.762776 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762827 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.762877 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.762925 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.762974 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763023 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763072 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763121 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763170 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763219 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763268 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763320 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763368 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763417 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763466 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763529 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763579 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763629 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763677 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763726 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763778 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763827 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763876 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.763924 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Dec 13 13:31:13.763973 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Dec 13 13:31:13.764023 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 13:31:13.764073 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Dec 13 13:31:13.764121 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Dec 13 13:31:13.764169 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Dec 13 13:31:13.764218 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 13 13:31:13.764276 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Dec 13 13:31:13.764326 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Dec 13 13:31:13.764376 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Dec 13 13:31:13.764425 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Dec 13 13:31:13.764474 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Dec 13 13:31:13.764582 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Dec 13 13:31:13.764634 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Dec 13 13:31:13.764683 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Dec 13 13:31:13.764735 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Dec 13 
13:31:13.764784 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Dec 13 13:31:13.764833 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Dec 13 13:31:13.764881 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Dec 13 13:31:13.764930 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Dec 13 13:31:13.764979 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Dec 13 13:31:13.765028 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Dec 13 13:31:13.765076 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Dec 13 13:31:13.765125 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Dec 13 13:31:13.765176 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Dec 13 13:31:13.765225 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Dec 13 13:31:13.765283 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Dec 13 13:31:13.765332 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Dec 13 13:31:13.765382 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Dec 13 13:31:13.765431 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Dec 13 13:31:13.765482 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Dec 13 13:31:13.765547 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Dec 13 13:31:13.765597 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Dec 13 13:31:13.765647 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Dec 13 13:31:13.765696 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Dec 13 13:31:13.765749 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Dec 13 13:31:13.765799 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Dec 13 13:31:13.765848 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Dec 13 13:31:13.765898 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Dec 13 13:31:13.765951 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Dec 13 13:31:13.766002 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Dec 13 13:31:13.766051 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Dec 13 13:31:13.766100 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Dec 13 13:31:13.766149 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Dec 13 13:31:13.766200 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Dec 13 13:31:13.766250 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Dec 13 13:31:13.766299 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Dec 13 13:31:13.766348 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Dec 13 13:31:13.766397 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Dec 13 13:31:13.766449 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Dec 13 13:31:13.766558 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Dec 13 13:31:13.766617 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Dec 13 13:31:13.766666 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Dec 13 13:31:13.766714 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Dec 13 13:31:13.766762 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Dec 13 13:31:13.766811 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Dec 13 13:31:13.766859 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Dec 13 13:31:13.766908 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Dec 13 13:31:13.766959 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Dec 13 13:31:13.767007 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Dec 13 13:31:13.767056 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Dec 13 13:31:13.767105 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Dec 13 13:31:13.767153 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 13 13:31:13.767204 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Dec 13 13:31:13.767253 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Dec 13 13:31:13.767300 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Dec 13 13:31:13.767349 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 13 13:31:13.767400 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Dec 13 13:31:13.767451 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Dec 13 13:31:13.769538 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Dec 13 13:31:13.769601 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Dec 13 13:31:13.769655 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Dec 13 13:31:13.769707 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Dec 13 13:31:13.769757 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Dec 13 13:31:13.769806 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Dec 13 13:31:13.769856 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Dec 13 13:31:13.769906 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Dec 13 13:31:13.769957 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 13 13:31:13.770007 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Dec 13 13:31:13.770056 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Dec 13 13:31:13.770105 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 13 13:31:13.770156 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Dec 13 13:31:13.770204 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Dec 13 13:31:13.770253 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Dec 13 
13:31:13.770303 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Dec 13 13:31:13.770352 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Dec 13 13:31:13.770402 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Dec 13 13:31:13.770454 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Dec 13 13:31:13.770530 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Dec 13 13:31:13.770585 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Dec 13 13:31:13.770636 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Dec 13 13:31:13.770685 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Dec 13 13:31:13.770734 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Dec 13 13:31:13.770783 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Dec 13 13:31:13.770833 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Dec 13 13:31:13.770882 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Dec 13 13:31:13.770934 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Dec 13 13:31:13.770983 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Dec 13 13:31:13.771032 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Dec 13 13:31:13.771081 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Dec 13 13:31:13.771131 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Dec 13 13:31:13.771180 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Dec 13 13:31:13.771230 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Dec 13 13:31:13.771289 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Dec 13 13:31:13.771340 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Dec 13 13:31:13.771389 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Dec 13 13:31:13.771441 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Dec 13 13:31:13.771492 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Dec 13 13:31:13.771555 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Dec 13 13:31:13.771605 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Dec 13 13:31:13.771718 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Dec 13 13:31:13.771976 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Dec 13 13:31:13.772046 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Dec 13 13:31:13.772125 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Dec 13 13:31:13.772178 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Dec 13 13:31:13.772230 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Dec 13 13:31:13.772281 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Dec 13 13:31:13.772326 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Dec 13 13:31:13.772369 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Dec 13 13:31:13.772412 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Dec 13 13:31:13.772456 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Dec 13 13:31:13.772550 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Dec 13 13:31:13.772599 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Dec 13 13:31:13.772646 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 13 13:31:13.772691 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Dec 13 13:31:13.772735 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Dec 13 13:31:13.772779 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Dec 13 13:31:13.772823 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Dec 13 13:31:13.772868 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Dec 13 13:31:13.772917 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Dec 13 13:31:13.772965 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Dec 13 13:31:13.773009 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Dec 13 13:31:13.773058 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Dec 13 13:31:13.773102 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Dec 13 13:31:13.773146 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Dec 13 13:31:13.773195 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Dec 13 13:31:13.773240 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Dec 13 13:31:13.773287 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Dec 13 13:31:13.773336 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Dec 13 13:31:13.773381 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Dec 13 13:31:13.773432 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Dec 13 13:31:13.773478 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Dec 13 13:31:13.773533 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Dec 13 13:31:13.773581 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Dec 13 13:31:13.773630 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Dec 13 13:31:13.773675 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Dec 13 13:31:13.773741 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Dec 13 13:31:13.773799 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Dec 13 13:31:13.773852 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Dec 13 13:31:13.773899 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Dec 13 13:31:13.773945 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Dec 13 13:31:13.773994 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Dec 13 13:31:13.774040 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Dec 13 13:31:13.774085 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Dec 13 13:31:13.774136 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Dec 13 13:31:13.774185 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Dec 13 13:31:13.774233 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Dec 13 13:31:13.774286 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Dec 13 13:31:13.774332 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Dec 13 13:31:13.774383 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Dec 13 13:31:13.774429 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Dec 13 13:31:13.774477 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Dec 13 13:31:13.775060 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Dec 13 13:31:13.775117 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Dec 13 13:31:13.775164 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Dec 13 13:31:13.775214 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Dec 13 13:31:13.775265 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 13 13:31:13.775315 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Dec 13 13:31:13.775363 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Dec 13 13:31:13.775409 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 13 13:31:13.775593 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Dec 13 13:31:13.775647 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Dec 13 13:31:13.775693 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Dec 13 13:31:13.775742 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Dec 13 13:31:13.775788 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Dec 13 13:31:13.775837 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Dec 13 13:31:13.775885 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Dec 13 13:31:13.775931 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 13 13:31:13.775982 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Dec 13 13:31:13.776030 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 13 13:31:13.776079 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Dec 13 13:31:13.776127 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Dec 13 13:31:13.776176 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Dec 13 13:31:13.776221 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Dec 13 13:31:13.776269 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Dec 13 13:31:13.776314 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Dec 13 13:31:13.776364 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Dec 13 13:31:13.776412 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Dec 13 13:31:13.776457 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Dec 13 13:31:13.776557 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Dec 13 13:31:13.776606 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Dec 13 13:31:13.776651 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Dec 13 13:31:13.776699 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Dec 13 13:31:13.776747 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Dec 13 13:31:13.776828 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Dec 13 13:31:13.776876 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Dec 13 13:31:13.776927 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Dec 13 13:31:13.776972 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Dec 13 13:31:13.777021 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Dec 13 13:31:13.777066 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Dec 13 13:31:13.777119 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Dec 13 13:31:13.777164 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Dec 13 13:31:13.777317 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Dec 13 13:31:13.777400 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Dec 13 13:31:13.777457 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 13:31:13.777467 kernel: PCI: CLS 32 bytes, default 64 Dec 13 13:31:13.777477 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 13:31:13.777484 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Dec 13 13:31:13.777490 kernel: clocksource: Switched to clocksource tsc Dec 13 13:31:13.777497 kernel: Initialise system trusted keyrings Dec 13 13:31:13.777510 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 13:31:13.777516 kernel: Key type asymmetric registered Dec 13 13:31:13.777523 kernel: Asymmetric key parser 'x509' registered Dec 13 13:31:13.777529 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 13:31:13.777563 kernel: io scheduler mq-deadline registered Dec 13 13:31:13.777572 kernel: io scheduler kyber registered Dec 13 13:31:13.777578 kernel: io scheduler bfq registered Dec 13 13:31:13.777635 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Dec 13 13:31:13.777688 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.777740 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Dec 13 13:31:13.777790 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.777841 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Dec 13 13:31:13.777892 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.777946 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Dec 13 13:31:13.777997 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.778048 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Dec 13 13:31:13.778098 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.778148 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Dec 13 13:31:13.778201 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.778251 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Dec 13 13:31:13.778302 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.778419 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Dec 13 13:31:13.778470 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.778545 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Dec 13 13:31:13.778602 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.778653 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Dec 13 13:31:13.778705 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.778756 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Dec 13 13:31:13.778806 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.778856 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Dec 13 13:31:13.778910 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.778964 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Dec 13 13:31:13.779014 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.779065 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Dec 13 13:31:13.779115 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.779166 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Dec 13 13:31:13.779219 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.779269 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Dec 13 13:31:13.779320 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.779371 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Dec 13 13:31:13.779421 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.779471 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Dec 13 13:31:13.779566 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.779618 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Dec 13 13:31:13.779667 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.779717 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Dec 13 13:31:13.779766 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.779816 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Dec 13 13:31:13.779869 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.779918 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Dec 13 13:31:13.779969 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.780018 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Dec 13 13:31:13.780067 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.780117 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Dec 13 13:31:13.780170 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.780220 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Dec 13 13:31:13.780269 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.780319 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Dec 13 13:31:13.780369 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.780419 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Dec 13 13:31:13.780472 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.782113 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Dec 13 13:31:13.782175 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.782230 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Dec 13 13:31:13.782283 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.782339 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Dec 13 13:31:13.782390 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.782441 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Dec 13 13:31:13.782491 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.782551 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Dec 13 13:31:13.782601 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 13:31:13.782614 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Dec 13 13:31:13.782621 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 13:31:13.782627 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 13:31:13.782634 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Dec 13 13:31:13.782640 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 13:31:13.782646 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 13:31:13.782697 kernel: rtc_cmos 00:01: registered as rtc0 Dec 13 13:31:13.782746 kernel: rtc_cmos 00:01: setting system clock to 2024-12-13T13:31:13 UTC (1734096673) Dec 13 13:31:13.782791 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Dec 13 13:31:13.782800 kernel: intel_pstate: CPU model not supported Dec 13 13:31:13.782806 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 13:31:13.782813 kernel: NET: Registered PF_INET6 protocol family Dec 13 13:31:13.782819 kernel: Segment Routing with IPv6 Dec 13 13:31:13.782826 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 13:31:13.782832 kernel: NET: Registered PF_PACKET protocol family Dec 13 13:31:13.782838 kernel: Key type dns_resolver registered Dec 13 13:31:13.782846 kernel: IPI shorthand broadcast: enabled Dec 13 13:31:13.782852 kernel: sched_clock: Marking stable (955166365, 224609646)->(1195476067, -15700056) Dec 13 13:31:13.782859 kernel: registered taskstats version 1 Dec 13 13:31:13.782865 kernel: Loading compiled-in X.509 certificates Dec 13 13:31:13.782871 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162' Dec 13 13:31:13.782878 kernel: Key type .fscrypt registered Dec 13 13:31:13.782884 kernel: Key type fscrypt-provisioning registered Dec 13 13:31:13.782890 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 13:31:13.782897 kernel: ima: Allocated hash algorithm: sha1 Dec 13 13:31:13.782903 kernel: ima: No architecture policies found Dec 13 13:31:13.782910 kernel: clk: Disabling unused clocks Dec 13 13:31:13.782916 kernel: Freeing unused kernel image (initmem) memory: 43328K Dec 13 13:31:13.782922 kernel: Write protecting the kernel read-only data: 38912k Dec 13 13:31:13.782929 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Dec 13 13:31:13.782935 kernel: Run /init as init process Dec 13 13:31:13.782942 kernel: with arguments: Dec 13 13:31:13.782948 kernel: /init Dec 13 13:31:13.782954 kernel: with environment: Dec 13 13:31:13.782961 kernel: HOME=/ Dec 13 13:31:13.782967 kernel: TERM=linux Dec 13 13:31:13.782973 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 13:31:13.782981 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:31:13.782989 systemd[1]: Detected virtualization vmware. Dec 13 13:31:13.782996 systemd[1]: Detected architecture x86-64. Dec 13 13:31:13.783002 systemd[1]: Running in initrd. Dec 13 13:31:13.783008 systemd[1]: No hostname configured, using default hostname. Dec 13 13:31:13.783016 systemd[1]: Hostname set to . Dec 13 13:31:13.783023 systemd[1]: Initializing machine ID from random generator. Dec 13 13:31:13.783029 systemd[1]: Queued start job for default target initrd.target. Dec 13 13:31:13.783036 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:31:13.783042 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 13 13:31:13.783050 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 13:31:13.783056 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:31:13.783064 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 13:31:13.783071 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 13:31:13.783079 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 13:31:13.783086 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 13:31:13.783092 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:31:13.783099 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:31:13.783106 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:31:13.783114 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:31:13.783120 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:31:13.783127 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:31:13.783133 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:31:13.783140 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:31:13.783147 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 13:31:13.783154 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 13:31:13.783160 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:31:13.783167 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:31:13.783175 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 13 13:31:13.783181 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:31:13.783188 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 13:31:13.783195 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:31:13.783201 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 13:31:13.783207 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 13:31:13.783214 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:31:13.783221 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:31:13.783229 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:31:13.783235 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 13:31:13.783253 systemd-journald[216]: Collecting audit messages is disabled. Dec 13 13:31:13.783269 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:31:13.783278 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 13:31:13.783285 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:31:13.783292 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:31:13.783298 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:31:13.783305 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 13:31:13.783313 kernel: Bridge firewalling registered Dec 13 13:31:13.783319 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:31:13.783326 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Dec 13 13:31:13.783333 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:31:13.783340 systemd-journald[216]: Journal started
Dec 13 13:31:13.783355 systemd-journald[216]: Runtime Journal (/run/log/journal/ccff4398c6bf457098231338991eb52a) is 4.8M, max 38.6M, 33.8M free.
Dec 13 13:31:13.748201 systemd-modules-load[217]: Inserted module 'overlay'
Dec 13 13:31:13.775491 systemd-modules-load[217]: Inserted module 'br_netfilter'
Dec 13 13:31:13.790769 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:31:13.790800 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:31:13.791426 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:31:13.792740 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:31:13.793812 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 13:31:13.795589 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:31:13.795809 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:31:13.802225 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:31:13.804062 dracut-cmdline[245]: dracut-dracut-053
Dec 13 13:31:13.806345 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:31:13.807967 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:31:13.823445 systemd-resolved[257]: Positive Trust Anchors:
Dec 13 13:31:13.823451 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:31:13.823474 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:31:13.825052 systemd-resolved[257]: Defaulting to hostname 'linux'.
Dec 13 13:31:13.825864 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:31:13.826121 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:31:13.852519 kernel: SCSI subsystem initialized
Dec 13 13:31:13.859738 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 13:31:13.864514 kernel: iscsi: registered transport (tcp)
Dec 13 13:31:13.877516 kernel: iscsi: registered transport (qla4xxx)
Dec 13 13:31:13.877568 kernel: QLogic iSCSI HBA Driver
Dec 13 13:31:13.897124 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:31:13.900599 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 13:31:13.914808 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 13:31:13.914848 kernel: device-mapper: uevent: version 1.0.3
Dec 13 13:31:13.915972 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 13:31:13.947522 kernel: raid6: avx2x4 gen() 47172 MB/s
Dec 13 13:31:13.963521 kernel: raid6: avx2x2 gen() 52709 MB/s
Dec 13 13:31:13.980707 kernel: raid6: avx2x1 gen() 44354 MB/s
Dec 13 13:31:13.980761 kernel: raid6: using algorithm avx2x2 gen() 52709 MB/s
Dec 13 13:31:13.998719 kernel: raid6: .... xor() 32194 MB/s, rmw enabled
Dec 13 13:31:13.998772 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 13:31:14.011517 kernel: xor: automatically using best checksumming function avx
Dec 13 13:31:14.105526 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 13:31:14.110528 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:31:14.114613 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:31:14.122728 systemd-udevd[434]: Using default interface naming scheme 'v255'.
Dec 13 13:31:14.125153 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:31:14.134631 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 13:31:14.141385 dracut-pre-trigger[436]: rd.md=0: removing MD RAID activation
Dec 13 13:31:14.156780 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:31:14.161619 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:31:14.233078 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:31:14.238606 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 13:31:14.249944 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:31:14.250999 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:31:14.251307 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:31:14.251577 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:31:14.255644 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 13:31:14.266838 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:31:14.304512 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Dec 13 13:31:14.308767 kernel: vmw_pvscsi: using 64bit dma
Dec 13 13:31:14.308797 kernel: vmw_pvscsi: max_id: 16
Dec 13 13:31:14.308806 kernel: vmw_pvscsi: setting ring_pages to 8
Dec 13 13:31:14.319620 kernel: vmw_pvscsi: enabling reqCallThreshold
Dec 13 13:31:14.319655 kernel: vmw_pvscsi: driver-based request coalescing enabled
Dec 13 13:31:14.319664 kernel: vmw_pvscsi: using MSI-X
Dec 13 13:31:14.319671 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Dec 13 13:31:14.320641 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI
Dec 13 13:31:14.322615 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Dec 13 13:31:14.337541 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 13:31:14.337559 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Dec 13 13:31:14.337662 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Dec 13 13:31:14.337759 kernel: libata version 3.00 loaded.
Dec 13 13:31:14.337768 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Dec 13 13:31:14.339713 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:31:14.339794 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:31:14.340414 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:31:14.340563 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:31:14.340633 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:31:14.340778 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:31:14.344522 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Dec 13 13:31:14.344691 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:31:14.347365 kernel: ata_piix 0000:00:07.1: version 2.13
Dec 13 13:31:14.357995 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 13:31:14.358010 kernel: AES CTR mode by8 optimization enabled
Dec 13 13:31:14.358023 kernel: scsi host1: ata_piix
Dec 13 13:31:14.358100 kernel: scsi host2: ata_piix
Dec 13 13:31:14.358159 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Dec 13 13:31:14.358168 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Dec 13 13:31:14.363797 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Dec 13 13:31:14.372903 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 13:31:14.372980 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Dec 13 13:31:14.373041 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Dec 13 13:31:14.373099 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Dec 13 13:31:14.373158 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 13:31:14.373168 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 13:31:14.373453 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:31:14.376594 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:31:14.388139 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:31:14.526536 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Dec 13 13:31:14.533536 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Dec 13 13:31:14.557091 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Dec 13 13:31:14.566286 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 13:31:14.566306 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 13:31:14.571517 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (488)
Dec 13 13:31:14.575082 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Dec 13 13:31:14.578902 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Dec 13 13:31:14.581890 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Dec 13 13:31:14.582539 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (477)
Dec 13 13:31:14.588809 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Dec 13 13:31:14.589097 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Dec 13 13:31:14.598612 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 13:31:14.624522 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 13:31:15.634546 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 13:31:15.634594 disk-uuid[587]: The operation has completed successfully.
Dec 13 13:31:15.783022 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 13:31:15.783091 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 13:31:15.788669 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 13:31:15.791120 sh[604]: Success
Dec 13 13:31:15.801534 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 13:31:15.961918 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 13:31:15.963590 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 13:31:15.963953 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 13:31:15.980520 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52
Dec 13 13:31:15.980562 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:31:15.980574 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 13:31:15.981701 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 13:31:15.982555 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 13:31:16.086530 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 13:31:16.089252 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 13:31:16.099710 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Dec 13 13:31:16.101731 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 13:31:16.175139 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:31:16.175181 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:31:16.175190 kernel: BTRFS info (device sda6): using free space tree
Dec 13 13:31:16.225564 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 13:31:16.235133 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 13:31:16.236528 kernel: BTRFS info (device sda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:31:16.244267 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 13:31:16.247634 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 13:31:16.325610 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Dec 13 13:31:16.331678 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 13:31:16.388181 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:31:16.393661 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:31:16.406388 systemd-networkd[793]: lo: Link UP
Dec 13 13:31:16.406394 systemd-networkd[793]: lo: Gained carrier
Dec 13 13:31:16.407598 systemd-networkd[793]: Enumeration completed
Dec 13 13:31:16.407863 systemd-networkd[793]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Dec 13 13:31:16.411316 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Dec 13 13:31:16.411445 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Dec 13 13:31:16.411431 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:31:16.411768 systemd-networkd[793]: ens192: Link UP
Dec 13 13:31:16.411770 systemd-networkd[793]: ens192: Gained carrier
Dec 13 13:31:16.411954 systemd[1]: Reached target network.target - Network.
Dec 13 13:31:16.443096 ignition[665]: Ignition 2.20.0
Dec 13 13:31:16.443111 ignition[665]: Stage: fetch-offline
Dec 13 13:31:16.443157 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:31:16.443167 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 13:31:16.443250 ignition[665]: parsed url from cmdline: ""
Dec 13 13:31:16.443254 ignition[665]: no config URL provided
Dec 13 13:31:16.443258 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:31:16.443266 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:31:16.443812 ignition[665]: config successfully fetched
Dec 13 13:31:16.443841 ignition[665]: parsing config with SHA512: 5b04d72a530caa1019f1231c71478a7dd01f2f4aa5b74fc8aa345e22055dff7e9f3bca7b4cb56b0510269c72e7201f102eb9ab2ce0af206a7b33496d45651226
Dec 13 13:31:16.447122 unknown[665]: fetched base config from "system"
Dec 13 13:31:16.447134 unknown[665]: fetched user config from "vmware"
Dec 13 13:31:16.447411 ignition[665]: fetch-offline: fetch-offline passed
Dec 13 13:31:16.447491 ignition[665]: Ignition finished successfully
Dec 13 13:31:16.448316 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:31:16.448550 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 13:31:16.452629 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 13:31:16.460948 ignition[801]: Ignition 2.20.0
Dec 13 13:31:16.460955 ignition[801]: Stage: kargs
Dec 13 13:31:16.461069 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:31:16.461076 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 13:31:16.461686 ignition[801]: kargs: kargs passed
Dec 13 13:31:16.461722 ignition[801]: Ignition finished successfully
Dec 13 13:31:16.462989 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 13:31:16.468653 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 13:31:16.477214 ignition[807]: Ignition 2.20.0
Dec 13 13:31:16.477224 ignition[807]: Stage: disks
Dec 13 13:31:16.477344 ignition[807]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:31:16.477350 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 13:31:16.477971 ignition[807]: disks: disks passed
Dec 13 13:31:16.478005 ignition[807]: Ignition finished successfully
Dec 13 13:31:16.478663 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 13:31:16.479083 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 13:31:16.479233 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 13:31:16.479424 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:31:16.479640 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:31:16.479813 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:31:16.483604 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 13:31:16.551604 systemd-fsck[816]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 13:31:16.552894 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 13:31:16.558621 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 13:31:16.667183 kernel: EXT4-fs (sda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none.
Dec 13 13:31:16.666660 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 13:31:16.667107 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:31:16.679593 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:31:16.683895 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 13:31:16.684182 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 13:31:16.684210 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 13:31:16.684226 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:31:16.688954 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 13:31:16.689611 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 13:31:16.708525 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (824)
Dec 13 13:31:16.725412 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:31:16.725468 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:31:16.725483 kernel: BTRFS info (device sda6): using free space tree
Dec 13 13:31:16.860528 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 13:31:16.870574 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:31:17.076857 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 13:31:17.092988 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory
Dec 13 13:31:17.103280 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 13:31:17.105659 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 13:31:17.220754 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 13:31:17.224565 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 13:31:17.227094 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 13:31:17.231172 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 13:31:17.232531 kernel: BTRFS info (device sda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:31:17.256095 ignition[936]: INFO : Ignition 2.20.0
Dec 13 13:31:17.256392 ignition[936]: INFO : Stage: mount
Dec 13 13:31:17.256719 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:31:17.256851 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 13:31:17.257570 ignition[936]: INFO : mount: mount passed
Dec 13 13:31:17.257717 ignition[936]: INFO : Ignition finished successfully
Dec 13 13:31:17.258273 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 13:31:17.263570 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 13:31:17.268035 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:31:17.313734 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (944)
Dec 13 13:31:17.313772 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:31:17.316803 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:31:17.316835 kernel: BTRFS info (device sda6): using free space tree
Dec 13 13:31:17.321519 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 13:31:17.323691 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 13:31:17.323952 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:31:17.338915 ignition[965]: INFO : Ignition 2.20.0
Dec 13 13:31:17.338915 ignition[965]: INFO : Stage: files
Dec 13 13:31:17.339614 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:31:17.339614 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 13:31:17.339878 ignition[965]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 13:31:17.347258 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 13:31:17.347258 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 13:31:17.368986 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 13:31:17.369199 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 13:31:17.369328 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 13:31:17.369262 unknown[965]: wrote ssh authorized keys file for user: core
Dec 13 13:31:17.403947 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 13:31:17.404264 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 13:31:17.439271 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 13:31:17.524242 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 13:31:17.525021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 13:31:17.525021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 13:31:17.525021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:31:17.525021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:31:17.525021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:31:17.525021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:31:17.525021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:31:17.525021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:31:17.525021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:31:17.526563 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:31:17.526563 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:31:17.526563 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:31:17.526563 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:31:17.526563 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 13:31:17.672700 systemd-networkd[793]: ens192: Gained IPv6LL
Dec 13 13:31:18.008462 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 13:31:18.331236 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:31:18.331236 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Dec 13 13:31:18.331755 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Dec 13 13:31:18.331755 ignition[965]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 13:31:18.332467 ignition[965]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:31:18.332679 ignition[965]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:31:18.332679 ignition[965]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 13:31:18.332679 ignition[965]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 13 13:31:18.332679 ignition[965]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:31:18.333269 ignition[965]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:31:18.333269 ignition[965]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 13 13:31:18.333269 ignition[965]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:31:18.549666 ignition[965]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:31:18.553384 ignition[965]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:31:18.553651 ignition[965]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:31:18.553651 ignition[965]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 13:31:18.553651 ignition[965]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 13:31:18.554171 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:31:18.554171 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:31:18.554171 ignition[965]: INFO : files: files passed
Dec 13 13:31:18.554171 ignition[965]: INFO : Ignition finished successfully
Dec 13 13:31:18.555056 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 13:31:18.559714 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 13:31:18.562072 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 13:31:18.562776 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 13:31:18.563010 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 13:31:18.572653 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:31:18.573049 initrd-setup-root-after-ignition[997]: grep:
Dec 13 13:31:18.573431 initrd-setup-root-after-ignition[1001]: grep:
Dec 13 13:31:18.573598 initrd-setup-root-after-ignition[997]: /sysroot/usr/share/flatcar/enabled-sysext.conf
Dec 13 13:31:18.573766 initrd-setup-root-after-ignition[1001]: /sysroot/etc/flatcar/enabled-sysext.conf
Dec 13 13:31:18.573766 initrd-setup-root-after-ignition[997]: : No such file or directory
Dec 13 13:31:18.573611 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:31:18.574411 initrd-setup-root-after-ignition[1001]: : No such file or directory
Dec 13 13:31:18.574215 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 13:31:18.578672 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 13:31:18.603127 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 13:31:18.603210 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 13:31:18.603752 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 13:31:18.604047 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 13:31:18.604335 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 13:31:18.605061 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 13:31:18.616021 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:31:18.621602 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 13:31:18.628350 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:31:18.628528 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:31:18.628696 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 13:31:18.628881 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 13:31:18.628950 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:31:18.629157 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 13:31:18.629382 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 13:31:18.629581 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 13:31:18.629785 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:31:18.630118 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 13:31:18.630317 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 13:31:18.630520 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:31:18.630731 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 13:31:18.630920 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 13:31:18.631115 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 13:31:18.631285 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 13:31:18.631349 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:31:18.631604 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:31:18.631875 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:31:18.632027 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 13:31:18.632073 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:31:18.632233 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 13:31:18.632335 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:31:18.632550 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 13:31:18.632610 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:31:18.632849 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 13:31:18.632983 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 13:31:18.636524 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:31:18.636691 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 13:31:18.636890 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 13:31:18.637068 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 13:31:18.637134 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:31:18.637335 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 13:31:18.637379 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:31:18.637622 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 13:31:18.637681 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:31:18.637928 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 13:31:18.637984 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 13:31:18.646700 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 13:31:18.649538 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 13:31:18.649656 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 13:31:18.649751 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:31:18.650023 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 13:31:18.650100 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:31:18.652135 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 13:31:18.652195 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 13:31:18.656247 ignition[1021]: INFO : Ignition 2.20.0
Dec 13 13:31:18.656599 ignition[1021]: INFO : Stage: umount
Dec 13 13:31:18.656786 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:31:18.657060 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 13:31:18.657650 ignition[1021]: INFO : umount: umount passed
Dec 13 13:31:18.658160 ignition[1021]: INFO : Ignition finished successfully
Dec 13 13:31:18.658475 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 13:31:18.658558 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 13:31:18.658807 systemd[1]: Stopped target network.target - Network.
Dec 13 13:31:18.658905 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 13:31:18.658931 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 13:31:18.659074 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 13:31:18.659095 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 13:31:18.659233 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 13:31:18.659254 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 13:31:18.659396 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 13:31:18.659415 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 13:31:18.659634 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 13:31:18.659907 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 13:31:18.664241 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 13:31:18.664413 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 13:31:18.664821 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 13:31:18.664870 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 13:31:18.666301 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 13:31:18.666324 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:31:18.669573 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 13:31:18.669668 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 13:31:18.669696 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:31:18.669828 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Dec 13 13:31:18.669850 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Dec 13 13:31:18.669968 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:31:18.669990 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:31:18.670096 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 13:31:18.670118 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:31:18.670227 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 13:31:18.670246 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:31:18.670400 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:31:18.676434 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 13:31:18.676507 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 13:31:18.682901 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 13:31:18.682989 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:31:18.683329 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 13:31:18.683365 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:31:18.683580 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 13:31:18.683599 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:31:18.683808 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 13:31:18.683836 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:31:18.684106 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 13:31:18.684131 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:31:18.684418 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:31:18.684442 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:31:18.693737 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 13:31:18.693907 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 13:31:18.693951 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:31:18.694097 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 13:31:18.694119 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:31:18.694244 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 13:31:18.694270 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:31:18.694396 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:31:18.694419 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:31:18.695289 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 13:31:18.698889 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 13:31:18.698979 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 13:31:18.982418 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 13:31:18.982487 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 13:31:18.982838 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 13:31:18.982986 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 13:31:18.983022 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 13:31:18.987629 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 13:31:19.003183 systemd[1]: Switching root.
Dec 13 13:31:19.050773 systemd-journald[216]: Journal stopped
Dec 13 13:31:21.379717 systemd-journald[216]: Received SIGTERM from PID 1 (systemd).
Dec 13 13:31:21.379759 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 13:31:21.379772 kernel: SELinux: policy capability open_perms=1
Dec 13 13:31:21.379781 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 13:31:21.379791 kernel: SELinux: policy capability always_check_network=0
Dec 13 13:31:21.379800 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 13:31:21.379814 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 13:31:21.379824 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 13:31:21.379833 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 13:31:21.379843 kernel: audit: type=1403 audit(1734096679.851:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 13:31:21.379853 systemd[1]: Successfully loaded SELinux policy in 35.426ms.
Dec 13 13:31:21.379865 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.928ms.
Dec 13 13:31:21.379878 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:31:21.379893 systemd[1]: Detected virtualization vmware.
Dec 13 13:31:21.379905 systemd[1]: Detected architecture x86-64.
Dec 13 13:31:21.379916 systemd[1]: Detected first boot.
Dec 13 13:31:21.379928 systemd[1]: Initializing machine ID from random generator.
Dec 13 13:31:21.379941 zram_generator::config[1064]: No configuration found.
Dec 13 13:31:21.379953 systemd[1]: Populated /etc with preset unit settings.
Dec 13 13:31:21.379966 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Dec 13 13:31:21.379979 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Dec 13 13:31:21.379991 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 13:31:21.380002 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 13:31:21.380014 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:31:21.380026 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 13:31:21.380042 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 13:31:21.380053 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 13:31:21.380066 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 13:31:21.380079 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 13:31:21.380090 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 13:31:21.380101 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 13:31:21.380112 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 13:31:21.380126 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:31:21.380138 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:31:21.380150 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 13:31:21.380161 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 13:31:21.380172 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 13:31:21.380184 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:31:21.380195 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 13:31:21.380207 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:31:21.380222 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 13:31:21.380236 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 13:31:21.380248 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:31:21.380261 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 13:31:21.380273 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:31:21.380285 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:31:21.380298 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:31:21.380312 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:31:21.380324 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 13:31:21.380336 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 13:31:21.381249 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:31:21.381279 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:31:21.381299 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:31:21.381312 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 13:31:21.381325 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 13:31:21.381336 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 13:31:21.381349 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 13:31:21.381362 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:31:21.381375 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 13:31:21.381390 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 13:31:21.381405 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 13:31:21.381419 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 13:31:21.381432 systemd[1]: Reached target machines.target - Containers.
Dec 13 13:31:21.381444 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 13:31:21.381457 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Dec 13 13:31:21.381467 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:31:21.381478 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 13:31:21.381489 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:31:21.381534 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:31:21.381553 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:31:21.381566 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 13:31:21.381578 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:31:21.381593 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 13:31:21.381605 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 13:31:21.381616 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 13:31:21.381628 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 13:31:21.381641 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 13:31:21.381655 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:31:21.381667 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:31:21.381680 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 13:31:21.381692 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 13:31:21.381704 kernel: fuse: init (API version 7.39)
Dec 13 13:31:21.381714 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:31:21.381726 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 13:31:21.381737 systemd[1]: Stopped verity-setup.service.
Dec 13 13:31:21.381748 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:31:21.381762 kernel: loop: module loaded
Dec 13 13:31:21.381773 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 13:31:21.381785 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 13:31:21.381798 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 13:31:21.381810 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 13:31:21.381846 systemd-journald[1144]: Collecting audit messages is disabled.
Dec 13 13:31:21.381876 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 13:31:21.381891 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 13:31:21.381904 systemd-journald[1144]: Journal started
Dec 13 13:31:21.381927 systemd-journald[1144]: Runtime Journal (/run/log/journal/ea171f2c654e439792513bd49dcf90fc) is 4.8M, max 38.6M, 33.8M free.
Dec 13 13:31:21.150975 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 13:31:21.191237 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 13 13:31:21.191534 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 13:31:21.384001 jq[1131]: true
Dec 13 13:31:21.384524 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:31:21.387569 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:31:21.388118 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 13:31:21.388241 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 13:31:21.388539 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:31:21.388655 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:31:21.388913 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:31:21.389003 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:31:21.389277 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 13:31:21.389365 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 13:31:21.389676 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:31:21.389820 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:31:21.390095 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:31:21.390393 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 13:31:21.390683 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 13:31:21.402396 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 13:31:21.403089 jq[1163]: true
Dec 13 13:31:21.413762 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 13:31:21.416714 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 13:31:21.416881 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 13:31:21.416915 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:31:21.417776 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 13:31:21.420898 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 13:31:21.423428 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 13:31:21.423660 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:31:21.435657 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 13:31:21.443723 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 13:31:21.443894 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:31:21.447592 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 13:31:21.447765 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:31:21.448827 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:31:21.450609 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 13:31:21.451841 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:31:21.454855 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 13:31:21.455051 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 13:31:21.455350 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 13:31:21.459741 kernel: ACPI: bus type drm_connector registered
Dec 13 13:31:21.461782 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:31:21.462242 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:31:21.462719 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 13:31:21.465184 systemd-journald[1144]: Time spent on flushing to /var/log/journal/ea171f2c654e439792513bd49dcf90fc is 42.028ms for 1838 entries.
Dec 13 13:31:21.465184 systemd-journald[1144]: System Journal (/var/log/journal/ea171f2c654e439792513bd49dcf90fc) is 8.0M, max 584.8M, 576.8M free.
Dec 13 13:31:21.573630 systemd-journald[1144]: Received client request to flush runtime journal.
Dec 13 13:31:21.573670 kernel: loop0: detected capacity change from 0 to 138184
Dec 13 13:31:21.491760 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 13:31:21.491978 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 13:31:21.499666 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 13:31:21.524858 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:31:21.577118 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 13:31:21.586024 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:31:21.586933 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Dec 13 13:31:21.586942 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Dec 13 13:31:21.596613 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 13:31:21.596914 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:31:21.601929 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 13:31:21.611726 udevadm[1217]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 13:31:21.644316 ignition[1200]: Ignition 2.20.0
Dec 13 13:31:21.644807 ignition[1200]: deleting config from guestinfo properties
Dec 13 13:31:21.711478 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 13:31:21.712948 ignition[1200]: Successfully deleted config
Dec 13 13:31:21.714239 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Dec 13 13:31:21.714650 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 13:31:21.772273 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 13:31:21.772792 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 13:31:21.780912 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:31:21.805533 kernel: loop1: detected capacity change from 0 to 141000
Dec 13 13:31:21.814447 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Dec 13 13:31:21.814465 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Dec 13 13:31:21.821993 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:31:21.893524 kernel: loop2: detected capacity change from 0 to 2960
Dec 13 13:31:22.047647 kernel: loop3: detected capacity change from 0 to 211296
Dec 13 13:31:22.130536 kernel: loop4: detected capacity change from 0 to 138184
Dec 13 13:31:22.305547 kernel: loop5: detected capacity change from 0 to 141000
Dec 13 13:31:22.334520 kernel: loop6: detected capacity change from 0 to 2960
Dec 13 13:31:22.392523 kernel: loop7: detected capacity change from 0 to 211296
Dec 13 13:31:22.599633 (sd-merge)[1238]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Dec 13 13:31:22.600040 (sd-merge)[1238]: Merged extensions into '/usr'.
Dec 13 13:31:22.602638 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 13:31:22.612434 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:31:22.612990 systemd[1]: Reloading requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 13:31:22.613001 systemd[1]: Reloading...
Dec 13 13:31:22.632346 systemd-udevd[1240]: Using default interface naming scheme 'v255'.
Dec 13 13:31:22.678530 zram_generator::config[1266]: No configuration found.
Dec 13 13:31:22.740803 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Dec 13 13:31:22.756916 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:31:22.787617 systemd[1]: Reloading finished in 173 ms.
Dec 13 13:31:22.812028 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 13:31:22.819741 systemd[1]: Starting ensure-sysext.service...
Dec 13 13:31:22.820835 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:31:22.836035 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 13:31:22.836471 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 13:31:22.837163 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 13:31:22.837430 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
Dec 13 13:31:22.837533 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
Dec 13 13:31:22.839530 systemd[1]: Reloading requested from client PID 1321 ('systemctl') (unit ensure-sysext.service)...
Dec 13 13:31:22.839543 systemd[1]: Reloading...
Dec 13 13:31:22.853277 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:31:22.853284 systemd-tmpfiles[1322]: Skipping /boot
Dec 13 13:31:22.860824 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:31:22.860834 systemd-tmpfiles[1322]: Skipping /boot
Dec 13 13:31:22.891524 zram_generator::config[1348]: No configuration found.
Dec 13 13:31:22.987218 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Dec 13 13:31:23.012217 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:31:23.038515 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 13:31:23.042514 kernel: ACPI: button: Power Button [PWRF]
Dec 13 13:31:23.066218 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 13:31:23.066430 systemd[1]: Reloading finished in 226 ms.
Dec 13 13:31:23.075267 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:31:23.080129 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:31:23.091237 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:31:23.095713 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:31:23.101638 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 13:31:23.103602 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:31:23.110941 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:31:23.113845 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:31:23.116631 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:31:23.116823 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:31:23.118105 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 13:31:23.120882 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:31:23.128687 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:31:23.130676 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 13:31:23.131052 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:31:23.133493 systemd[1]: Finished ensure-sysext.service.
Dec 13 13:31:23.133820 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:31:23.133957 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:31:23.134273 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:31:23.134373 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:31:23.156230 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 13:31:23.157420 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Dec 13 13:31:23.156605 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:31:23.156744 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:31:23.159167 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:31:23.164095 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 13:31:23.171326 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:31:23.171464 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:31:23.171824 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:31:23.182577 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1400)
Dec 13 13:31:23.185813 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 13:31:23.188571 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1390)
Dec 13 13:31:23.193571 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Dec 13 13:31:23.198224 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1400)
Dec 13 13:31:23.198730 kernel: Guest personality initialized and is active
Dec 13 13:31:23.200157 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Dec 13 13:31:23.200192 kernel: Initialized host personality
Dec 13 13:31:23.241510 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 13:31:23.240919 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 13:31:23.243858 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 13:31:23.247713 (udev-worker)[1408]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Dec 13 13:31:23.257153 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:31:23.264549 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 13:31:23.300308 augenrules[1485]: No rules
Dec 13 13:31:23.301202 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:31:23.301325 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:31:23.317414 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Dec 13 13:31:23.323621 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 13:31:23.323793 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 13:31:23.323945 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 13:31:23.334954 systemd-resolved[1441]: Positive Trust Anchors: Dec 13 13:31:23.334961 systemd-resolved[1441]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:31:23.334984 systemd-resolved[1441]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:31:23.356474 systemd-resolved[1441]: Defaulting to hostname 'linux'. Dec 13 13:31:23.357582 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:31:23.358858 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 13:31:23.359333 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:31:23.364105 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 13:31:23.368755 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 13:31:23.388257 lvm[1498]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Dec 13 13:31:23.385994 systemd-networkd[1439]: lo: Link UP Dec 13 13:31:23.385997 systemd-networkd[1439]: lo: Gained carrier Dec 13 13:31:23.389743 systemd-networkd[1439]: Enumeration completed Dec 13 13:31:23.389924 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:31:23.390108 systemd[1]: Reached target network.target - Network. Dec 13 13:31:23.390292 systemd-networkd[1439]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Dec 13 13:31:23.393858 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Dec 13 13:31:23.394036 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Dec 13 13:31:23.394224 systemd-networkd[1439]: ens192: Link UP Dec 13 13:31:23.394370 systemd-networkd[1439]: ens192: Gained carrier Dec 13 13:31:23.398665 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 13:31:23.399477 systemd-timesyncd[1451]: Network configuration changed, trying to establish connection. Dec 13 13:31:23.417586 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 13:31:23.417878 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:31:23.422843 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 13:31:23.425974 lvm[1501]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:31:23.461649 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 13:31:23.500835 ldconfig[1190]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 13:31:23.544157 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 13:31:23.548677 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 13:31:23.565366 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Dec 13 13:31:23.595122 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 13:31:23.595752 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 13:31:23.608447 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:31:23.609111 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:31:23.609326 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 13:31:23.609476 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 13:31:23.609739 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 13:31:23.609909 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 13:31:23.610047 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 13:31:23.610163 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 13:31:23.610190 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:31:23.610285 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:31:23.611250 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 13:31:23.612738 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 13:31:23.617127 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 13:31:23.617797 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 13:31:23.617973 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:31:23.618076 systemd[1]: Reached target basic.target - Basic System. 
Dec 13 13:31:23.618201 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:31:23.618232 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:31:23.619236 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 13:31:23.622633 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 13:31:23.624592 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 13:31:23.626634 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 13:31:23.627566 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 13:31:23.628625 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 13:31:23.632590 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 13:31:23.633635 jq[1514]: false Dec 13 13:31:23.634612 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 13:31:23.638641 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 13:31:23.641361 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 13:31:23.641764 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 13:31:23.642335 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 13:31:23.644061 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 13:31:23.647589 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Dec 13 13:31:23.649136 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Dec 13 13:31:23.650983 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 13:31:23.651661 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 13:31:23.657781 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 13:31:23.657894 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 13:31:23.673294 jq[1523]: true Dec 13 13:31:23.680875 jq[1532]: true Dec 13 13:31:23.688716 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Dec 13 13:31:23.694666 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Dec 13 13:31:23.700594 update_engine[1522]: I20241213 13:31:23.699863 1522 main.cc:92] Flatcar Update Engine starting Dec 13 13:31:23.703550 extend-filesystems[1515]: Found loop4 Dec 13 13:31:23.703550 extend-filesystems[1515]: Found loop5 Dec 13 13:31:23.703550 extend-filesystems[1515]: Found loop6 Dec 13 13:31:23.703550 extend-filesystems[1515]: Found loop7 Dec 13 13:31:23.703550 extend-filesystems[1515]: Found sda Dec 13 13:31:23.703550 extend-filesystems[1515]: Found sda1 Dec 13 13:31:23.703550 extend-filesystems[1515]: Found sda2 Dec 13 13:31:23.703550 extend-filesystems[1515]: Found sda3 Dec 13 13:31:23.703550 extend-filesystems[1515]: Found usr Dec 13 13:31:23.703550 extend-filesystems[1515]: Found sda4 Dec 13 13:31:23.703550 extend-filesystems[1515]: Found sda6 Dec 13 13:31:23.703550 extend-filesystems[1515]: Found sda7 Dec 13 13:31:23.703550 extend-filesystems[1515]: Found sda9 Dec 13 13:31:23.703550 extend-filesystems[1515]: Checking size of /dev/sda9 Dec 13 13:31:23.701714 (ntainerd)[1540]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 13:31:23.704114 systemd-logind[1520]: 
Watching system buttons on /dev/input/event1 (Power Button) Dec 13 13:31:23.704127 systemd-logind[1520]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 13:31:23.705996 systemd-logind[1520]: New seat seat0. Dec 13 13:31:23.707583 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 13:31:23.716718 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Dec 13 13:31:23.718810 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 13:31:23.718924 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 13:31:23.724557 tar[1526]: linux-amd64/helm Dec 13 13:31:23.736140 unknown[1542]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Dec 13 13:31:23.737847 unknown[1542]: Core dump limit set to -1 Dec 13 13:31:23.758524 kernel: NET: Registered PF_VSOCK protocol family Dec 13 13:31:23.770730 extend-filesystems[1515]: Old size kept for /dev/sda9 Dec 13 13:31:23.770730 extend-filesystems[1515]: Found sr0 Dec 13 13:31:23.771251 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 13:31:23.773372 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 13:31:23.797776 bash[1567]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:31:23.799620 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 13:31:23.800893 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 13:31:23.811765 dbus-daemon[1513]: [system] SELinux support is enabled Dec 13 13:31:23.813197 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 13:31:23.821207 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Dec 13 13:31:23.821238 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 13:31:23.822582 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 13:31:23.822599 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 13:31:23.829125 dbus-daemon[1513]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 13:31:23.832279 systemd[1]: Started update-engine.service - Update Engine. Dec 13 13:31:23.837993 update_engine[1522]: I20241213 13:31:23.836313 1522 update_check_scheduler.cc:74] Next update check in 9m30s Dec 13 13:31:23.839520 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1406) Dec 13 13:31:23.842763 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 13:31:23.978642 locksmithd[1579]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 13:31:24.150873 sshd_keygen[1555]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 13:31:24.177775 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 13:31:24.184893 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 13:31:24.192108 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 13:31:24.192230 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 13:31:24.193530 containerd[1540]: time="2024-12-13T13:31:24.193053042Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Dec 13 13:31:24.201728 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 13:31:24.217201 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 13:31:24.221693 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Dec 13 13:31:24.223166 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 13:31:24.223476 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 13:31:24.227789 containerd[1540]: time="2024-12-13T13:31:24.227766369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:31:24.228636 containerd[1540]: time="2024-12-13T13:31:24.228620457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:31:24.228706 containerd[1540]: time="2024-12-13T13:31:24.228679324Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 13:31:24.228744 containerd[1540]: time="2024-12-13T13:31:24.228736258Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 13:31:24.228859 containerd[1540]: time="2024-12-13T13:31:24.228850004Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 13:31:24.228895 containerd[1540]: time="2024-12-13T13:31:24.228888247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 13:31:24.229363 containerd[1540]: time="2024-12-13T13:31:24.228948882Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:31:24.229363 containerd[1540]: time="2024-12-13T13:31:24.228958471Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 13:31:24.229363 containerd[1540]: time="2024-12-13T13:31:24.229056064Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:31:24.229363 containerd[1540]: time="2024-12-13T13:31:24.229064522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 13:31:24.229363 containerd[1540]: time="2024-12-13T13:31:24.229071765Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:31:24.229363 containerd[1540]: time="2024-12-13T13:31:24.229076714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 13:31:24.229363 containerd[1540]: time="2024-12-13T13:31:24.229117101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:31:24.229363 containerd[1540]: time="2024-12-13T13:31:24.229222467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:31:24.229363 containerd[1540]: time="2024-12-13T13:31:24.229273895Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:31:24.229363 containerd[1540]: time="2024-12-13T13:31:24.229281689Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Dec 13 13:31:24.229363 containerd[1540]: time="2024-12-13T13:31:24.229320859Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 13:31:24.229539 containerd[1540]: time="2024-12-13T13:31:24.229346592Z" level=info msg="metadata content store policy set" policy=shared Dec 13 13:31:24.239886 tar[1526]: linux-amd64/LICENSE Dec 13 13:31:24.239933 tar[1526]: linux-amd64/README.md Dec 13 13:31:24.250161 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 13:31:24.269526 containerd[1540]: time="2024-12-13T13:31:24.268602977Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 13:31:24.269526 containerd[1540]: time="2024-12-13T13:31:24.268661754Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 13:31:24.269526 containerd[1540]: time="2024-12-13T13:31:24.268677443Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 13:31:24.269526 containerd[1540]: time="2024-12-13T13:31:24.268688847Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 13:31:24.269526 containerd[1540]: time="2024-12-13T13:31:24.268697662Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 13:31:24.269526 containerd[1540]: time="2024-12-13T13:31:24.268779821Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 13:31:24.269526 containerd[1540]: time="2024-12-13T13:31:24.268918611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 13:31:24.269526 containerd[1540]: time="2024-12-13T13:31:24.268975355Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Dec 13 13:31:24.269526 containerd[1540]: time="2024-12-13T13:31:24.268985389Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 13:31:24.269526 containerd[1540]: time="2024-12-13T13:31:24.268993998Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 13:31:24.269526 containerd[1540]: time="2024-12-13T13:31:24.269001602Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 13:31:24.269526 containerd[1540]: time="2024-12-13T13:31:24.269008904Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 13:31:24.269526 containerd[1540]: time="2024-12-13T13:31:24.269015703Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 13:31:24.269526 containerd[1540]: time="2024-12-13T13:31:24.269024032Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 13:31:24.269817 containerd[1540]: time="2024-12-13T13:31:24.269031561Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 13:31:24.269817 containerd[1540]: time="2024-12-13T13:31:24.269038492Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 13:31:24.269817 containerd[1540]: time="2024-12-13T13:31:24.269045900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 13:31:24.269817 containerd[1540]: time="2024-12-13T13:31:24.269055491Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Dec 13 13:31:24.269817 containerd[1540]: time="2024-12-13T13:31:24.269066835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 13:31:24.269817 containerd[1540]: time="2024-12-13T13:31:24.269074193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 13:31:24.269817 containerd[1540]: time="2024-12-13T13:31:24.269081781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 13:31:24.269817 containerd[1540]: time="2024-12-13T13:31:24.269089148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 13:31:24.269817 containerd[1540]: time="2024-12-13T13:31:24.269095974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 13:31:24.269817 containerd[1540]: time="2024-12-13T13:31:24.269102560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 13:31:24.269817 containerd[1540]: time="2024-12-13T13:31:24.269110329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 13:31:24.269817 containerd[1540]: time="2024-12-13T13:31:24.269117952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 13:31:24.269817 containerd[1540]: time="2024-12-13T13:31:24.269125063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 13:31:24.269817 containerd[1540]: time="2024-12-13T13:31:24.269133034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 13:31:24.270075 containerd[1540]: time="2024-12-13T13:31:24.269139880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Dec 13 13:31:24.270075 containerd[1540]: time="2024-12-13T13:31:24.269146130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 13:31:24.270075 containerd[1540]: time="2024-12-13T13:31:24.269152453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 13:31:24.270075 containerd[1540]: time="2024-12-13T13:31:24.269160237Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 13:31:24.270075 containerd[1540]: time="2024-12-13T13:31:24.269171908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 13:31:24.270075 containerd[1540]: time="2024-12-13T13:31:24.269179502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 13:31:24.270075 containerd[1540]: time="2024-12-13T13:31:24.269186014Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:31:24.270075 containerd[1540]: time="2024-12-13T13:31:24.269208139Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 13:31:24.270075 containerd[1540]: time="2024-12-13T13:31:24.269219322Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 13:31:24.270075 containerd[1540]: time="2024-12-13T13:31:24.269225234Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 13:31:24.270075 containerd[1540]: time="2024-12-13T13:31:24.269231665Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 13:31:24.270075 containerd[1540]: time="2024-12-13T13:31:24.269237038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 13:31:24.270075 containerd[1540]: time="2024-12-13T13:31:24.269243946Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 13:31:24.270075 containerd[1540]: time="2024-12-13T13:31:24.269249640Z" level=info msg="NRI interface is disabled by configuration." Dec 13 13:31:24.270337 containerd[1540]: time="2024-12-13T13:31:24.269255208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 13:31:24.270355 containerd[1540]: time="2024-12-13T13:31:24.269433866Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} 
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:31:24.270355 containerd[1540]: time="2024-12-13T13:31:24.269462985Z" level=info msg="Connect containerd service" Dec 13 13:31:24.270355 containerd[1540]: time="2024-12-13T13:31:24.269493658Z" level=info msg="using legacy CRI server" Dec 13 13:31:24.270355 containerd[1540]: time="2024-12-13T13:31:24.269516762Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:31:24.270355 containerd[1540]: time="2024-12-13T13:31:24.269611462Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:31:24.270355 containerd[1540]: 
time="2024-12-13T13:31:24.270018117Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:31:24.270355 containerd[1540]: time="2024-12-13T13:31:24.270118008Z" level=info msg="Start subscribing containerd event" Dec 13 13:31:24.270355 containerd[1540]: time="2024-12-13T13:31:24.270147588Z" level=info msg="Start recovering state" Dec 13 13:31:24.270355 containerd[1540]: time="2024-12-13T13:31:24.270178018Z" level=info msg="Start event monitor" Dec 13 13:31:24.270355 containerd[1540]: time="2024-12-13T13:31:24.270184984Z" level=info msg="Start snapshots syncer" Dec 13 13:31:24.270355 containerd[1540]: time="2024-12-13T13:31:24.270189603Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:31:24.270355 containerd[1540]: time="2024-12-13T13:31:24.270193920Z" level=info msg="Start streaming server" Dec 13 13:31:24.270641 containerd[1540]: time="2024-12-13T13:31:24.270368329Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:31:24.270641 containerd[1540]: time="2024-12-13T13:31:24.270421617Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:31:24.270641 containerd[1540]: time="2024-12-13T13:31:24.270466031Z" level=info msg="containerd successfully booted in 0.079027s" Dec 13 13:31:24.270558 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:31:24.840640 systemd-networkd[1439]: ens192: Gained IPv6LL Dec 13 13:31:24.841082 systemd-timesyncd[1451]: Network configuration changed, trying to establish connection. Dec 13 13:31:24.842367 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 13:31:24.842931 systemd[1]: Reached target network-online.target - Network is Online. 
Dec 13 13:31:24.847714 systemd[1]: Starting coreos-metadata.service - VMware metadata agent...
Dec 13 13:31:24.854099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:31:24.856697 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 13:31:24.903570 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 13:31:24.938542 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 13:31:24.938699 systemd[1]: Finished coreos-metadata.service - VMware metadata agent.
Dec 13 13:31:24.939581 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 13:31:27.275040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:31:27.275560 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 13:31:27.275756 systemd[1]: Startup finished in 1.037s (kernel) + 6.221s (initrd) + 7.457s (userspace) = 14.717s.
Dec 13 13:31:27.282466 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:31:27.294997 agetty[1625]: failed to open credentials directory
Dec 13 13:31:27.295916 agetty[1627]: failed to open credentials directory
Dec 13 13:31:27.514481 login[1625]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying
Dec 13 13:31:27.519142 login[1627]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 13:31:27.526875 systemd-logind[1520]: New session 2 of user core.
Dec 13 13:31:27.528078 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 13:31:27.534682 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 13:31:27.546768 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 13:31:27.551768 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 13:31:27.557627 (systemd)[1699]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 13:31:27.689284 systemd[1699]: Queued start job for default target default.target.
Dec 13 13:31:27.701453 systemd[1699]: Created slice app.slice - User Application Slice.
Dec 13 13:31:27.701587 systemd[1699]: Reached target paths.target - Paths.
Dec 13 13:31:27.701638 systemd[1699]: Reached target timers.target - Timers.
Dec 13 13:31:27.702466 systemd[1699]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 13:31:27.709180 systemd[1699]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 13:31:27.709212 systemd[1699]: Reached target sockets.target - Sockets.
Dec 13 13:31:27.709222 systemd[1699]: Reached target basic.target - Basic System.
Dec 13 13:31:27.709246 systemd[1699]: Reached target default.target - Main User Target.
Dec 13 13:31:27.709266 systemd[1699]: Startup finished in 147ms.
Dec 13 13:31:27.709445 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 13:31:27.713673 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 13:31:28.514888 login[1625]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 13:31:28.518760 systemd-logind[1520]: New session 1 of user core.
Dec 13 13:31:28.524651 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 13:31:30.577714 kubelet[1692]: E1213 13:31:30.577649 1692 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:31:30.579791 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:31:30.579912 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:31:40.644447 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:31:40.651772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:31:40.792217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:31:40.802815 (kubelet)[1742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:31:40.862719 kubelet[1742]: E1213 13:31:40.862665 1742 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:31:40.865992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:31:40.866106 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:31:50.894445 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 13:31:50.903878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:31:51.251174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:31:51.254182 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:31:51.281097 kubelet[1759]: E1213 13:31:51.281068 1759 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:31:51.282975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:31:51.283076 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:33:04.604194 systemd-timesyncd[1451]: Contacted time server 198.71.50.75:123 (2.flatcar.pool.ntp.org).
Dec 13 13:33:04.604210 systemd-resolved[1441]: Clock change detected. Flushing caches.
Dec 13 13:33:04.604244 systemd-timesyncd[1451]: Initial clock synchronization to Fri 2024-12-13 13:33:04.604063 UTC.
Dec 13 13:33:11.006038 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 13:33:11.018401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:33:11.352714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:33:11.355955 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:33:11.395137 kubelet[1775]: E1213 13:33:11.395100 1775 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:33:11.396417 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:33:11.396495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:33:13.666284 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 13:33:13.666986 systemd[1]: Started sshd@0-139.178.70.100:22-139.178.89.65:57008.service - OpenSSH per-connection server daemon (139.178.89.65:57008).
Dec 13 13:33:13.742405 sshd[1785]: Accepted publickey for core from 139.178.89.65 port 57008 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:33:13.743083 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:33:13.746507 systemd-logind[1520]: New session 3 of user core.
Dec 13 13:33:13.752338 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 13:33:13.808288 systemd[1]: Started sshd@1-139.178.70.100:22-139.178.89.65:57020.service - OpenSSH per-connection server daemon (139.178.89.65:57020).
Dec 13 13:33:13.839477 sshd[1790]: Accepted publickey for core from 139.178.89.65 port 57020 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:33:13.839426 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:33:13.842852 systemd-logind[1520]: New session 4 of user core.
Dec 13 13:33:13.847352 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 13:33:13.898422 sshd[1792]: Connection closed by 139.178.89.65 port 57020
Dec 13 13:33:13.898889 sshd-session[1790]: pam_unix(sshd:session): session closed for user core
Dec 13 13:33:13.904124 systemd[1]: sshd@1-139.178.70.100:22-139.178.89.65:57020.service: Deactivated successfully.
Dec 13 13:33:13.905217 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 13:33:13.906150 systemd-logind[1520]: Session 4 logged out. Waiting for processes to exit.
Dec 13 13:33:13.909449 systemd[1]: Started sshd@2-139.178.70.100:22-139.178.89.65:57032.service - OpenSSH per-connection server daemon (139.178.89.65:57032).
Dec 13 13:33:13.910494 systemd-logind[1520]: Removed session 4.
Dec 13 13:33:13.942906 sshd[1797]: Accepted publickey for core from 139.178.89.65 port 57032 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:33:13.943773 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:33:13.947395 systemd-logind[1520]: New session 5 of user core.
Dec 13 13:33:13.954345 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 13:33:14.001440 sshd[1799]: Connection closed by 139.178.89.65 port 57032
Dec 13 13:33:14.001891 sshd-session[1797]: pam_unix(sshd:session): session closed for user core
Dec 13 13:33:14.014950 systemd[1]: sshd@2-139.178.70.100:22-139.178.89.65:57032.service: Deactivated successfully.
Dec 13 13:33:14.015812 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 13:33:14.016694 systemd-logind[1520]: Session 5 logged out. Waiting for processes to exit.
Dec 13 13:33:14.017392 systemd[1]: Started sshd@3-139.178.70.100:22-139.178.89.65:57036.service - OpenSSH per-connection server daemon (139.178.89.65:57036).
Dec 13 13:33:14.018452 systemd-logind[1520]: Removed session 5.
Dec 13 13:33:14.049564 sshd[1804]: Accepted publickey for core from 139.178.89.65 port 57036 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:33:14.050526 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:33:14.053679 systemd-logind[1520]: New session 6 of user core.
Dec 13 13:33:14.065369 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 13:33:14.113475 sshd[1806]: Connection closed by 139.178.89.65 port 57036
Dec 13 13:33:14.114308 sshd-session[1804]: pam_unix(sshd:session): session closed for user core
Dec 13 13:33:14.122853 systemd[1]: sshd@3-139.178.70.100:22-139.178.89.65:57036.service: Deactivated successfully.
Dec 13 13:33:14.123802 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 13:33:14.124659 systemd-logind[1520]: Session 6 logged out. Waiting for processes to exit.
Dec 13 13:33:14.125559 systemd[1]: Started sshd@4-139.178.70.100:22-139.178.89.65:57050.service - OpenSSH per-connection server daemon (139.178.89.65:57050).
Dec 13 13:33:14.127402 systemd-logind[1520]: Removed session 6.
Dec 13 13:33:14.157111 sshd[1811]: Accepted publickey for core from 139.178.89.65 port 57050 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:33:14.158094 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:33:14.160887 systemd-logind[1520]: New session 7 of user core.
Dec 13 13:33:14.166333 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 13:33:14.262191 sudo[1814]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 13:33:14.262385 sudo[1814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:33:14.271609 sudo[1814]: pam_unix(sudo:session): session closed for user root
Dec 13 13:33:14.272334 sshd[1813]: Connection closed by 139.178.89.65 port 57050
Dec 13 13:33:14.273177 sshd-session[1811]: pam_unix(sshd:session): session closed for user core
Dec 13 13:33:14.281940 systemd[1]: sshd@4-139.178.70.100:22-139.178.89.65:57050.service: Deactivated successfully.
Dec 13 13:33:14.282914 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 13:33:14.283930 systemd-logind[1520]: Session 7 logged out. Waiting for processes to exit.
Dec 13 13:33:14.288621 systemd[1]: Started sshd@5-139.178.70.100:22-139.178.89.65:57052.service - OpenSSH per-connection server daemon (139.178.89.65:57052).
Dec 13 13:33:14.289412 systemd-logind[1520]: Removed session 7.
Dec 13 13:33:14.317421 sshd[1819]: Accepted publickey for core from 139.178.89.65 port 57052 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:33:14.318635 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:33:14.322477 systemd-logind[1520]: New session 8 of user core.
Dec 13 13:33:14.330436 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 13:33:14.379399 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 13:33:14.379585 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:33:14.382026 sudo[1823]: pam_unix(sudo:session): session closed for user root
Dec 13 13:33:14.385219 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 13 13:33:14.385401 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:33:14.400807 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:33:14.418143 augenrules[1845]: No rules
Dec 13 13:33:14.418582 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:33:14.418728 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:33:14.419616 sudo[1822]: pam_unix(sudo:session): session closed for user root
Dec 13 13:33:14.420525 sshd[1821]: Connection closed by 139.178.89.65 port 57052
Dec 13 13:33:14.421525 sshd-session[1819]: pam_unix(sshd:session): session closed for user core
Dec 13 13:33:14.426180 systemd[1]: sshd@5-139.178.70.100:22-139.178.89.65:57052.service: Deactivated successfully.
Dec 13 13:33:14.427478 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 13:33:14.428397 systemd-logind[1520]: Session 8 logged out. Waiting for processes to exit.
Dec 13 13:33:14.431458 systemd[1]: Started sshd@6-139.178.70.100:22-139.178.89.65:57060.service - OpenSSH per-connection server daemon (139.178.89.65:57060).
Dec 13 13:33:14.432438 systemd-logind[1520]: Removed session 8.
Dec 13 13:33:14.462084 sshd[1853]: Accepted publickey for core from 139.178.89.65 port 57060 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:33:14.462896 sshd-session[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:33:14.466148 systemd-logind[1520]: New session 9 of user core.
Dec 13 13:33:14.478436 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 13:33:14.526824 sudo[1856]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 13:33:14.527614 sudo[1856]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:33:15.221430 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 13:33:15.221514 (dockerd)[1874]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 13:33:15.487980 dockerd[1874]: time="2024-12-13T13:33:15.487861159Z" level=info msg="Starting up"
Dec 13 13:33:15.574962 dockerd[1874]: time="2024-12-13T13:33:15.574928004Z" level=info msg="Loading containers: start."
Dec 13 13:33:15.683253 kernel: Initializing XFRM netlink socket
Dec 13 13:33:15.729054 systemd-networkd[1439]: docker0: Link UP
Dec 13 13:33:15.746937 dockerd[1874]: time="2024-12-13T13:33:15.746877493Z" level=info msg="Loading containers: done."
Dec 13 13:33:15.757078 dockerd[1874]: time="2024-12-13T13:33:15.757049568Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 13:33:15.757161 dockerd[1874]: time="2024-12-13T13:33:15.757106725Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Dec 13 13:33:15.757161 dockerd[1874]: time="2024-12-13T13:33:15.757155882Z" level=info msg="Daemon has completed initialization"
Dec 13 13:33:15.773392 dockerd[1874]: time="2024-12-13T13:33:15.772297807Z" level=info msg="API listen on /run/docker.sock"
Dec 13 13:33:15.773284 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 13:33:16.565485 containerd[1540]: time="2024-12-13T13:33:16.565458413Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 13:33:17.089914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1706584383.mount: Deactivated successfully.
Dec 13 13:33:18.221428 containerd[1540]: time="2024-12-13T13:33:18.220859393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:18.222650 containerd[1540]: time="2024-12-13T13:33:18.222632870Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254"
Dec 13 13:33:18.224248 containerd[1540]: time="2024-12-13T13:33:18.224224079Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:18.226124 containerd[1540]: time="2024-12-13T13:33:18.226103164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:18.226666 containerd[1540]: time="2024-12-13T13:33:18.226648890Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 1.661166469s"
Dec 13 13:33:18.226702 containerd[1540]: time="2024-12-13T13:33:18.226669579Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 13:33:18.239331 containerd[1540]: time="2024-12-13T13:33:18.239307251Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 13:33:19.174308 update_engine[1522]: I20241213 13:33:19.174232 1522 update_attempter.cc:509] Updating boot flags...
Dec 13 13:33:19.214347 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2136)
Dec 13 13:33:19.311304 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2138)
Dec 13 13:33:19.860696 containerd[1540]: time="2024-12-13T13:33:19.860664655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:19.865314 containerd[1540]: time="2024-12-13T13:33:19.865269891Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732"
Dec 13 13:33:19.870317 containerd[1540]: time="2024-12-13T13:33:19.870287966Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:19.875671 containerd[1540]: time="2024-12-13T13:33:19.875632712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:19.876431 containerd[1540]: time="2024-12-13T13:33:19.876349501Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.637016848s"
Dec 13 13:33:19.876431 containerd[1540]: time="2024-12-13T13:33:19.876370580Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 13:33:19.895455 containerd[1540]: time="2024-12-13T13:33:19.895227359Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 13:33:20.915147 containerd[1540]: time="2024-12-13T13:33:20.914426900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:20.919473 containerd[1540]: time="2024-12-13T13:33:20.919408392Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822"
Dec 13 13:33:20.927282 containerd[1540]: time="2024-12-13T13:33:20.927229061Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:20.929617 containerd[1540]: time="2024-12-13T13:33:20.929586915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:20.930453 containerd[1540]: time="2024-12-13T13:33:20.930338821Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.035065362s"
Dec 13 13:33:20.930453 containerd[1540]: time="2024-12-13T13:33:20.930367536Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 13:33:20.943962 containerd[1540]: time="2024-12-13T13:33:20.943751901Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 13:33:21.506100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 13:33:21.512378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:33:21.747339 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:33:21.749946 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:33:21.791564 kubelet[2170]: E1213 13:33:21.791325 2170 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:33:21.793376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:33:21.794045 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:33:22.503858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2340059824.mount: Deactivated successfully.
Dec 13 13:33:22.740400 containerd[1540]: time="2024-12-13T13:33:22.740358510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:22.740809 containerd[1540]: time="2024-12-13T13:33:22.740746927Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958"
Dec 13 13:33:22.741104 containerd[1540]: time="2024-12-13T13:33:22.741087623Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:22.742132 containerd[1540]: time="2024-12-13T13:33:22.742116870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:22.742903 containerd[1540]: time="2024-12-13T13:33:22.742880393Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.799104219s"
Dec 13 13:33:22.743134 containerd[1540]: time="2024-12-13T13:33:22.742951309Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 13:33:22.757932 containerd[1540]: time="2024-12-13T13:33:22.757897524Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 13:33:23.329280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount146311100.mount: Deactivated successfully.
Dec 13 13:33:24.223209 containerd[1540]: time="2024-12-13T13:33:24.222697761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:24.223209 containerd[1540]: time="2024-12-13T13:33:24.223089755Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Dec 13 13:33:24.224126 containerd[1540]: time="2024-12-13T13:33:24.223313270Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:24.225045 containerd[1540]: time="2024-12-13T13:33:24.225017434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:24.225788 containerd[1540]: time="2024-12-13T13:33:24.225704410Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.467773495s"
Dec 13 13:33:24.225788 containerd[1540]: time="2024-12-13T13:33:24.225724142Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 13:33:24.238945 containerd[1540]: time="2024-12-13T13:33:24.238903572Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 13:33:24.837468 containerd[1540]: time="2024-12-13T13:33:24.836847299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:24.837291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1118065770.mount: Deactivated successfully.
Dec 13 13:33:24.837891 containerd[1540]: time="2024-12-13T13:33:24.837871737Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Dec 13 13:33:24.838118 containerd[1540]: time="2024-12-13T13:33:24.838102234Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:24.841247 containerd[1540]: time="2024-12-13T13:33:24.840336882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:24.841247 containerd[1540]: time="2024-12-13T13:33:24.841085492Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 602.073748ms"
Dec 13 13:33:24.841247 containerd[1540]: time="2024-12-13T13:33:24.841100842Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 13:33:24.855877 containerd[1540]: time="2024-12-13T13:33:24.855853066Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 13:33:25.376397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1064352087.mount: Deactivated successfully.
Dec 13 13:33:28.800990 containerd[1540]: time="2024-12-13T13:33:28.800953626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:28.801815 containerd[1540]: time="2024-12-13T13:33:28.801794120Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Dec 13 13:33:28.802288 containerd[1540]: time="2024-12-13T13:33:28.802272081Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:28.803658 containerd[1540]: time="2024-12-13T13:33:28.803643724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:28.804418 containerd[1540]: time="2024-12-13T13:33:28.804330149Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.948434869s"
Dec 13 13:33:28.804418 containerd[1540]: time="2024-12-13T13:33:28.804346944Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 13:33:30.734697 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:33:30.743371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:33:30.758272 systemd[1]: Reloading requested from client PID 2356 ('systemctl') (unit session-9.scope)...
Dec 13 13:33:30.758286 systemd[1]: Reloading...
Dec 13 13:33:30.818254 zram_generator::config[2394]: No configuration found. Dec 13 13:33:30.875549 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Dec 13 13:33:30.891494 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:33:30.935756 systemd[1]: Reloading finished in 177 ms. Dec 13 13:33:30.960322 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 13:33:30.960444 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 13:33:30.960682 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:33:30.967395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:33:31.446961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:33:31.450702 (kubelet)[2460]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:33:31.507353 kubelet[2460]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:33:31.507353 kubelet[2460]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:33:31.507353 kubelet[2460]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 13:33:31.524001 kubelet[2460]: I1213 13:33:31.523616 2460 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:33:31.727161 kubelet[2460]: I1213 13:33:31.727106 2460 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 13:33:31.727161 kubelet[2460]: I1213 13:33:31.727125 2460 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:33:31.727403 kubelet[2460]: I1213 13:33:31.727302 2460 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 13:33:31.879790 kubelet[2460]: I1213 13:33:31.879542 2460 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:33:31.888521 kubelet[2460]: E1213 13:33:31.888457 2460 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.100:6443: connect: connection refused Dec 13 13:33:31.948760 kubelet[2460]: I1213 13:33:31.948728 2460 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:33:31.952593 kubelet[2460]: I1213 13:33:31.952574 2460 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:33:31.957916 kubelet[2460]: I1213 13:33:31.957891 2460 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:33:31.958017 kubelet[2460]: I1213 13:33:31.957919 2460 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:33:31.958017 kubelet[2460]: I1213 13:33:31.957929 2460 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:33:31.958069 kubelet[2460]: I1213 
13:33:31.958026 2460 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:33:31.958109 kubelet[2460]: I1213 13:33:31.958100 2460 kubelet.go:396] "Attempting to sync node with API server" Dec 13 13:33:31.958142 kubelet[2460]: I1213 13:33:31.958114 2460 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:33:31.958171 kubelet[2460]: I1213 13:33:31.958148 2460 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:33:31.967406 kubelet[2460]: I1213 13:33:31.967222 2460 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:33:31.967406 kubelet[2460]: W1213 13:33:31.967224 2460 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Dec 13 13:33:31.967406 kubelet[2460]: E1213 13:33:31.967295 2460 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Dec 13 13:33:31.967561 kubelet[2460]: W1213 13:33:31.967531 2460 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://139.178.70.100:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Dec 13 13:33:31.967561 kubelet[2460]: E1213 13:33:31.967562 2460 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.100:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Dec 13 13:33:31.972254 kubelet[2460]: I1213 13:33:31.972109 2460 kuberuntime_manager.go:258] 
"Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:33:31.994608 kubelet[2460]: I1213 13:33:31.994544 2460 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:33:32.006738 kubelet[2460]: W1213 13:33:32.006717 2460 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 13:33:32.007140 kubelet[2460]: I1213 13:33:32.007122 2460 server.go:1256] "Started kubelet" Dec 13 13:33:32.008036 kubelet[2460]: I1213 13:33:32.007305 2460 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:33:32.008036 kubelet[2460]: I1213 13:33:32.007940 2460 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:33:32.008036 kubelet[2460]: I1213 13:33:32.007973 2460 server.go:461] "Adding debug handlers to kubelet server" Dec 13 13:33:32.008758 kubelet[2460]: I1213 13:33:32.008739 2460 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:33:32.008874 kubelet[2460]: I1213 13:33:32.008858 2460 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:33:32.018693 kubelet[2460]: I1213 13:33:32.018671 2460 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:33:32.020666 kubelet[2460]: I1213 13:33:32.020645 2460 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 13:33:32.020705 kubelet[2460]: I1213 13:33:32.020689 2460 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 13:33:32.040450 kubelet[2460]: W1213 13:33:32.040274 2460 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Dec 13 13:33:32.040450 
kubelet[2460]: E1213 13:33:32.040326 2460 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="200ms" Dec 13 13:33:32.040450 kubelet[2460]: E1213 13:33:32.040329 2460 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Dec 13 13:33:32.045160 kubelet[2460]: I1213 13:33:32.045109 2460 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:33:32.045160 kubelet[2460]: I1213 13:33:32.045158 2460 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:33:32.046396 kubelet[2460]: I1213 13:33:32.046385 2460 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 13 13:33:32.054602 kubelet[2460]: E1213 13:33:32.054591 2460 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.100:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810bfdce5d227a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 13:33:32.007106465 +0000 UTC m=+0.554197124,LastTimestamp:2024-12-13 13:33:32.007106465 +0000 UTC m=+0.554197124,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 13:33:32.056796 kubelet[2460]: I1213 13:33:32.056522 2460 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:33:32.056796 kubelet[2460]: I1213 13:33:32.056537 2460 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:33:32.056796 kubelet[2460]: I1213 13:33:32.056547 2460 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 13:33:32.056796 kubelet[2460]: E1213 13:33:32.056570 2460 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:33:32.058263 kubelet[2460]: W1213 13:33:32.057834 2460 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Dec 13 13:33:32.058263 kubelet[2460]: E1213 13:33:32.057982 2460 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Dec 13 13:33:32.058263 kubelet[2460]: I1213 13:33:32.058047 2460 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:33:32.072424 kubelet[2460]: E1213 13:33:32.072407 2460 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:33:32.075349 kubelet[2460]: I1213 13:33:32.075335 2460 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:33:32.075349 kubelet[2460]: I1213 13:33:32.075347 2460 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:33:32.075419 kubelet[2460]: I1213 13:33:32.075357 2460 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:33:32.120499 kubelet[2460]: I1213 13:33:32.120444 2460 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:33:32.214350 kubelet[2460]: E1213 13:33:32.120699 2460 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Dec 13 13:33:32.214350 kubelet[2460]: E1213 13:33:32.157063 2460 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 13:33:32.226454 kubelet[2460]: I1213 13:33:32.226432 2460 policy_none.go:49] "None policy: Start" Dec 13 13:33:32.227064 kubelet[2460]: I1213 13:33:32.227048 2460 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:33:32.227064 kubelet[2460]: I1213 13:33:32.227065 2460 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:33:32.241475 kubelet[2460]: E1213 13:33:32.241458 2460 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="400ms" Dec 13 13:33:32.255067 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:33:32.262257 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 13:33:32.274719 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:33:32.275533 kubelet[2460]: I1213 13:33:32.275519 2460 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:33:32.275975 kubelet[2460]: I1213 13:33:32.275739 2460 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:33:32.276567 kubelet[2460]: E1213 13:33:32.276557 2460 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 13:33:32.322320 kubelet[2460]: I1213 13:33:32.322286 2460 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:33:32.322685 kubelet[2460]: E1213 13:33:32.322656 2460 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Dec 13 13:33:32.358289 kubelet[2460]: I1213 13:33:32.357856 2460 topology_manager.go:215] "Topology Admit Handler" podUID="6bfbeb7afbc1a925145e3a2f3a18b1b6" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 13:33:32.358724 kubelet[2460]: I1213 13:33:32.358714 2460 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 13:33:32.359719 kubelet[2460]: I1213 13:33:32.359707 2460 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 13:33:32.364614 systemd[1]: Created slice kubepods-burstable-pod6bfbeb7afbc1a925145e3a2f3a18b1b6.slice - libcontainer container kubepods-burstable-pod6bfbeb7afbc1a925145e3a2f3a18b1b6.slice. 
Dec 13 13:33:32.382201 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Dec 13 13:33:32.386488 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. Dec 13 13:33:32.423077 kubelet[2460]: I1213 13:33:32.422944 2460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6bfbeb7afbc1a925145e3a2f3a18b1b6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6bfbeb7afbc1a925145e3a2f3a18b1b6\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:33:32.423077 kubelet[2460]: I1213 13:33:32.422980 2460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6bfbeb7afbc1a925145e3a2f3a18b1b6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6bfbeb7afbc1a925145e3a2f3a18b1b6\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:33:32.423077 kubelet[2460]: I1213 13:33:32.422998 2460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:32.423077 kubelet[2460]: I1213 13:33:32.423012 2460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 
13:33:32.423077 kubelet[2460]: I1213 13:33:32.423039 2460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6bfbeb7afbc1a925145e3a2f3a18b1b6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6bfbeb7afbc1a925145e3a2f3a18b1b6\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:33:32.423217 kubelet[2460]: I1213 13:33:32.423056 2460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:32.423217 kubelet[2460]: I1213 13:33:32.423090 2460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:32.423217 kubelet[2460]: I1213 13:33:32.423114 2460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:32.423217 kubelet[2460]: I1213 13:33:32.423135 2460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 
13 13:33:32.642491 kubelet[2460]: E1213 13:33:32.642464 2460 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="800ms" Dec 13 13:33:32.681612 containerd[1540]: time="2024-12-13T13:33:32.681217929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6bfbeb7afbc1a925145e3a2f3a18b1b6,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:32.685433 containerd[1540]: time="2024-12-13T13:33:32.685412096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:32.689089 containerd[1540]: time="2024-12-13T13:33:32.689064762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:32.724587 kubelet[2460]: I1213 13:33:32.724331 2460 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:33:32.724587 kubelet[2460]: E1213 13:33:32.724540 2460 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Dec 13 13:33:33.175596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount218024710.mount: Deactivated successfully. 
Dec 13 13:33:33.179414 containerd[1540]: time="2024-12-13T13:33:33.179319465Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 13:33:33.179558 containerd[1540]: time="2024-12-13T13:33:33.179470877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:33.180641 containerd[1540]: time="2024-12-13T13:33:33.180624184Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:33:33.181117 containerd[1540]: time="2024-12-13T13:33:33.181046825Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:33.181958 containerd[1540]: time="2024-12-13T13:33:33.181866951Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:33:33.182169 containerd[1540]: time="2024-12-13T13:33:33.182156871Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:33.183873 containerd[1540]: time="2024-12-13T13:33:33.183861477Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:33.184680 containerd[1540]: time="2024-12-13T13:33:33.184664576Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 503.349468ms" Dec 13 13:33:33.186271 containerd[1540]: time="2024-12-13T13:33:33.186206222Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 500.741659ms" Dec 13 13:33:33.189955 containerd[1540]: time="2024-12-13T13:33:33.189607342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:33.189955 containerd[1540]: time="2024-12-13T13:33:33.189809894Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 500.698452ms" Dec 13 13:33:33.202290 kubelet[2460]: W1213 13:33:33.201949 2460 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://139.178.70.100:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Dec 13 13:33:33.202290 kubelet[2460]: E1213 13:33:33.201982 2460 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.100:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Dec 13 13:33:33.282841 containerd[1540]: time="2024-12-13T13:33:33.282779100Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:33.282971 containerd[1540]: time="2024-12-13T13:33:33.282842408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:33.282971 containerd[1540]: time="2024-12-13T13:33:33.282852860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:33.282971 containerd[1540]: time="2024-12-13T13:33:33.282930206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:33.284434 containerd[1540]: time="2024-12-13T13:33:33.282225555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:33.284473 containerd[1540]: time="2024-12-13T13:33:33.284422925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:33.284513 containerd[1540]: time="2024-12-13T13:33:33.284503650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:33.284621 containerd[1540]: time="2024-12-13T13:33:33.284590253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:33.285549 containerd[1540]: time="2024-12-13T13:33:33.285509351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:33.285592 containerd[1540]: time="2024-12-13T13:33:33.285549281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:33.285662 containerd[1540]: time="2024-12-13T13:33:33.285622430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:33.288163 containerd[1540]: time="2024-12-13T13:33:33.287568660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:33.306351 systemd[1]: Started cri-containerd-078fdbc0f6ef58eff3b8cd83fe91fa056de2ead161b95efa677ad69e4212ad18.scope - libcontainer container 078fdbc0f6ef58eff3b8cd83fe91fa056de2ead161b95efa677ad69e4212ad18. Dec 13 13:33:33.311015 systemd[1]: Started cri-containerd-2c80f48268a371e6245ddeedfc7139f3cc27bc8d493ffa18ca2b852bfa23bf30.scope - libcontainer container 2c80f48268a371e6245ddeedfc7139f3cc27bc8d493ffa18ca2b852bfa23bf30. Dec 13 13:33:33.311825 systemd[1]: Started cri-containerd-706aea0f2ae2af2582091f4c6b782881ec1e37544fe1156caa6deb80171962c2.scope - libcontainer container 706aea0f2ae2af2582091f4c6b782881ec1e37544fe1156caa6deb80171962c2. 
Dec 13 13:33:33.363817 containerd[1540]: time="2024-12-13T13:33:33.363756179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"078fdbc0f6ef58eff3b8cd83fe91fa056de2ead161b95efa677ad69e4212ad18\"" Dec 13 13:33:33.364000 containerd[1540]: time="2024-12-13T13:33:33.363989587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c80f48268a371e6245ddeedfc7139f3cc27bc8d493ffa18ca2b852bfa23bf30\"" Dec 13 13:33:33.367374 containerd[1540]: time="2024-12-13T13:33:33.367201324Z" level=info msg="CreateContainer within sandbox \"2c80f48268a371e6245ddeedfc7139f3cc27bc8d493ffa18ca2b852bfa23bf30\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 13:33:33.367374 containerd[1540]: time="2024-12-13T13:33:33.367342650Z" level=info msg="CreateContainer within sandbox \"078fdbc0f6ef58eff3b8cd83fe91fa056de2ead161b95efa677ad69e4212ad18\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 13:33:33.380676 containerd[1540]: time="2024-12-13T13:33:33.380653742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6bfbeb7afbc1a925145e3a2f3a18b1b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"706aea0f2ae2af2582091f4c6b782881ec1e37544fe1156caa6deb80171962c2\"" Dec 13 13:33:33.382252 containerd[1540]: time="2024-12-13T13:33:33.382214117Z" level=info msg="CreateContainer within sandbox \"706aea0f2ae2af2582091f4c6b782881ec1e37544fe1156caa6deb80171962c2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 13:33:33.443852 kubelet[2460]: E1213 13:33:33.443158 2460 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="1.6s" Dec 13 13:33:33.444390 containerd[1540]: time="2024-12-13T13:33:33.443323662Z" level=info msg="CreateContainer within sandbox \"078fdbc0f6ef58eff3b8cd83fe91fa056de2ead161b95efa677ad69e4212ad18\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ee260835882b8ffa9e2f3363b426d8f2d9cb2b64ee1f06efccca90e836ba3801\"" Dec 13 13:33:33.444430 containerd[1540]: time="2024-12-13T13:33:33.444402211Z" level=info msg="CreateContainer within sandbox \"2c80f48268a371e6245ddeedfc7139f3cc27bc8d493ffa18ca2b852bfa23bf30\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d09f98ebe6dfb8a89fc22b98dc7ca9953d727df48c3a31b571241b1e0e11f37c\"" Dec 13 13:33:33.444564 containerd[1540]: time="2024-12-13T13:33:33.444533837Z" level=info msg="StartContainer for \"ee260835882b8ffa9e2f3363b426d8f2d9cb2b64ee1f06efccca90e836ba3801\"" Dec 13 13:33:33.444930 containerd[1540]: time="2024-12-13T13:33:33.444866440Z" level=info msg="CreateContainer within sandbox \"706aea0f2ae2af2582091f4c6b782881ec1e37544fe1156caa6deb80171962c2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"120f20c319bff5bb2d4c7a92720ca04b327a68c2be62b640a51741aae0721da2\"" Dec 13 13:33:33.446170 containerd[1540]: time="2024-12-13T13:33:33.445572910Z" level=info msg="StartContainer for \"d09f98ebe6dfb8a89fc22b98dc7ca9953d727df48c3a31b571241b1e0e11f37c\"" Dec 13 13:33:33.455840 containerd[1540]: time="2024-12-13T13:33:33.455754895Z" level=info msg="StartContainer for \"120f20c319bff5bb2d4c7a92720ca04b327a68c2be62b640a51741aae0721da2\"" Dec 13 13:33:33.457224 kubelet[2460]: W1213 13:33:33.456608 2460 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get 
"https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Dec 13 13:33:33.457224 kubelet[2460]: E1213 13:33:33.456648 2460 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Dec 13 13:33:33.474686 systemd[1]: Started cri-containerd-d09f98ebe6dfb8a89fc22b98dc7ca9953d727df48c3a31b571241b1e0e11f37c.scope - libcontainer container d09f98ebe6dfb8a89fc22b98dc7ca9953d727df48c3a31b571241b1e0e11f37c. Dec 13 13:33:33.478230 systemd[1]: Started cri-containerd-ee260835882b8ffa9e2f3363b426d8f2d9cb2b64ee1f06efccca90e836ba3801.scope - libcontainer container ee260835882b8ffa9e2f3363b426d8f2d9cb2b64ee1f06efccca90e836ba3801. Dec 13 13:33:33.478930 kubelet[2460]: W1213 13:33:33.478874 2460 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Dec 13 13:33:33.479117 kubelet[2460]: E1213 13:33:33.479102 2460 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Dec 13 13:33:33.481835 systemd[1]: Started cri-containerd-120f20c319bff5bb2d4c7a92720ca04b327a68c2be62b640a51741aae0721da2.scope - libcontainer container 120f20c319bff5bb2d4c7a92720ca04b327a68c2be62b640a51741aae0721da2. 
Dec 13 13:33:33.522014 containerd[1540]: time="2024-12-13T13:33:33.521984465Z" level=info msg="StartContainer for \"120f20c319bff5bb2d4c7a92720ca04b327a68c2be62b640a51741aae0721da2\" returns successfully" Dec 13 13:33:33.528141 kubelet[2460]: I1213 13:33:33.526409 2460 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:33:33.528141 kubelet[2460]: E1213 13:33:33.526625 2460 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Dec 13 13:33:33.528299 containerd[1540]: time="2024-12-13T13:33:33.525874814Z" level=info msg="StartContainer for \"d09f98ebe6dfb8a89fc22b98dc7ca9953d727df48c3a31b571241b1e0e11f37c\" returns successfully" Dec 13 13:33:33.547494 containerd[1540]: time="2024-12-13T13:33:33.547419285Z" level=info msg="StartContainer for \"ee260835882b8ffa9e2f3363b426d8f2d9cb2b64ee1f06efccca90e836ba3801\" returns successfully" Dec 13 13:33:33.629371 kubelet[2460]: W1213 13:33:33.629333 2460 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Dec 13 13:33:33.629371 kubelet[2460]: E1213 13:33:33.629372 2460 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Dec 13 13:33:33.942785 kubelet[2460]: E1213 13:33:33.942750 2460 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://139.178.70.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.100:6443: connect: connection refused Dec 13 13:33:35.074231 kubelet[2460]: E1213 13:33:35.074205 2460 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 13:33:35.128771 kubelet[2460]: I1213 13:33:35.128740 2460 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:33:35.135634 kubelet[2460]: I1213 13:33:35.135610 2460 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 13:33:35.144381 kubelet[2460]: E1213 13:33:35.144359 2460 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:33:35.244539 kubelet[2460]: E1213 13:33:35.244480 2460 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:33:35.345229 kubelet[2460]: E1213 13:33:35.344994 2460 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:33:35.445640 kubelet[2460]: E1213 13:33:35.445581 2460 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:33:35.546304 kubelet[2460]: E1213 13:33:35.546280 2460 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:33:35.646788 kubelet[2460]: E1213 13:33:35.646706 2460 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:33:35.747250 kubelet[2460]: E1213 13:33:35.747211 2460 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:33:35.847433 kubelet[2460]: E1213 13:33:35.847372 2460 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" 
not found" Dec 13 13:33:35.947921 kubelet[2460]: E1213 13:33:35.947847 2460 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:33:36.048519 kubelet[2460]: E1213 13:33:36.048488 2460 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:33:36.969898 kubelet[2460]: I1213 13:33:36.969879 2460 apiserver.go:52] "Watching apiserver" Dec 13 13:33:37.021508 kubelet[2460]: I1213 13:33:37.021459 2460 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 13:33:37.604353 systemd[1]: Reloading requested from client PID 2730 ('systemctl') (unit session-9.scope)... Dec 13 13:33:37.604364 systemd[1]: Reloading... Dec 13 13:33:37.659254 zram_generator::config[2777]: No configuration found. Dec 13 13:33:37.713692 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Dec 13 13:33:37.729606 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:33:37.781886 systemd[1]: Reloading finished in 177 ms. Dec 13 13:33:37.808890 kubelet[2460]: I1213 13:33:37.808869 2460 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:33:37.809052 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:33:37.818779 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:33:37.818929 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:33:37.823406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 13:33:38.179383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:33:38.184035 (kubelet)[2835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:33:38.238071 kubelet[2835]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:33:38.238071 kubelet[2835]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:33:38.238071 kubelet[2835]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:33:38.238302 kubelet[2835]: I1213 13:33:38.238067 2835 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:33:38.242072 kubelet[2835]: I1213 13:33:38.241861 2835 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 13:33:38.242072 kubelet[2835]: I1213 13:33:38.241874 2835 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:33:38.242072 kubelet[2835]: I1213 13:33:38.242062 2835 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 13:33:38.247482 kubelet[2835]: I1213 13:33:38.246797 2835 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 13:33:38.249483 kubelet[2835]: I1213 13:33:38.249454 2835 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:33:38.255656 kubelet[2835]: I1213 13:33:38.255623 2835 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 13:33:38.256635 kubelet[2835]: I1213 13:33:38.256361 2835 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:33:38.256635 kubelet[2835]: I1213 13:33:38.256464 2835 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":
null} Dec 13 13:33:38.256635 kubelet[2835]: I1213 13:33:38.256476 2835 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:33:38.256635 kubelet[2835]: I1213 13:33:38.256482 2835 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:33:38.256635 kubelet[2835]: I1213 13:33:38.256501 2835 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:33:38.256635 kubelet[2835]: I1213 13:33:38.256553 2835 kubelet.go:396] "Attempting to sync node with API server" Dec 13 13:33:38.259603 kubelet[2835]: I1213 13:33:38.259217 2835 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:33:38.259603 kubelet[2835]: I1213 13:33:38.259272 2835 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:33:38.259603 kubelet[2835]: I1213 13:33:38.259286 2835 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:33:38.262242 kubelet[2835]: I1213 13:33:38.262174 2835 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:33:38.262640 kubelet[2835]: I1213 13:33:38.262477 2835 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:33:38.263361 kubelet[2835]: I1213 13:33:38.263043 2835 server.go:1256] "Started kubelet" Dec 13 13:33:38.265879 kubelet[2835]: I1213 13:33:38.265871 2835 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:33:38.268129 kubelet[2835]: I1213 13:33:38.267976 2835 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:33:38.269052 kubelet[2835]: I1213 13:33:38.268990 2835 server.go:461] "Adding debug handlers to kubelet server" Dec 13 13:33:38.269364 kubelet[2835]: I1213 13:33:38.269197 2835 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:33:38.272413 kubelet[2835]: I1213 13:33:38.270992 2835 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" 
qps=100 burstTokens=10 Dec 13 13:33:38.272413 kubelet[2835]: I1213 13:33:38.272000 2835 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:33:38.272413 kubelet[2835]: I1213 13:33:38.272077 2835 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 13:33:38.272741 kubelet[2835]: I1213 13:33:38.272531 2835 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 13:33:38.280669 kubelet[2835]: I1213 13:33:38.280023 2835 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:33:38.280669 kubelet[2835]: I1213 13:33:38.280075 2835 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:33:38.282073 kubelet[2835]: I1213 13:33:38.282063 2835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:33:38.283626 kubelet[2835]: I1213 13:33:38.283612 2835 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:33:38.285285 kubelet[2835]: I1213 13:33:38.285165 2835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 13:33:38.285285 kubelet[2835]: I1213 13:33:38.285178 2835 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:33:38.285285 kubelet[2835]: I1213 13:33:38.285189 2835 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 13:33:38.285285 kubelet[2835]: E1213 13:33:38.285213 2835 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:33:38.285911 kubelet[2835]: E1213 13:33:38.285898 2835 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:33:38.326642 kubelet[2835]: I1213 13:33:38.326422 2835 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:33:38.326642 kubelet[2835]: I1213 13:33:38.326435 2835 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:33:38.326642 kubelet[2835]: I1213 13:33:38.326444 2835 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:33:38.326642 kubelet[2835]: I1213 13:33:38.326547 2835 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 13:33:38.326642 kubelet[2835]: I1213 13:33:38.326560 2835 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 13:33:38.326642 kubelet[2835]: I1213 13:33:38.326564 2835 policy_none.go:49] "None policy: Start" Dec 13 13:33:38.326847 kubelet[2835]: I1213 13:33:38.326836 2835 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:33:38.326847 kubelet[2835]: I1213 13:33:38.326847 2835 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:33:38.326959 kubelet[2835]: I1213 13:33:38.326930 2835 state_mem.go:75] "Updated machine memory state" Dec 13 13:33:38.330394 kubelet[2835]: I1213 13:33:38.330383 2835 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:33:38.330504 kubelet[2835]: I1213 13:33:38.330493 2835 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:33:38.375708 kubelet[2835]: I1213 13:33:38.375691 2835 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:33:38.379591 kubelet[2835]: I1213 13:33:38.379573 2835 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 13:33:38.379661 kubelet[2835]: I1213 13:33:38.379618 2835 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 13:33:38.385858 kubelet[2835]: I1213 13:33:38.385741 2835 topology_manager.go:215] 
"Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 13:33:38.385858 kubelet[2835]: I1213 13:33:38.385784 2835 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 13:33:38.385858 kubelet[2835]: I1213 13:33:38.385804 2835 topology_manager.go:215] "Topology Admit Handler" podUID="6bfbeb7afbc1a925145e3a2f3a18b1b6" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 13:33:38.389617 kubelet[2835]: E1213 13:33:38.389583 2835 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 13:33:38.473080 kubelet[2835]: I1213 13:33:38.472987 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:38.573411 kubelet[2835]: I1213 13:33:38.573307 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:38.573411 kubelet[2835]: I1213 13:33:38.573386 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" 
Dec 13 13:33:38.573825 kubelet[2835]: I1213 13:33:38.573494 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6bfbeb7afbc1a925145e3a2f3a18b1b6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6bfbeb7afbc1a925145e3a2f3a18b1b6\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:33:38.573825 kubelet[2835]: I1213 13:33:38.573524 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6bfbeb7afbc1a925145e3a2f3a18b1b6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6bfbeb7afbc1a925145e3a2f3a18b1b6\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:33:38.573825 kubelet[2835]: I1213 13:33:38.573550 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:38.573825 kubelet[2835]: I1213 13:33:38.573673 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:38.573825 kubelet[2835]: I1213 13:33:38.573702 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " 
pod="kube-system/kube-scheduler-localhost" Dec 13 13:33:38.573948 kubelet[2835]: I1213 13:33:38.573716 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6bfbeb7afbc1a925145e3a2f3a18b1b6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6bfbeb7afbc1a925145e3a2f3a18b1b6\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:33:39.262437 kubelet[2835]: I1213 13:33:39.262283 2835 apiserver.go:52] "Watching apiserver" Dec 13 13:33:39.273318 kubelet[2835]: I1213 13:33:39.273280 2835 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 13:33:39.350362 kubelet[2835]: E1213 13:33:39.350049 2835 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 13:33:39.384711 kubelet[2835]: I1213 13:33:39.384689 2835 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.384655259 podStartE2EDuration="3.384655259s" podCreationTimestamp="2024-12-13 13:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:39.364085689 +0000 UTC m=+1.166806130" watchObservedRunningTime="2024-12-13 13:33:39.384655259 +0000 UTC m=+1.187375691" Dec 13 13:33:39.416175 kubelet[2835]: I1213 13:33:39.416156 2835 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.41612249 podStartE2EDuration="1.41612249s" podCreationTimestamp="2024-12-13 13:33:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:39.385333608 +0000 UTC m=+1.188054048" watchObservedRunningTime="2024-12-13 13:33:39.41612249 
+0000 UTC m=+1.218842930" Dec 13 13:33:41.986668 sudo[1856]: pam_unix(sudo:session): session closed for user root Dec 13 13:33:41.987358 sshd[1855]: Connection closed by 139.178.89.65 port 57060 Dec 13 13:33:41.987849 sshd-session[1853]: pam_unix(sshd:session): session closed for user core Dec 13 13:33:41.989955 systemd[1]: sshd@6-139.178.70.100:22-139.178.89.65:57060.service: Deactivated successfully. Dec 13 13:33:41.991081 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 13:33:41.991212 systemd[1]: session-9.scope: Consumed 2.960s CPU time, 183.6M memory peak, 0B memory swap peak. Dec 13 13:33:41.991583 systemd-logind[1520]: Session 9 logged out. Waiting for processes to exit. Dec 13 13:33:41.992219 systemd-logind[1520]: Removed session 9. Dec 13 13:33:44.887429 kubelet[2835]: I1213 13:33:44.887367 2835 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.887333357 podStartE2EDuration="6.887333357s" podCreationTimestamp="2024-12-13 13:33:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:39.41702081 +0000 UTC m=+1.219741244" watchObservedRunningTime="2024-12-13 13:33:44.887333357 +0000 UTC m=+6.690053811" Dec 13 13:33:51.322326 kubelet[2835]: I1213 13:33:51.322301 2835 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 13:33:51.322631 kubelet[2835]: I1213 13:33:51.322603 2835 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 13:33:51.322655 containerd[1540]: time="2024-12-13T13:33:51.322513627Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 13:33:52.186029 kubelet[2835]: I1213 13:33:52.185981 2835 topology_manager.go:215] "Topology Admit Handler" podUID="a7ef3a29-3474-4e67-8963-06de7d503098" podNamespace="kube-system" podName="kube-proxy-pvzqx" Dec 13 13:33:52.192484 systemd[1]: Created slice kubepods-besteffort-poda7ef3a29_3474_4e67_8963_06de7d503098.slice - libcontainer container kubepods-besteffort-poda7ef3a29_3474_4e67_8963_06de7d503098.slice. Dec 13 13:33:52.298912 kubelet[2835]: I1213 13:33:52.298427 2835 topology_manager.go:215] "Topology Admit Handler" podUID="8f8c198a-184b-4bd2-8d96-137937326a25" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-rzznq" Dec 13 13:33:52.304106 systemd[1]: Created slice kubepods-besteffort-pod8f8c198a_184b_4bd2_8d96_137937326a25.slice - libcontainer container kubepods-besteffort-pod8f8c198a_184b_4bd2_8d96_137937326a25.slice. Dec 13 13:33:52.348034 kubelet[2835]: I1213 13:33:52.348001 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a7ef3a29-3474-4e67-8963-06de7d503098-kube-proxy\") pod \"kube-proxy-pvzqx\" (UID: \"a7ef3a29-3474-4e67-8963-06de7d503098\") " pod="kube-system/kube-proxy-pvzqx" Dec 13 13:33:52.348034 kubelet[2835]: I1213 13:33:52.348031 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7ef3a29-3474-4e67-8963-06de7d503098-lib-modules\") pod \"kube-proxy-pvzqx\" (UID: \"a7ef3a29-3474-4e67-8963-06de7d503098\") " pod="kube-system/kube-proxy-pvzqx" Dec 13 13:33:52.348317 kubelet[2835]: I1213 13:33:52.348047 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7ef3a29-3474-4e67-8963-06de7d503098-xtables-lock\") pod \"kube-proxy-pvzqx\" (UID: \"a7ef3a29-3474-4e67-8963-06de7d503098\") " pod="kube-system/kube-proxy-pvzqx" Dec 13 
13:33:52.348317 kubelet[2835]: I1213 13:33:52.348060 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82ngs\" (UniqueName: \"kubernetes.io/projected/a7ef3a29-3474-4e67-8963-06de7d503098-kube-api-access-82ngs\") pod \"kube-proxy-pvzqx\" (UID: \"a7ef3a29-3474-4e67-8963-06de7d503098\") " pod="kube-system/kube-proxy-pvzqx" Dec 13 13:33:52.448685 kubelet[2835]: I1213 13:33:52.448477 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8f8c198a-184b-4bd2-8d96-137937326a25-var-lib-calico\") pod \"tigera-operator-c7ccbd65-rzznq\" (UID: \"8f8c198a-184b-4bd2-8d96-137937326a25\") " pod="tigera-operator/tigera-operator-c7ccbd65-rzznq" Dec 13 13:33:52.448685 kubelet[2835]: I1213 13:33:52.448504 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfbxk\" (UniqueName: \"kubernetes.io/projected/8f8c198a-184b-4bd2-8d96-137937326a25-kube-api-access-bfbxk\") pod \"tigera-operator-c7ccbd65-rzznq\" (UID: \"8f8c198a-184b-4bd2-8d96-137937326a25\") " pod="tigera-operator/tigera-operator-c7ccbd65-rzznq" Dec 13 13:33:52.498802 containerd[1540]: time="2024-12-13T13:33:52.498760151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvzqx,Uid:a7ef3a29-3474-4e67-8963-06de7d503098,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:52.510356 containerd[1540]: time="2024-12-13T13:33:52.510310734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:52.510356 containerd[1540]: time="2024-12-13T13:33:52.510338657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:52.510356 containerd[1540]: time="2024-12-13T13:33:52.510345530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:52.510527 containerd[1540]: time="2024-12-13T13:33:52.510381862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:52.526322 systemd[1]: Started cri-containerd-52aca580715a20c9d10d63ce06be1b62595cf3439079edacefd9046d114ad153.scope - libcontainer container 52aca580715a20c9d10d63ce06be1b62595cf3439079edacefd9046d114ad153. Dec 13 13:33:52.539178 containerd[1540]: time="2024-12-13T13:33:52.539154821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvzqx,Uid:a7ef3a29-3474-4e67-8963-06de7d503098,Namespace:kube-system,Attempt:0,} returns sandbox id \"52aca580715a20c9d10d63ce06be1b62595cf3439079edacefd9046d114ad153\"" Dec 13 13:33:52.541864 containerd[1540]: time="2024-12-13T13:33:52.541834538Z" level=info msg="CreateContainer within sandbox \"52aca580715a20c9d10d63ce06be1b62595cf3439079edacefd9046d114ad153\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:33:52.548602 containerd[1540]: time="2024-12-13T13:33:52.548579938Z" level=info msg="CreateContainer within sandbox \"52aca580715a20c9d10d63ce06be1b62595cf3439079edacefd9046d114ad153\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"219c5f060bb378dd36ad234c8fd90762c5393c9bc07c72c467805a84053fadec\"" Dec 13 13:33:52.549360 containerd[1540]: time="2024-12-13T13:33:52.549289738Z" level=info msg="StartContainer for \"219c5f060bb378dd36ad234c8fd90762c5393c9bc07c72c467805a84053fadec\"" Dec 13 13:33:52.568319 systemd[1]: Started cri-containerd-219c5f060bb378dd36ad234c8fd90762c5393c9bc07c72c467805a84053fadec.scope - libcontainer container 
219c5f060bb378dd36ad234c8fd90762c5393c9bc07c72c467805a84053fadec. Dec 13 13:33:52.583354 containerd[1540]: time="2024-12-13T13:33:52.583330117Z" level=info msg="StartContainer for \"219c5f060bb378dd36ad234c8fd90762c5393c9bc07c72c467805a84053fadec\" returns successfully" Dec 13 13:33:52.607668 containerd[1540]: time="2024-12-13T13:33:52.607644960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-rzznq,Uid:8f8c198a-184b-4bd2-8d96-137937326a25,Namespace:tigera-operator,Attempt:0,}" Dec 13 13:33:52.621129 containerd[1540]: time="2024-12-13T13:33:52.620979879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:52.621129 containerd[1540]: time="2024-12-13T13:33:52.621020800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:52.621129 containerd[1540]: time="2024-12-13T13:33:52.621038608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:52.621129 containerd[1540]: time="2024-12-13T13:33:52.621080427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:52.635628 systemd[1]: Started cri-containerd-921e0a094a89d06143b0b606de1f2d44aa4421ae22e98860f5a4dc6fcb360a91.scope - libcontainer container 921e0a094a89d06143b0b606de1f2d44aa4421ae22e98860f5a4dc6fcb360a91. 
Dec 13 13:33:52.661805 containerd[1540]: time="2024-12-13T13:33:52.661517005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-rzznq,Uid:8f8c198a-184b-4bd2-8d96-137937326a25,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"921e0a094a89d06143b0b606de1f2d44aa4421ae22e98860f5a4dc6fcb360a91\""
Dec 13 13:33:52.669117 containerd[1540]: time="2024-12-13T13:33:52.669051528Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Dec 13 13:33:53.362671 kubelet[2835]: I1213 13:33:53.362490 2835 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pvzqx" podStartSLOduration=1.362463117 podStartE2EDuration="1.362463117s" podCreationTimestamp="2024-12-13 13:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:53.36207343 +0000 UTC m=+15.164793871" watchObservedRunningTime="2024-12-13 13:33:53.362463117 +0000 UTC m=+15.165183552"
Dec 13 13:33:54.498718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2525480353.mount: Deactivated successfully.
Dec 13 13:33:54.913262 containerd[1540]: time="2024-12-13T13:33:54.912980882Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:54.919721 containerd[1540]: time="2024-12-13T13:33:54.919671686Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764285"
Dec 13 13:33:54.925382 containerd[1540]: time="2024-12-13T13:33:54.925353088Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:54.934847 containerd[1540]: time="2024-12-13T13:33:54.934813883Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:54.935454 containerd[1540]: time="2024-12-13T13:33:54.935155742Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.266083278s"
Dec 13 13:33:54.935454 containerd[1540]: time="2024-12-13T13:33:54.935174887Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Dec 13 13:33:54.961665 containerd[1540]: time="2024-12-13T13:33:54.961579356Z" level=info msg="CreateContainer within sandbox \"921e0a094a89d06143b0b606de1f2d44aa4421ae22e98860f5a4dc6fcb360a91\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 13 13:33:55.024077 containerd[1540]: time="2024-12-13T13:33:55.024053983Z" level=info msg="CreateContainer within sandbox \"921e0a094a89d06143b0b606de1f2d44aa4421ae22e98860f5a4dc6fcb360a91\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a946a4ee814cd0a3e7b6b1cb0d71869e18a6e2d9c83f1e8d965a816e6344209d\""
Dec 13 13:33:55.035224 containerd[1540]: time="2024-12-13T13:33:55.035205844Z" level=info msg="StartContainer for \"a946a4ee814cd0a3e7b6b1cb0d71869e18a6e2d9c83f1e8d965a816e6344209d\""
Dec 13 13:33:55.053348 systemd[1]: Started cri-containerd-a946a4ee814cd0a3e7b6b1cb0d71869e18a6e2d9c83f1e8d965a816e6344209d.scope - libcontainer container a946a4ee814cd0a3e7b6b1cb0d71869e18a6e2d9c83f1e8d965a816e6344209d.
Dec 13 13:33:55.078781 containerd[1540]: time="2024-12-13T13:33:55.078740183Z" level=info msg="StartContainer for \"a946a4ee814cd0a3e7b6b1cb0d71869e18a6e2d9c83f1e8d965a816e6344209d\" returns successfully"
Dec 13 13:33:57.957317 kubelet[2835]: I1213 13:33:57.957298 2835 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-rzznq" podStartSLOduration=3.684973757 podStartE2EDuration="5.95726961s" podCreationTimestamp="2024-12-13 13:33:52 +0000 UTC" firstStartedPulling="2024-12-13 13:33:52.663099564 +0000 UTC m=+14.465819996" lastFinishedPulling="2024-12-13 13:33:54.935395416 +0000 UTC m=+16.738115849" observedRunningTime="2024-12-13 13:33:55.441272584 +0000 UTC m=+17.243993027" watchObservedRunningTime="2024-12-13 13:33:57.95726961 +0000 UTC m=+19.759990044"
Dec 13 13:33:57.957621 kubelet[2835]: I1213 13:33:57.957379 2835 topology_manager.go:215] "Topology Admit Handler" podUID="b890ded4-8444-4757-b8e1-be4c7f8d48dc" podNamespace="calico-system" podName="calico-typha-77c8876d5c-q4l8h"
Dec 13 13:33:57.973754 systemd[1]: Created slice kubepods-besteffort-podb890ded4_8444_4757_b8e1_be4c7f8d48dc.slice - libcontainer container kubepods-besteffort-podb890ded4_8444_4757_b8e1_be4c7f8d48dc.slice.
Dec 13 13:33:57.991389 kubelet[2835]: I1213 13:33:57.991118 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xbk7\" (UniqueName: \"kubernetes.io/projected/b890ded4-8444-4757-b8e1-be4c7f8d48dc-kube-api-access-9xbk7\") pod \"calico-typha-77c8876d5c-q4l8h\" (UID: \"b890ded4-8444-4757-b8e1-be4c7f8d48dc\") " pod="calico-system/calico-typha-77c8876d5c-q4l8h"
Dec 13 13:33:57.991389 kubelet[2835]: I1213 13:33:57.991151 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b890ded4-8444-4757-b8e1-be4c7f8d48dc-tigera-ca-bundle\") pod \"calico-typha-77c8876d5c-q4l8h\" (UID: \"b890ded4-8444-4757-b8e1-be4c7f8d48dc\") " pod="calico-system/calico-typha-77c8876d5c-q4l8h"
Dec 13 13:33:57.991389 kubelet[2835]: I1213 13:33:57.991165 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b890ded4-8444-4757-b8e1-be4c7f8d48dc-typha-certs\") pod \"calico-typha-77c8876d5c-q4l8h\" (UID: \"b890ded4-8444-4757-b8e1-be4c7f8d48dc\") " pod="calico-system/calico-typha-77c8876d5c-q4l8h"
Dec 13 13:33:58.053333 kubelet[2835]: I1213 13:33:58.053304 2835 topology_manager.go:215] "Topology Admit Handler" podUID="55f59816-94be-4da7-a386-cc3c24c4a3d8" podNamespace="calico-system" podName="calico-node-x4vxm"
Dec 13 13:33:58.059272 systemd[1]: Created slice kubepods-besteffort-pod55f59816_94be_4da7_a386_cc3c24c4a3d8.slice - libcontainer container kubepods-besteffort-pod55f59816_94be_4da7_a386_cc3c24c4a3d8.slice.
Dec 13 13:33:58.092134 kubelet[2835]: I1213 13:33:58.092093 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9d8r\" (UniqueName: \"kubernetes.io/projected/55f59816-94be-4da7-a386-cc3c24c4a3d8-kube-api-access-q9d8r\") pod \"calico-node-x4vxm\" (UID: \"55f59816-94be-4da7-a386-cc3c24c4a3d8\") " pod="calico-system/calico-node-x4vxm"
Dec 13 13:33:58.092230 kubelet[2835]: I1213 13:33:58.092156 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/55f59816-94be-4da7-a386-cc3c24c4a3d8-cni-net-dir\") pod \"calico-node-x4vxm\" (UID: \"55f59816-94be-4da7-a386-cc3c24c4a3d8\") " pod="calico-system/calico-node-x4vxm"
Dec 13 13:33:58.092230 kubelet[2835]: I1213 13:33:58.092180 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/55f59816-94be-4da7-a386-cc3c24c4a3d8-flexvol-driver-host\") pod \"calico-node-x4vxm\" (UID: \"55f59816-94be-4da7-a386-cc3c24c4a3d8\") " pod="calico-system/calico-node-x4vxm"
Dec 13 13:33:58.092230 kubelet[2835]: I1213 13:33:58.092195 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/55f59816-94be-4da7-a386-cc3c24c4a3d8-cni-log-dir\") pod \"calico-node-x4vxm\" (UID: \"55f59816-94be-4da7-a386-cc3c24c4a3d8\") " pod="calico-system/calico-node-x4vxm"
Dec 13 13:33:58.092230 kubelet[2835]: I1213 13:33:58.092209 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/55f59816-94be-4da7-a386-cc3c24c4a3d8-node-certs\") pod \"calico-node-x4vxm\" (UID: \"55f59816-94be-4da7-a386-cc3c24c4a3d8\") " pod="calico-system/calico-node-x4vxm"
Dec 13 13:33:58.092230 kubelet[2835]: I1213 13:33:58.092229 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55f59816-94be-4da7-a386-cc3c24c4a3d8-tigera-ca-bundle\") pod \"calico-node-x4vxm\" (UID: \"55f59816-94be-4da7-a386-cc3c24c4a3d8\") " pod="calico-system/calico-node-x4vxm"
Dec 13 13:33:58.092349 kubelet[2835]: I1213 13:33:58.092258 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/55f59816-94be-4da7-a386-cc3c24c4a3d8-var-run-calico\") pod \"calico-node-x4vxm\" (UID: \"55f59816-94be-4da7-a386-cc3c24c4a3d8\") " pod="calico-system/calico-node-x4vxm"
Dec 13 13:33:58.092349 kubelet[2835]: I1213 13:33:58.092274 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/55f59816-94be-4da7-a386-cc3c24c4a3d8-cni-bin-dir\") pod \"calico-node-x4vxm\" (UID: \"55f59816-94be-4da7-a386-cc3c24c4a3d8\") " pod="calico-system/calico-node-x4vxm"
Dec 13 13:33:58.092349 kubelet[2835]: I1213 13:33:58.092293 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/55f59816-94be-4da7-a386-cc3c24c4a3d8-policysync\") pod \"calico-node-x4vxm\" (UID: \"55f59816-94be-4da7-a386-cc3c24c4a3d8\") " pod="calico-system/calico-node-x4vxm"
Dec 13 13:33:58.092349 kubelet[2835]: I1213 13:33:58.092306 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55f59816-94be-4da7-a386-cc3c24c4a3d8-lib-modules\") pod \"calico-node-x4vxm\" (UID: \"55f59816-94be-4da7-a386-cc3c24c4a3d8\") " pod="calico-system/calico-node-x4vxm"
Dec 13 13:33:58.092349 kubelet[2835]: I1213 13:33:58.092319 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55f59816-94be-4da7-a386-cc3c24c4a3d8-xtables-lock\") pod \"calico-node-x4vxm\" (UID: \"55f59816-94be-4da7-a386-cc3c24c4a3d8\") " pod="calico-system/calico-node-x4vxm"
Dec 13 13:33:58.092453 kubelet[2835]: I1213 13:33:58.092344 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/55f59816-94be-4da7-a386-cc3c24c4a3d8-var-lib-calico\") pod \"calico-node-x4vxm\" (UID: \"55f59816-94be-4da7-a386-cc3c24c4a3d8\") " pod="calico-system/calico-node-x4vxm"
Dec 13 13:33:58.167530 kubelet[2835]: I1213 13:33:58.165815 2835 topology_manager.go:215] "Topology Admit Handler" podUID="6d6a0031-7d0e-4f38-97a2-6db6c2123341" podNamespace="calico-system" podName="csi-node-driver-t28ks"
Dec 13 13:33:58.167530 kubelet[2835]: E1213 13:33:58.167379 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t28ks" podUID="6d6a0031-7d0e-4f38-97a2-6db6c2123341"
Dec 13 13:33:58.193641 kubelet[2835]: I1213 13:33:58.192944 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d6a0031-7d0e-4f38-97a2-6db6c2123341-kubelet-dir\") pod \"csi-node-driver-t28ks\" (UID: \"6d6a0031-7d0e-4f38-97a2-6db6c2123341\") " pod="calico-system/csi-node-driver-t28ks"
Dec 13 13:33:58.193641 kubelet[2835]: I1213 13:33:58.192977 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnz7z\" (UniqueName: \"kubernetes.io/projected/6d6a0031-7d0e-4f38-97a2-6db6c2123341-kube-api-access-dnz7z\") pod \"csi-node-driver-t28ks\" (UID: \"6d6a0031-7d0e-4f38-97a2-6db6c2123341\") " pod="calico-system/csi-node-driver-t28ks"
Dec 13 13:33:58.193641 kubelet[2835]: I1213 13:33:58.193039 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6d6a0031-7d0e-4f38-97a2-6db6c2123341-socket-dir\") pod \"csi-node-driver-t28ks\" (UID: \"6d6a0031-7d0e-4f38-97a2-6db6c2123341\") " pod="calico-system/csi-node-driver-t28ks"
Dec 13 13:33:58.193641 kubelet[2835]: I1213 13:33:58.193070 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6d6a0031-7d0e-4f38-97a2-6db6c2123341-varrun\") pod \"csi-node-driver-t28ks\" (UID: \"6d6a0031-7d0e-4f38-97a2-6db6c2123341\") " pod="calico-system/csi-node-driver-t28ks"
Dec 13 13:33:58.193641 kubelet[2835]: I1213 13:33:58.193099 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6d6a0031-7d0e-4f38-97a2-6db6c2123341-registration-dir\") pod \"csi-node-driver-t28ks\" (UID: \"6d6a0031-7d0e-4f38-97a2-6db6c2123341\") " pod="calico-system/csi-node-driver-t28ks"
Dec 13 13:33:58.198269 kubelet[2835]: E1213 13:33:58.198119 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.198858 kubelet[2835]: W1213 13:33:58.198801 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.199775 kubelet[2835]: E1213 13:33:58.199765 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.200175 kubelet[2835]: E1213 13:33:58.200164 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.200224 kubelet[2835]: W1213 13:33:58.200217 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.200307 kubelet[2835]: E1213 13:33:58.200298 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.201343 kubelet[2835]: E1213 13:33:58.201331 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.201343 kubelet[2835]: W1213 13:33:58.201340 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.201444 kubelet[2835]: E1213 13:33:58.201436 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.201486 kubelet[2835]: W1213 13:33:58.201443 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.201593 kubelet[2835]: E1213 13:33:58.201530 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.201593 kubelet[2835]: W1213 13:33:58.201536 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.201645 kubelet[2835]: E1213 13:33:58.201614 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.201645 kubelet[2835]: W1213 13:33:58.201620 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.201810 kubelet[2835]: E1213 13:33:58.201694 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.201810 kubelet[2835]: W1213 13:33:58.201700 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.201810 kubelet[2835]: E1213 13:33:58.201773 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.201810 kubelet[2835]: W1213 13:33:58.201777 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.201810 kubelet[2835]: E1213 13:33:58.201787 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.201893 kubelet[2835]: E1213 13:33:58.201864 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.201893 kubelet[2835]: W1213 13:33:58.201871 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.201893 kubelet[2835]: E1213 13:33:58.201878 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.202029 kubelet[2835]: E1213 13:33:58.201973 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.202029 kubelet[2835]: W1213 13:33:58.201982 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.202029 kubelet[2835]: E1213 13:33:58.201991 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.203135 kubelet[2835]: E1213 13:33:58.202923 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.204260 kubelet[2835]: E1213 13:33:58.203475 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.204260 kubelet[2835]: W1213 13:33:58.203492 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.204260 kubelet[2835]: E1213 13:33:58.203507 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.204464 kubelet[2835]: E1213 13:33:58.204277 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.204464 kubelet[2835]: E1213 13:33:58.204432 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.204464 kubelet[2835]: W1213 13:33:58.204438 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.204464 kubelet[2835]: E1213 13:33:58.204446 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.205591 kubelet[2835]: E1213 13:33:58.205356 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.205591 kubelet[2835]: E1213 13:33:58.205364 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.205591 kubelet[2835]: W1213 13:33:58.205460 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.205591 kubelet[2835]: E1213 13:33:58.205471 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.205886 kubelet[2835]: E1213 13:33:58.205778 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.206139 kubelet[2835]: E1213 13:33:58.206117 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.206330 kubelet[2835]: E1213 13:33:58.206323 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.206475 kubelet[2835]: W1213 13:33:58.206367 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.206475 kubelet[2835]: E1213 13:33:58.206385 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.210075 kubelet[2835]: E1213 13:33:58.209663 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.210075 kubelet[2835]: W1213 13:33:58.209677 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.210075 kubelet[2835]: E1213 13:33:58.209699 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.210075 kubelet[2835]: E1213 13:33:58.209867 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.210075 kubelet[2835]: W1213 13:33:58.209873 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.210075 kubelet[2835]: E1213 13:33:58.209881 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.211817 kubelet[2835]: E1213 13:33:58.211707 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.211817 kubelet[2835]: W1213 13:33:58.211740 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.211817 kubelet[2835]: E1213 13:33:58.211754 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.212381 kubelet[2835]: E1213 13:33:58.212321 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.212381 kubelet[2835]: W1213 13:33:58.212329 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.212381 kubelet[2835]: E1213 13:33:58.212338 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.212520 kubelet[2835]: E1213 13:33:58.212514 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.212560 kubelet[2835]: W1213 13:33:58.212555 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.212595 kubelet[2835]: E1213 13:33:58.212590 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.215476 kubelet[2835]: E1213 13:33:58.215328 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.215566 kubelet[2835]: W1213 13:33:58.215552 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.216343 kubelet[2835]: E1213 13:33:58.216265 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.216766 kubelet[2835]: E1213 13:33:58.216729 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.216766 kubelet[2835]: W1213 13:33:58.216737 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.216766 kubelet[2835]: E1213 13:33:58.216748 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.277647 containerd[1540]: time="2024-12-13T13:33:58.277583425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77c8876d5c-q4l8h,Uid:b890ded4-8444-4757-b8e1-be4c7f8d48dc,Namespace:calico-system,Attempt:0,}"
Dec 13 13:33:58.294076 kubelet[2835]: E1213 13:33:58.294051 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.294601 kubelet[2835]: W1213 13:33:58.294275 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.294601 kubelet[2835]: E1213 13:33:58.294583 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.295057 kubelet[2835]: E1213 13:33:58.294907 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.295057 kubelet[2835]: W1213 13:33:58.294914 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.295376 kubelet[2835]: E1213 13:33:58.295368 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.295824 kubelet[2835]: E1213 13:33:58.295798 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.295824 kubelet[2835]: W1213 13:33:58.295808 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.295824 kubelet[2835]: E1213 13:33:58.295821 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.296145 kubelet[2835]: E1213 13:33:58.296081 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.296145 kubelet[2835]: W1213 13:33:58.296089 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.296145 kubelet[2835]: E1213 13:33:58.296101 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.296604 kubelet[2835]: E1213 13:33:58.296485 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.296604 kubelet[2835]: W1213 13:33:58.296492 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.296604 kubelet[2835]: E1213 13:33:58.296504 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.296853 kubelet[2835]: E1213 13:33:58.296738 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.296853 kubelet[2835]: W1213 13:33:58.296758 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.296853 kubelet[2835]: E1213 13:33:58.296770 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.297058 kubelet[2835]: E1213 13:33:58.296997 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.297058 kubelet[2835]: W1213 13:33:58.297003 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.297106 kubelet[2835]: E1213 13:33:58.297100 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.297313 kubelet[2835]: E1213 13:33:58.297229 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.297345 kubelet[2835]: W1213 13:33:58.297337 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.297785 kubelet[2835]: E1213 13:33:58.297411 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.297785 kubelet[2835]: E1213 13:33:58.297503 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.297785 kubelet[2835]: W1213 13:33:58.297508 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.297785 kubelet[2835]: E1213 13:33:58.297517 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.297785 kubelet[2835]: E1213 13:33:58.297634 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.297785 kubelet[2835]: W1213 13:33:58.297639 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.297785 kubelet[2835]: E1213 13:33:58.297647 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:33:58.297785 kubelet[2835]: E1213 13:33:58.297775 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:33:58.297923 kubelet[2835]: W1213 13:33:58.297781 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:33:58.297923 kubelet[2835]: E1213 13:33:58.297799 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:33:58.297923 kubelet[2835]: E1213 13:33:58.297892 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:58.297923 kubelet[2835]: W1213 13:33:58.297896 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:58.297923 kubelet[2835]: E1213 13:33:58.297902 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:33:58.298439 kubelet[2835]: E1213 13:33:58.298201 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:58.298439 kubelet[2835]: W1213 13:33:58.298208 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:58.298439 kubelet[2835]: E1213 13:33:58.298217 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:33:58.298606 kubelet[2835]: E1213 13:33:58.298580 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:58.298606 kubelet[2835]: W1213 13:33:58.298587 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:58.299018 kubelet[2835]: E1213 13:33:58.298664 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:33:58.299018 kubelet[2835]: E1213 13:33:58.298857 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:58.299018 kubelet[2835]: W1213 13:33:58.298862 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:58.299018 kubelet[2835]: E1213 13:33:58.298917 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:33:58.299090 kubelet[2835]: E1213 13:33:58.299085 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:58.299121 kubelet[2835]: W1213 13:33:58.299090 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:58.299405 kubelet[2835]: E1213 13:33:58.299331 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:33:58.299713 kubelet[2835]: E1213 13:33:58.299701 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:58.299713 kubelet[2835]: W1213 13:33:58.299709 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:58.299842 kubelet[2835]: E1213 13:33:58.299741 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:33:58.299842 kubelet[2835]: E1213 13:33:58.299808 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:58.299842 kubelet[2835]: W1213 13:33:58.299812 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:58.300183 kubelet[2835]: E1213 13:33:58.299966 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:33:58.300183 kubelet[2835]: E1213 13:33:58.300080 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:58.300183 kubelet[2835]: W1213 13:33:58.300095 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:58.300183 kubelet[2835]: E1213 13:33:58.300105 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:33:58.300293 containerd[1540]: time="2024-12-13T13:33:58.299374286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:58.300293 containerd[1540]: time="2024-12-13T13:33:58.299484339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:58.300293 containerd[1540]: time="2024-12-13T13:33:58.299496626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:58.300293 containerd[1540]: time="2024-12-13T13:33:58.299557368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:58.300511 kubelet[2835]: E1213 13:33:58.300495 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:58.300511 kubelet[2835]: W1213 13:33:58.300507 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:58.300565 kubelet[2835]: E1213 13:33:58.300518 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:33:58.300652 kubelet[2835]: E1213 13:33:58.300643 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:58.300652 kubelet[2835]: W1213 13:33:58.300647 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:58.300741 kubelet[2835]: E1213 13:33:58.300656 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:33:58.301056 kubelet[2835]: E1213 13:33:58.301042 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:58.301056 kubelet[2835]: W1213 13:33:58.301051 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:58.301298 kubelet[2835]: E1213 13:33:58.301117 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:33:58.301298 kubelet[2835]: E1213 13:33:58.301148 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:58.301298 kubelet[2835]: W1213 13:33:58.301153 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:58.301298 kubelet[2835]: E1213 13:33:58.301159 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:33:58.301298 kubelet[2835]: E1213 13:33:58.301294 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:58.301298 kubelet[2835]: W1213 13:33:58.301299 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:58.301426 kubelet[2835]: E1213 13:33:58.301305 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:33:58.302771 kubelet[2835]: E1213 13:33:58.301523 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:58.302771 kubelet[2835]: W1213 13:33:58.301530 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:58.302771 kubelet[2835]: E1213 13:33:58.301537 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:33:58.308610 kubelet[2835]: E1213 13:33:58.308346 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:58.308610 kubelet[2835]: W1213 13:33:58.308358 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:58.308610 kubelet[2835]: E1213 13:33:58.308371 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:33:58.321405 systemd[1]: Started cri-containerd-02f75777f1e5b5580aac398a66ef58d2fb60be13d1c3df8ff119aebddcfc605e.scope - libcontainer container 02f75777f1e5b5580aac398a66ef58d2fb60be13d1c3df8ff119aebddcfc605e. Dec 13 13:33:58.350502 containerd[1540]: time="2024-12-13T13:33:58.350477091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77c8876d5c-q4l8h,Uid:b890ded4-8444-4757-b8e1-be4c7f8d48dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"02f75777f1e5b5580aac398a66ef58d2fb60be13d1c3df8ff119aebddcfc605e\"" Dec 13 13:33:58.351497 containerd[1540]: time="2024-12-13T13:33:58.351475833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 13:33:58.362559 containerd[1540]: time="2024-12-13T13:33:58.362281794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x4vxm,Uid:55f59816-94be-4da7-a386-cc3c24c4a3d8,Namespace:calico-system,Attempt:0,}" Dec 13 13:33:58.398261 containerd[1540]: time="2024-12-13T13:33:58.398201071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:58.398401 containerd[1540]: time="2024-12-13T13:33:58.398350007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:58.398401 containerd[1540]: time="2024-12-13T13:33:58.398367308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:58.398804 containerd[1540]: time="2024-12-13T13:33:58.398629464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:58.411334 systemd[1]: Started cri-containerd-0b3768967d0d4fcfda53542b9d3f73f0562e05ced3d927c533f6bfd56cb27159.scope - libcontainer container 0b3768967d0d4fcfda53542b9d3f73f0562e05ced3d927c533f6bfd56cb27159. Dec 13 13:33:58.425654 containerd[1540]: time="2024-12-13T13:33:58.425606209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x4vxm,Uid:55f59816-94be-4da7-a386-cc3c24c4a3d8,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b3768967d0d4fcfda53542b9d3f73f0562e05ced3d927c533f6bfd56cb27159\"" Dec 13 13:33:59.762431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount677049885.mount: Deactivated successfully. 
Dec 13 13:34:00.287251 kubelet[2835]: E1213 13:34:00.286338 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t28ks" podUID="6d6a0031-7d0e-4f38-97a2-6db6c2123341"
Dec 13 13:34:00.299112 containerd[1540]: time="2024-12-13T13:34:00.299056148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:34:00.299820 containerd[1540]: time="2024-12-13T13:34:00.299693259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Dec 13 13:34:00.300793 containerd[1540]: time="2024-12-13T13:34:00.300216263Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:34:00.302322 containerd[1540]: time="2024-12-13T13:34:00.302020389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:34:00.303196 containerd[1540]: time="2024-12-13T13:34:00.303171339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.95165548s"
Dec 13 13:34:00.303302 containerd[1540]: time="2024-12-13T13:34:00.303289344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Dec 13 13:34:00.304225 containerd[1540]: time="2024-12-13T13:34:00.304056472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 13:34:00.314251 containerd[1540]: time="2024-12-13T13:34:00.314176516Z" level=info msg="CreateContainer within sandbox \"02f75777f1e5b5580aac398a66ef58d2fb60be13d1c3df8ff119aebddcfc605e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 13 13:34:00.333092 containerd[1540]: time="2024-12-13T13:34:00.333067254Z" level=info msg="CreateContainer within sandbox \"02f75777f1e5b5580aac398a66ef58d2fb60be13d1c3df8ff119aebddcfc605e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ef3ee3bf36e2b4c951f1a9f5a36c5550ab4c930eca5ae99026158a9e1a35bd17\""
Dec 13 13:34:00.333530 containerd[1540]: time="2024-12-13T13:34:00.333512915Z" level=info msg="StartContainer for \"ef3ee3bf36e2b4c951f1a9f5a36c5550ab4c930eca5ae99026158a9e1a35bd17\""
Dec 13 13:34:00.376327 systemd[1]: Started cri-containerd-ef3ee3bf36e2b4c951f1a9f5a36c5550ab4c930eca5ae99026158a9e1a35bd17.scope - libcontainer container ef3ee3bf36e2b4c951f1a9f5a36c5550ab4c930eca5ae99026158a9e1a35bd17.
Dec 13 13:34:00.404899 containerd[1540]: time="2024-12-13T13:34:00.404443457Z" level=info msg="StartContainer for \"ef3ee3bf36e2b4c951f1a9f5a36c5550ab4c930eca5ae99026158a9e1a35bd17\" returns successfully"
Dec 13 13:34:01.380464 kubelet[2835]: I1213 13:34:01.380377 2835 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-77c8876d5c-q4l8h" podStartSLOduration=2.427831062 podStartE2EDuration="4.379994061s" podCreationTimestamp="2024-12-13 13:33:57 +0000 UTC" firstStartedPulling="2024-12-13 13:33:58.351342349 +0000 UTC m=+20.154062781" lastFinishedPulling="2024-12-13 13:34:00.303505342 +0000 UTC m=+22.106225780" observedRunningTime="2024-12-13 13:34:01.379478633 +0000 UTC m=+23.182199082" watchObservedRunningTime="2024-12-13 13:34:01.379994061 +0000 UTC m=+23.182714509"
Dec 13 13:34:01.405440 kubelet[2835]: E1213 13:34:01.405416 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.405440 kubelet[2835]: W1213 13:34:01.405435 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.406050 kubelet[2835]: E1213 13:34:01.405450 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.406050 kubelet[2835]: E1213 13:34:01.405576 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.406050 kubelet[2835]: W1213 13:34:01.405582 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.406050 kubelet[2835]: E1213 13:34:01.405591 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.406050 kubelet[2835]: E1213 13:34:01.405692 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.406050 kubelet[2835]: W1213 13:34:01.405697 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.406050 kubelet[2835]: E1213 13:34:01.405705 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.406050 kubelet[2835]: E1213 13:34:01.405828 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.406050 kubelet[2835]: W1213 13:34:01.405837 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.406050 kubelet[2835]: E1213 13:34:01.405850 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.406891 kubelet[2835]: E1213 13:34:01.405977 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.406891 kubelet[2835]: W1213 13:34:01.405983 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.406891 kubelet[2835]: E1213 13:34:01.405991 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.406891 kubelet[2835]: E1213 13:34:01.406092 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.406891 kubelet[2835]: W1213 13:34:01.406098 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.406891 kubelet[2835]: E1213 13:34:01.406112 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.406891 kubelet[2835]: E1213 13:34:01.406245 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.406891 kubelet[2835]: W1213 13:34:01.406254 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.406891 kubelet[2835]: E1213 13:34:01.406262 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.406891 kubelet[2835]: E1213 13:34:01.406383 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.407294 kubelet[2835]: W1213 13:34:01.406389 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.407294 kubelet[2835]: E1213 13:34:01.406397 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.407294 kubelet[2835]: E1213 13:34:01.406519 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.407294 kubelet[2835]: W1213 13:34:01.406526 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.407294 kubelet[2835]: E1213 13:34:01.406534 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.407294 kubelet[2835]: E1213 13:34:01.406674 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.407294 kubelet[2835]: W1213 13:34:01.406679 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.407294 kubelet[2835]: E1213 13:34:01.406687 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.407294 kubelet[2835]: E1213 13:34:01.406791 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.407294 kubelet[2835]: W1213 13:34:01.406796 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.407547 kubelet[2835]: E1213 13:34:01.406804 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.407547 kubelet[2835]: E1213 13:34:01.406922 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.407547 kubelet[2835]: W1213 13:34:01.406928 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.407547 kubelet[2835]: E1213 13:34:01.406936 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.407547 kubelet[2835]: E1213 13:34:01.407041 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.407547 kubelet[2835]: W1213 13:34:01.407049 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.407547 kubelet[2835]: E1213 13:34:01.407057 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.407547 kubelet[2835]: E1213 13:34:01.407173 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.407547 kubelet[2835]: W1213 13:34:01.407180 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.407547 kubelet[2835]: E1213 13:34:01.407188 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.407759 kubelet[2835]: E1213 13:34:01.407302 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.407759 kubelet[2835]: W1213 13:34:01.407308 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.407759 kubelet[2835]: E1213 13:34:01.407322 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.420638 kubelet[2835]: E1213 13:34:01.420575 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.420638 kubelet[2835]: W1213 13:34:01.420587 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.420638 kubelet[2835]: E1213 13:34:01.420598 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.420802 kubelet[2835]: E1213 13:34:01.420787 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.420836 kubelet[2835]: W1213 13:34:01.420806 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.420836 kubelet[2835]: E1213 13:34:01.420822 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.420994 kubelet[2835]: E1213 13:34:01.420980 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.420994 kubelet[2835]: W1213 13:34:01.420987 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.421055 kubelet[2835]: E1213 13:34:01.420996 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.421885 kubelet[2835]: E1213 13:34:01.421194 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.421885 kubelet[2835]: W1213 13:34:01.421202 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.421885 kubelet[2835]: E1213 13:34:01.421222 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.421885 kubelet[2835]: E1213 13:34:01.421412 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.421885 kubelet[2835]: W1213 13:34:01.421419 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.421885 kubelet[2835]: E1213 13:34:01.421428 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.421885 kubelet[2835]: E1213 13:34:01.421543 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.421885 kubelet[2835]: W1213 13:34:01.421549 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.421885 kubelet[2835]: E1213 13:34:01.421558 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.421885 kubelet[2835]: E1213 13:34:01.421754 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.422414 kubelet[2835]: W1213 13:34:01.421760 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.422414 kubelet[2835]: E1213 13:34:01.421769 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:34:01.422414 kubelet[2835]: E1213 13:34:01.422136 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:34:01.422414 kubelet[2835]: W1213 13:34:01.422148 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:34:01.422414 kubelet[2835]: E1213 13:34:01.422179 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:34:01.422545 kubelet[2835]: E1213 13:34:01.422476 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:01.422545 kubelet[2835]: W1213 13:34:01.422483 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:01.422545 kubelet[2835]: E1213 13:34:01.422493 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:01.422642 kubelet[2835]: E1213 13:34:01.422628 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:01.422642 kubelet[2835]: W1213 13:34:01.422638 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:01.422704 kubelet[2835]: E1213 13:34:01.422647 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:34:01.422768 kubelet[2835]: E1213 13:34:01.422752 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:01.422768 kubelet[2835]: W1213 13:34:01.422760 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:01.422833 kubelet[2835]: E1213 13:34:01.422775 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:01.422951 kubelet[2835]: E1213 13:34:01.422938 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:01.422951 kubelet[2835]: W1213 13:34:01.422949 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:01.423009 kubelet[2835]: E1213 13:34:01.422961 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:34:01.423429 kubelet[2835]: E1213 13:34:01.423270 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:01.423429 kubelet[2835]: W1213 13:34:01.423289 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:01.423429 kubelet[2835]: E1213 13:34:01.423315 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:01.423632 kubelet[2835]: E1213 13:34:01.423611 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:01.423632 kubelet[2835]: W1213 13:34:01.423617 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:01.423751 kubelet[2835]: E1213 13:34:01.423698 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:34:01.423940 kubelet[2835]: E1213 13:34:01.423899 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:01.423940 kubelet[2835]: W1213 13:34:01.423907 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:01.423940 kubelet[2835]: E1213 13:34:01.423923 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:01.424044 kubelet[2835]: E1213 13:34:01.424031 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:01.424044 kubelet[2835]: W1213 13:34:01.424040 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:01.424118 kubelet[2835]: E1213 13:34:01.424053 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:34:01.424193 kubelet[2835]: E1213 13:34:01.424180 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:01.424193 kubelet[2835]: W1213 13:34:01.424190 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:01.424258 kubelet[2835]: E1213 13:34:01.424199 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:01.424501 kubelet[2835]: E1213 13:34:01.424486 2835 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:01.424501 kubelet[2835]: W1213 13:34:01.424497 2835 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:01.424557 kubelet[2835]: E1213 13:34:01.424507 2835 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
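The run of kubelet errors above all traces to one missing file: the FlexVolume prober finds the plugin directory `nodeagent~uds`, but its driver binary cannot be executed, so every `init` call returns empty output that then fails JSON unmarshalling. A minimal sketch of that existence check (the path is taken from the log; the helper name is hypothetical, not a kubelet API):

```python
import os

def flexvolume_driver_ok(path: str) -> bool:
    """Return True when the FlexVolume driver binary exists and is executable,
    mirroring the condition whose failure kubelet reports above as
    'executable file not found in $PATH'."""
    return os.access(path, os.X_OK)

# Driver path reported in the kubelet messages above.
driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
if not flexvolume_driver_ok(driver):
    print("driver missing or not executable; kubelet will keep logging "
          "'executable file not found in $PATH' and empty-output JSON errors")
```

On this node the fix is to install (or stop shipping) the `uds` binary in that plugin directory; the probe repeats until the directory no longer looks like a FlexVolume plugin or the binary appears.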
Error: unexpected end of JSON input" Dec 13 13:34:01.585229 containerd[1540]: time="2024-12-13T13:34:01.584362628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:01.585229 containerd[1540]: time="2024-12-13T13:34:01.584776281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Dec 13 13:34:01.585229 containerd[1540]: time="2024-12-13T13:34:01.584813399Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:01.586427 containerd[1540]: time="2024-12-13T13:34:01.586413515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:01.587079 containerd[1540]: time="2024-12-13T13:34:01.586818921Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.28273647s" Dec 13 13:34:01.587079 containerd[1540]: time="2024-12-13T13:34:01.586837398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 13:34:01.588000 containerd[1540]: time="2024-12-13T13:34:01.587969332Z" level=info msg="CreateContainer within sandbox \"0b3768967d0d4fcfda53542b9d3f73f0562e05ced3d927c533f6bfd56cb27159\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 13:34:01.594698 containerd[1540]: time="2024-12-13T13:34:01.594615212Z" level=info msg="CreateContainer within sandbox \"0b3768967d0d4fcfda53542b9d3f73f0562e05ced3d927c533f6bfd56cb27159\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"62f832ec272b8e817d2bfd145454b43ee0313ce6905b1f477ba1e410d7b75f7c\"" Dec 13 13:34:01.595013 containerd[1540]: time="2024-12-13T13:34:01.594997303Z" level=info msg="StartContainer for \"62f832ec272b8e817d2bfd145454b43ee0313ce6905b1f477ba1e410d7b75f7c\"" Dec 13 13:34:01.617388 systemd[1]: Started cri-containerd-62f832ec272b8e817d2bfd145454b43ee0313ce6905b1f477ba1e410d7b75f7c.scope - libcontainer container 62f832ec272b8e817d2bfd145454b43ee0313ce6905b1f477ba1e410d7b75f7c. Dec 13 13:34:01.634563 containerd[1540]: time="2024-12-13T13:34:01.634459634Z" level=info msg="StartContainer for \"62f832ec272b8e817d2bfd145454b43ee0313ce6905b1f477ba1e410d7b75f7c\" returns successfully" Dec 13 13:34:01.644591 systemd[1]: cri-containerd-62f832ec272b8e817d2bfd145454b43ee0313ce6905b1f477ba1e410d7b75f7c.scope: Deactivated successfully. Dec 13 13:34:01.659045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62f832ec272b8e817d2bfd145454b43ee0313ce6905b1f477ba1e410d7b75f7c-rootfs.mount: Deactivated successfully. 
Dec 13 13:34:02.126793 containerd[1540]: time="2024-12-13T13:34:02.116150874Z" level=info msg="shim disconnected" id=62f832ec272b8e817d2bfd145454b43ee0313ce6905b1f477ba1e410d7b75f7c namespace=k8s.io Dec 13 13:34:02.126793 containerd[1540]: time="2024-12-13T13:34:02.126792540Z" level=warning msg="cleaning up after shim disconnected" id=62f832ec272b8e817d2bfd145454b43ee0313ce6905b1f477ba1e410d7b75f7c namespace=k8s.io Dec 13 13:34:02.127550 containerd[1540]: time="2024-12-13T13:34:02.126802401Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:34:02.286612 kubelet[2835]: E1213 13:34:02.286409 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t28ks" podUID="6d6a0031-7d0e-4f38-97a2-6db6c2123341" Dec 13 13:34:02.374989 kubelet[2835]: I1213 13:34:02.374914 2835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:34:02.375942 containerd[1540]: time="2024-12-13T13:34:02.375922624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 13:34:04.286126 kubelet[2835]: E1213 13:34:04.285929 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t28ks" podUID="6d6a0031-7d0e-4f38-97a2-6db6c2123341" Dec 13 13:34:06.286396 kubelet[2835]: E1213 13:34:06.286160 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t28ks" podUID="6d6a0031-7d0e-4f38-97a2-6db6c2123341" Dec 13 
13:34:06.337759 containerd[1540]: time="2024-12-13T13:34:06.337519690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:06.338563 containerd[1540]: time="2024-12-13T13:34:06.338108068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 13:34:06.338563 containerd[1540]: time="2024-12-13T13:34:06.338150291Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:06.339644 containerd[1540]: time="2024-12-13T13:34:06.339605887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:06.340742 containerd[1540]: time="2024-12-13T13:34:06.340375399Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.964427651s" Dec 13 13:34:06.340742 containerd[1540]: time="2024-12-13T13:34:06.340398844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 13:34:06.342490 containerd[1540]: time="2024-12-13T13:34:06.342378781Z" level=info msg="CreateContainer within sandbox \"0b3768967d0d4fcfda53542b9d3f73f0562e05ced3d927c533f6bfd56cb27159\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 13:34:06.351278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount393603451.mount: Deactivated 
successfully. Dec 13 13:34:06.359708 containerd[1540]: time="2024-12-13T13:34:06.359674553Z" level=info msg="CreateContainer within sandbox \"0b3768967d0d4fcfda53542b9d3f73f0562e05ced3d927c533f6bfd56cb27159\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"737790f34d3f70f8f376f0c7f8268c4699bddfbf77330741daad7e1b5df3f139\"" Dec 13 13:34:06.361193 containerd[1540]: time="2024-12-13T13:34:06.360155332Z" level=info msg="StartContainer for \"737790f34d3f70f8f376f0c7f8268c4699bddfbf77330741daad7e1b5df3f139\"" Dec 13 13:34:06.420370 systemd[1]: Started cri-containerd-737790f34d3f70f8f376f0c7f8268c4699bddfbf77330741daad7e1b5df3f139.scope - libcontainer container 737790f34d3f70f8f376f0c7f8268c4699bddfbf77330741daad7e1b5df3f139. Dec 13 13:34:06.437855 containerd[1540]: time="2024-12-13T13:34:06.437742315Z" level=info msg="StartContainer for \"737790f34d3f70f8f376f0c7f8268c4699bddfbf77330741daad7e1b5df3f139\" returns successfully" Dec 13 13:34:07.676874 systemd[1]: cri-containerd-737790f34d3f70f8f376f0c7f8268c4699bddfbf77330741daad7e1b5df3f139.scope: Deactivated successfully. Dec 13 13:34:07.694352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-737790f34d3f70f8f376f0c7f8268c4699bddfbf77330741daad7e1b5df3f139-rootfs.mount: Deactivated successfully. 
Dec 13 13:34:07.741483 kubelet[2835]: I1213 13:34:07.741427 2835 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 13:34:07.850205 kubelet[2835]: I1213 13:34:07.849563 2835 topology_manager.go:215] "Topology Admit Handler" podUID="95ca7197-6257-4f74-a945-b91e5b94e808" podNamespace="kube-system" podName="coredns-76f75df574-vvl2m" Dec 13 13:34:07.855233 kubelet[2835]: I1213 13:34:07.854507 2835 topology_manager.go:215] "Topology Admit Handler" podUID="2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3" podNamespace="calico-apiserver" podName="calico-apiserver-55476b5df-rctdv" Dec 13 13:34:07.858662 systemd[1]: Created slice kubepods-burstable-pod95ca7197_6257_4f74_a945_b91e5b94e808.slice - libcontainer container kubepods-burstable-pod95ca7197_6257_4f74_a945_b91e5b94e808.slice. Dec 13 13:34:07.859306 containerd[1540]: time="2024-12-13T13:34:07.859251840Z" level=info msg="shim disconnected" id=737790f34d3f70f8f376f0c7f8268c4699bddfbf77330741daad7e1b5df3f139 namespace=k8s.io Dec 13 13:34:07.859306 containerd[1540]: time="2024-12-13T13:34:07.859285675Z" level=warning msg="cleaning up after shim disconnected" id=737790f34d3f70f8f376f0c7f8268c4699bddfbf77330741daad7e1b5df3f139 namespace=k8s.io Dec 13 13:34:07.859306 containerd[1540]: time="2024-12-13T13:34:07.859291551Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:34:07.864215 systemd[1]: Created slice kubepods-besteffort-pod2b3c7f26_9f5a_42bd_b1a3_f7dc58c816f3.slice - libcontainer container kubepods-besteffort-pod2b3c7f26_9f5a_42bd_b1a3_f7dc58c816f3.slice. 
Dec 13 13:34:07.866755 kubelet[2835]: I1213 13:34:07.866717 2835 topology_manager.go:215] "Topology Admit Handler" podUID="1a917722-7626-420c-b334-c54df1962ff7" podNamespace="calico-system" podName="calico-kube-controllers-74874996db-hwhtg" Dec 13 13:34:07.868018 kubelet[2835]: I1213 13:34:07.868007 2835 topology_manager.go:215] "Topology Admit Handler" podUID="044a2624-b2b5-4815-a892-56a4b4e7678a" podNamespace="kube-system" podName="coredns-76f75df574-57pkw" Dec 13 13:34:07.870601 kubelet[2835]: I1213 13:34:07.870586 2835 topology_manager.go:215] "Topology Admit Handler" podUID="594d5ede-fccb-4fa8-b157-5696299be69a" podNamespace="calico-apiserver" podName="calico-apiserver-55476b5df-txx74" Dec 13 13:34:07.873969 systemd[1]: Created slice kubepods-besteffort-pod1a917722_7626_420c_b334_c54df1962ff7.slice - libcontainer container kubepods-besteffort-pod1a917722_7626_420c_b334_c54df1962ff7.slice. Dec 13 13:34:07.880998 systemd[1]: Created slice kubepods-burstable-pod044a2624_b2b5_4815_a892_56a4b4e7678a.slice - libcontainer container kubepods-burstable-pod044a2624_b2b5_4815_a892_56a4b4e7678a.slice. Dec 13 13:34:07.887365 systemd[1]: Created slice kubepods-besteffort-pod594d5ede_fccb_4fa8_b157_5696299be69a.slice - libcontainer container kubepods-besteffort-pod594d5ede_fccb_4fa8_b157_5696299be69a.slice. 
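The `Created slice` lines above follow kubelet's systemd cgroup naming scheme: the pod UID has its hyphens replaced with underscores and is prefixed with the pod's QoS class. A small sketch reproducing the slice names seen in this log (the function name is illustrative, not a kubelet API):

```python
def pod_slice(qos_class: str, pod_uid: str) -> str:
    """Build the systemd slice name kubelet's systemd cgroup driver uses for
    a pod: hyphens in the pod UID become underscores, prefixed with the QoS
    class (burstable / besteffort / guaranteed)."""
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

# Matches the slice created above for coredns-76f75df574-vvl2m.
print(pod_slice("burstable", "95ca7197-6257-4f74-a945-b91e5b94e808"))
```

This is why the same UID appears in two spellings in the log: hyphenated in kubelet's pod metadata, underscored in the systemd unit names.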
Dec 13 13:34:07.963171 kubelet[2835]: I1213 13:34:07.963083 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z59d7\" (UniqueName: \"kubernetes.io/projected/2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3-kube-api-access-z59d7\") pod \"calico-apiserver-55476b5df-rctdv\" (UID: \"2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3\") " pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" Dec 13 13:34:07.963171 kubelet[2835]: I1213 13:34:07.963149 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmr95\" (UniqueName: \"kubernetes.io/projected/1a917722-7626-420c-b334-c54df1962ff7-kube-api-access-bmr95\") pod \"calico-kube-controllers-74874996db-hwhtg\" (UID: \"1a917722-7626-420c-b334-c54df1962ff7\") " pod="calico-system/calico-kube-controllers-74874996db-hwhtg" Dec 13 13:34:07.963412 kubelet[2835]: I1213 13:34:07.963185 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a917722-7626-420c-b334-c54df1962ff7-tigera-ca-bundle\") pod \"calico-kube-controllers-74874996db-hwhtg\" (UID: \"1a917722-7626-420c-b334-c54df1962ff7\") " pod="calico-system/calico-kube-controllers-74874996db-hwhtg" Dec 13 13:34:07.963412 kubelet[2835]: I1213 13:34:07.963201 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkl75\" (UniqueName: \"kubernetes.io/projected/044a2624-b2b5-4815-a892-56a4b4e7678a-kube-api-access-kkl75\") pod \"coredns-76f75df574-57pkw\" (UID: \"044a2624-b2b5-4815-a892-56a4b4e7678a\") " pod="kube-system/coredns-76f75df574-57pkw" Dec 13 13:34:07.963412 kubelet[2835]: I1213 13:34:07.963215 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/594d5ede-fccb-4fa8-b157-5696299be69a-calico-apiserver-certs\") pod \"calico-apiserver-55476b5df-txx74\" (UID: \"594d5ede-fccb-4fa8-b157-5696299be69a\") " pod="calico-apiserver/calico-apiserver-55476b5df-txx74" Dec 13 13:34:07.963412 kubelet[2835]: I1213 13:34:07.963230 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lffp5\" (UniqueName: \"kubernetes.io/projected/594d5ede-fccb-4fa8-b157-5696299be69a-kube-api-access-lffp5\") pod \"calico-apiserver-55476b5df-txx74\" (UID: \"594d5ede-fccb-4fa8-b157-5696299be69a\") " pod="calico-apiserver/calico-apiserver-55476b5df-txx74" Dec 13 13:34:07.963412 kubelet[2835]: I1213 13:34:07.963286 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95ca7197-6257-4f74-a945-b91e5b94e808-config-volume\") pod \"coredns-76f75df574-vvl2m\" (UID: \"95ca7197-6257-4f74-a945-b91e5b94e808\") " pod="kube-system/coredns-76f75df574-vvl2m" Dec 13 13:34:07.963507 kubelet[2835]: I1213 13:34:07.963307 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwcrx\" (UniqueName: \"kubernetes.io/projected/95ca7197-6257-4f74-a945-b91e5b94e808-kube-api-access-cwcrx\") pod \"coredns-76f75df574-vvl2m\" (UID: \"95ca7197-6257-4f74-a945-b91e5b94e808\") " pod="kube-system/coredns-76f75df574-vvl2m" Dec 13 13:34:07.963507 kubelet[2835]: I1213 13:34:07.963325 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/044a2624-b2b5-4815-a892-56a4b4e7678a-config-volume\") pod \"coredns-76f75df574-57pkw\" (UID: \"044a2624-b2b5-4815-a892-56a4b4e7678a\") " pod="kube-system/coredns-76f75df574-57pkw" Dec 13 13:34:07.963507 kubelet[2835]: I1213 13:34:07.963339 2835 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3-calico-apiserver-certs\") pod \"calico-apiserver-55476b5df-rctdv\" (UID: \"2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3\") " pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" Dec 13 13:34:08.163429 containerd[1540]: time="2024-12-13T13:34:08.163396674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vvl2m,Uid:95ca7197-6257-4f74-a945-b91e5b94e808,Namespace:kube-system,Attempt:0,}" Dec 13 13:34:08.169796 containerd[1540]: time="2024-12-13T13:34:08.168309456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-rctdv,Uid:2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3,Namespace:calico-apiserver,Attempt:0,}" Dec 13 13:34:08.179540 containerd[1540]: time="2024-12-13T13:34:08.179520266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74874996db-hwhtg,Uid:1a917722-7626-420c-b334-c54df1962ff7,Namespace:calico-system,Attempt:0,}" Dec 13 13:34:08.201704 containerd[1540]: time="2024-12-13T13:34:08.201423288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-txx74,Uid:594d5ede-fccb-4fa8-b157-5696299be69a,Namespace:calico-apiserver,Attempt:0,}" Dec 13 13:34:08.204428 containerd[1540]: time="2024-12-13T13:34:08.204406680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-57pkw,Uid:044a2624-b2b5-4815-a892-56a4b4e7678a,Namespace:kube-system,Attempt:0,}" Dec 13 13:34:08.295937 systemd[1]: Created slice kubepods-besteffort-pod6d6a0031_7d0e_4f38_97a2_6db6c2123341.slice - libcontainer container kubepods-besteffort-pod6d6a0031_7d0e_4f38_97a2_6db6c2123341.slice. 
Dec 13 13:34:08.297964 containerd[1540]: time="2024-12-13T13:34:08.297943684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t28ks,Uid:6d6a0031-7d0e-4f38-97a2-6db6c2123341,Namespace:calico-system,Attempt:0,}" Dec 13 13:34:08.480577 containerd[1540]: time="2024-12-13T13:34:08.480553933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 13:34:08.551189 containerd[1540]: time="2024-12-13T13:34:08.550532425Z" level=error msg="Failed to destroy network for sandbox \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.558763 containerd[1540]: time="2024-12-13T13:34:08.558491490Z" level=error msg="Failed to destroy network for sandbox \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.560425 containerd[1540]: time="2024-12-13T13:34:08.560392817Z" level=error msg="encountered an error cleaning up failed sandbox \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.560499 containerd[1540]: time="2024-12-13T13:34:08.560450513Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-57pkw,Uid:044a2624-b2b5-4815-a892-56a4b4e7678a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.561083 containerd[1540]: time="2024-12-13T13:34:08.561065336Z" level=error msg="encountered an error cleaning up failed sandbox \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.561151 containerd[1540]: time="2024-12-13T13:34:08.561140154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t28ks,Uid:6d6a0031-7d0e-4f38-97a2-6db6c2123341,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.570248 containerd[1540]: time="2024-12-13T13:34:08.561267557Z" level=error msg="Failed to destroy network for sandbox \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.570248 containerd[1540]: time="2024-12-13T13:34:08.569661657Z" level=error msg="encountered an error cleaning up failed sandbox \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.570248 
containerd[1540]: time="2024-12-13T13:34:08.569695643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-txx74,Uid:594d5ede-fccb-4fa8-b157-5696299be69a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.570248 containerd[1540]: time="2024-12-13T13:34:08.561291028Z" level=error msg="Failed to destroy network for sandbox \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.570248 containerd[1540]: time="2024-12-13T13:34:08.569874502Z" level=error msg="encountered an error cleaning up failed sandbox \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.570248 containerd[1540]: time="2024-12-13T13:34:08.569894263Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vvl2m,Uid:95ca7197-6257-4f74-a945-b91e5b94e808,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.570248 containerd[1540]: time="2024-12-13T13:34:08.569984897Z" 
level=error msg="Failed to destroy network for sandbox \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.570852 containerd[1540]: time="2024-12-13T13:34:08.570836248Z" level=error msg="encountered an error cleaning up failed sandbox \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.570895 containerd[1540]: time="2024-12-13T13:34:08.570859089Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-rctdv,Uid:2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.573411 containerd[1540]: time="2024-12-13T13:34:08.573377512Z" level=error msg="Failed to destroy network for sandbox \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.573592 containerd[1540]: time="2024-12-13T13:34:08.573575497Z" level=error msg="encountered an error cleaning up failed sandbox \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.573622 containerd[1540]: time="2024-12-13T13:34:08.573614161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74874996db-hwhtg,Uid:1a917722-7626-420c-b334-c54df1962ff7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.615482 kubelet[2835]: E1213 13:34:08.615092 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.615482 kubelet[2835]: E1213 13:34:08.615139 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t28ks" Dec 13 13:34:08.615482 kubelet[2835]: E1213 13:34:08.615153 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t28ks" Dec 13 13:34:08.615624 kubelet[2835]: E1213 13:34:08.615191 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t28ks_calico-system(6d6a0031-7d0e-4f38-97a2-6db6c2123341)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t28ks_calico-system(6d6a0031-7d0e-4f38-97a2-6db6c2123341)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t28ks" podUID="6d6a0031-7d0e-4f38-97a2-6db6c2123341" Dec 13 13:34:08.615624 kubelet[2835]: E1213 13:34:08.615265 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.615624 kubelet[2835]: E1213 13:34:08.615285 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74874996db-hwhtg" Dec 13 13:34:08.615704 kubelet[2835]: E1213 13:34:08.615297 2835 kuberuntime_manager.go:1172] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74874996db-hwhtg" Dec 13 13:34:08.615704 kubelet[2835]: E1213 13:34:08.615326 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74874996db-hwhtg_calico-system(1a917722-7626-420c-b334-c54df1962ff7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74874996db-hwhtg_calico-system(1a917722-7626-420c-b334-c54df1962ff7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74874996db-hwhtg" podUID="1a917722-7626-420c-b334-c54df1962ff7" Dec 13 13:34:08.615704 kubelet[2835]: E1213 13:34:08.615337 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.615775 kubelet[2835]: E1213 13:34:08.615348 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.615775 kubelet[2835]: E1213 13:34:08.615351 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-txx74" Dec 13 13:34:08.615775 kubelet[2835]: E1213 13:34:08.615364 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vvl2m" Dec 13 13:34:08.615775 kubelet[2835]: E1213 13:34:08.615372 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-txx74" Dec 13 13:34:08.615845 kubelet[2835]: E1213 13:34:08.615376 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vvl2m" Dec 13 13:34:08.615845 kubelet[2835]: E1213 13:34:08.615396 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-vvl2m_kube-system(95ca7197-6257-4f74-a945-b91e5b94e808)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-vvl2m_kube-system(95ca7197-6257-4f74-a945-b91e5b94e808)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-vvl2m" podUID="95ca7197-6257-4f74-a945-b91e5b94e808" Dec 13 13:34:08.615845 kubelet[2835]: E1213 13:34:08.615393 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55476b5df-txx74_calico-apiserver(594d5ede-fccb-4fa8-b157-5696299be69a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55476b5df-txx74_calico-apiserver(594d5ede-fccb-4fa8-b157-5696299be69a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55476b5df-txx74" podUID="594d5ede-fccb-4fa8-b157-5696299be69a" Dec 13 13:34:08.615920 kubelet[2835]: E1213 13:34:08.615414 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.615920 kubelet[2835]: E1213 13:34:08.615417 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:08.615920 kubelet[2835]: E1213 13:34:08.615426 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" Dec 13 13:34:08.615920 kubelet[2835]: E1213 13:34:08.615430 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-57pkw" Dec 13 13:34:08.615992 kubelet[2835]: E1213 13:34:08.615436 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" Dec 13 13:34:08.615992 kubelet[2835]: E1213 13:34:08.615440 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-57pkw" Dec 13 13:34:08.615992 kubelet[2835]: E1213 13:34:08.615457 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55476b5df-rctdv_calico-apiserver(2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55476b5df-rctdv_calico-apiserver(2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" podUID="2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3" Dec 13 13:34:08.616056 kubelet[2835]: E1213 13:34:08.615459 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-57pkw_kube-system(044a2624-b2b5-4815-a892-56a4b4e7678a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-57pkw_kube-system(044a2624-b2b5-4815-a892-56a4b4e7678a)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-57pkw" podUID="044a2624-b2b5-4815-a892-56a4b4e7678a" Dec 13 13:34:08.696356 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb-shm.mount: Deactivated successfully. Dec 13 13:34:09.480351 kubelet[2835]: I1213 13:34:09.480335 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75" Dec 13 13:34:09.481065 kubelet[2835]: I1213 13:34:09.481023 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb" Dec 13 13:34:09.488422 kubelet[2835]: I1213 13:34:09.488244 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1" Dec 13 13:34:09.489563 kubelet[2835]: I1213 13:34:09.489538 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17" Dec 13 13:34:09.490579 kubelet[2835]: I1213 13:34:09.490514 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7" Dec 13 13:34:09.490948 containerd[1540]: time="2024-12-13T13:34:09.490923437Z" level=info msg="StopPodSandbox for \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\"" Dec 13 13:34:09.491283 containerd[1540]: time="2024-12-13T13:34:09.491028697Z" level=info msg="StopPodSandbox for \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\"" Dec 13 
13:34:09.499744 containerd[1540]: time="2024-12-13T13:34:09.496739842Z" level=info msg="Ensure that sandbox 9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1 in task-service has been cleanup successfully" Dec 13 13:34:09.499744 containerd[1540]: time="2024-12-13T13:34:09.498117110Z" level=info msg="TearDown network for sandbox \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\" successfully" Dec 13 13:34:09.499744 containerd[1540]: time="2024-12-13T13:34:09.498128757Z" level=info msg="StopPodSandbox for \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\" returns successfully" Dec 13 13:34:09.499744 containerd[1540]: time="2024-12-13T13:34:09.498203739Z" level=info msg="StopPodSandbox for \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\"" Dec 13 13:34:09.499744 containerd[1540]: time="2024-12-13T13:34:09.498281019Z" level=info msg="Ensure that sandbox cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75 in task-service has been cleanup successfully" Dec 13 13:34:09.499744 containerd[1540]: time="2024-12-13T13:34:09.498394358Z" level=info msg="TearDown network for sandbox \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\" successfully" Dec 13 13:34:09.499744 containerd[1540]: time="2024-12-13T13:34:09.498401920Z" level=info msg="StopPodSandbox for \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\" returns successfully" Dec 13 13:34:09.499744 containerd[1540]: time="2024-12-13T13:34:09.498435674Z" level=info msg="StopPodSandbox for \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\"" Dec 13 13:34:09.499744 containerd[1540]: time="2024-12-13T13:34:09.498525281Z" level=info msg="Ensure that sandbox de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb in task-service has been cleanup successfully" Dec 13 13:34:09.499744 containerd[1540]: time="2024-12-13T13:34:09.498614692Z" level=info msg="TearDown network for sandbox 
\"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\" successfully" Dec 13 13:34:09.499744 containerd[1540]: time="2024-12-13T13:34:09.498622079Z" level=info msg="StopPodSandbox for \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\" returns successfully" Dec 13 13:34:09.499744 containerd[1540]: time="2024-12-13T13:34:09.498648615Z" level=info msg="StopPodSandbox for \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\"" Dec 13 13:34:09.499744 containerd[1540]: time="2024-12-13T13:34:09.498713408Z" level=info msg="Ensure that sandbox ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7 in task-service has been cleanup successfully" Dec 13 13:34:09.499744 containerd[1540]: time="2024-12-13T13:34:09.499066050Z" level=info msg="TearDown network for sandbox \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\" successfully" Dec 13 13:34:09.499744 containerd[1540]: time="2024-12-13T13:34:09.499074103Z" level=info msg="StopPodSandbox for \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\" returns successfully" Dec 13 13:34:09.498365 systemd[1]: run-netns-cni\x2d3b9706e5\x2d87c6\x2dd3e7\x2d6993\x2df18fb361f851.mount: Deactivated successfully. 
Dec 13 13:34:09.501136 kubelet[2835]: I1213 13:34:09.500181 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498" Dec 13 13:34:09.501471 containerd[1540]: time="2024-12-13T13:34:09.501457235Z" level=info msg="Ensure that sandbox 00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17 in task-service has been cleanup successfully" Dec 13 13:34:09.501773 containerd[1540]: time="2024-12-13T13:34:09.501763756Z" level=info msg="TearDown network for sandbox \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\" successfully" Dec 13 13:34:09.501833 containerd[1540]: time="2024-12-13T13:34:09.501815107Z" level=info msg="StopPodSandbox for \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\" returns successfully" Dec 13 13:34:09.501986 containerd[1540]: time="2024-12-13T13:34:09.501584007Z" level=info msg="StopPodSandbox for \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\"" Dec 13 13:34:09.502101 containerd[1540]: time="2024-12-13T13:34:09.502033257Z" level=info msg="Ensure that sandbox 380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498 in task-service has been cleanup successfully" Dec 13 13:34:09.502325 containerd[1540]: time="2024-12-13T13:34:09.502156994Z" level=info msg="TearDown network for sandbox \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\" successfully" Dec 13 13:34:09.502325 containerd[1540]: time="2024-12-13T13:34:09.502165366Z" level=info msg="StopPodSandbox for \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\" returns successfully" Dec 13 13:34:09.502943 systemd[1]: run-netns-cni\x2d45c33d1e\x2d5841\x2d5965\x2dcd03\x2d043044e614a3.mount: Deactivated successfully. Dec 13 13:34:09.503155 systemd[1]: run-netns-cni\x2da9932971\x2d9079\x2dba27\x2df222\x2d7c88443d9fe9.mount: Deactivated successfully. 
Dec 13 13:34:09.503653 systemd[1]: run-netns-cni\x2d42c566ed\x2d3b5e\x2d067b\x2ddb2d\x2d23da1e3c7c79.mount: Deactivated successfully. Dec 13 13:34:09.504444 containerd[1540]: time="2024-12-13T13:34:09.504430475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74874996db-hwhtg,Uid:1a917722-7626-420c-b334-c54df1962ff7,Namespace:calico-system,Attempt:1,}" Dec 13 13:34:09.506098 containerd[1540]: time="2024-12-13T13:34:09.506023503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t28ks,Uid:6d6a0031-7d0e-4f38-97a2-6db6c2123341,Namespace:calico-system,Attempt:1,}" Dec 13 13:34:09.506401 containerd[1540]: time="2024-12-13T13:34:09.506297470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-txx74,Uid:594d5ede-fccb-4fa8-b157-5696299be69a,Namespace:calico-apiserver,Attempt:1,}" Dec 13 13:34:09.507151 containerd[1540]: time="2024-12-13T13:34:09.506996856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vvl2m,Uid:95ca7197-6257-4f74-a945-b91e5b94e808,Namespace:kube-system,Attempt:1,}" Dec 13 13:34:09.507501 systemd[1]: run-netns-cni\x2d59bfda9b\x2dec48\x2dfacf\x2d3471\x2df96e5bddecfe.mount: Deactivated successfully. Dec 13 13:34:09.507555 systemd[1]: run-netns-cni\x2d7b190a53\x2db149\x2d1e2d\x2d4036\x2d59afbdab53f7.mount: Deactivated successfully. 
Dec 13 13:34:09.507748 containerd[1540]: time="2024-12-13T13:34:09.507640840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-57pkw,Uid:044a2624-b2b5-4815-a892-56a4b4e7678a,Namespace:kube-system,Attempt:1,}" Dec 13 13:34:09.508263 containerd[1540]: time="2024-12-13T13:34:09.508200371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-rctdv,Uid:2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3,Namespace:calico-apiserver,Attempt:1,}" Dec 13 13:34:09.617858 containerd[1540]: time="2024-12-13T13:34:09.617820271Z" level=error msg="Failed to destroy network for sandbox \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.618434 containerd[1540]: time="2024-12-13T13:34:09.618416332Z" level=error msg="encountered an error cleaning up failed sandbox \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.618834 containerd[1540]: time="2024-12-13T13:34:09.618740844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-rctdv,Uid:2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.618834 containerd[1540]: time="2024-12-13T13:34:09.618698073Z" level=error msg="Failed to destroy 
network for sandbox \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.619313 kubelet[2835]: E1213 13:34:09.619039 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.619313 kubelet[2835]: E1213 13:34:09.619076 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" Dec 13 13:34:09.619313 kubelet[2835]: E1213 13:34:09.619091 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" Dec 13 13:34:09.619387 kubelet[2835]: E1213 13:34:09.619132 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55476b5df-rctdv_calico-apiserver(2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55476b5df-rctdv_calico-apiserver(2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" podUID="2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3" Dec 13 13:34:09.620425 containerd[1540]: time="2024-12-13T13:34:09.620374825Z" level=error msg="encountered an error cleaning up failed sandbox \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.621806 containerd[1540]: time="2024-12-13T13:34:09.620404516Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vvl2m,Uid:95ca7197-6257-4f74-a945-b91e5b94e808,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.622269 kubelet[2835]: E1213 13:34:09.621920 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Dec 13 13:34:09.622269 kubelet[2835]: E1213 13:34:09.622134 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vvl2m" Dec 13 13:34:09.622269 kubelet[2835]: E1213 13:34:09.622151 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vvl2m" Dec 13 13:34:09.622430 kubelet[2835]: E1213 13:34:09.622417 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-vvl2m_kube-system(95ca7197-6257-4f74-a945-b91e5b94e808)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-vvl2m_kube-system(95ca7197-6257-4f74-a945-b91e5b94e808)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-vvl2m" podUID="95ca7197-6257-4f74-a945-b91e5b94e808" Dec 13 13:34:09.625874 containerd[1540]: time="2024-12-13T13:34:09.625848312Z" level=error msg="Failed to destroy network for sandbox 
\"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.626262 containerd[1540]: time="2024-12-13T13:34:09.626159545Z" level=error msg="encountered an error cleaning up failed sandbox \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.626262 containerd[1540]: time="2024-12-13T13:34:09.626200205Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-txx74,Uid:594d5ede-fccb-4fa8-b157-5696299be69a,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.626520 kubelet[2835]: E1213 13:34:09.626416 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.626520 kubelet[2835]: E1213 13:34:09.626460 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-txx74" Dec 13 13:34:09.626520 kubelet[2835]: E1213 13:34:09.626474 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-txx74" Dec 13 13:34:09.626877 kubelet[2835]: E1213 13:34:09.626691 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55476b5df-txx74_calico-apiserver(594d5ede-fccb-4fa8-b157-5696299be69a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55476b5df-txx74_calico-apiserver(594d5ede-fccb-4fa8-b157-5696299be69a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55476b5df-txx74" podUID="594d5ede-fccb-4fa8-b157-5696299be69a" Dec 13 13:34:09.628819 containerd[1540]: time="2024-12-13T13:34:09.628794364Z" level=error msg="Failed to destroy network for sandbox \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.629013 containerd[1540]: 
time="2024-12-13T13:34:09.628998686Z" level=error msg="encountered an error cleaning up failed sandbox \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.629532 containerd[1540]: time="2024-12-13T13:34:09.629512923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-57pkw,Uid:044a2624-b2b5-4815-a892-56a4b4e7678a,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.629840 kubelet[2835]: E1213 13:34:09.629644 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.629840 kubelet[2835]: E1213 13:34:09.629684 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-57pkw" Dec 13 13:34:09.629840 kubelet[2835]: E1213 13:34:09.629698 2835 kuberuntime_manager.go:1172] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-57pkw" Dec 13 13:34:09.629920 kubelet[2835]: E1213 13:34:09.629733 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-57pkw_kube-system(044a2624-b2b5-4815-a892-56a4b4e7678a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-57pkw_kube-system(044a2624-b2b5-4815-a892-56a4b4e7678a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-57pkw" podUID="044a2624-b2b5-4815-a892-56a4b4e7678a" Dec 13 13:34:09.630404 containerd[1540]: time="2024-12-13T13:34:09.630386227Z" level=error msg="Failed to destroy network for sandbox \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.630947 containerd[1540]: time="2024-12-13T13:34:09.630804662Z" level=error msg="encountered an error cleaning up failed sandbox \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Dec 13 13:34:09.630947 containerd[1540]: time="2024-12-13T13:34:09.630834647Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74874996db-hwhtg,Uid:1a917722-7626-420c-b334-c54df1962ff7,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.631144 kubelet[2835]: E1213 13:34:09.631052 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.631144 kubelet[2835]: E1213 13:34:09.631088 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74874996db-hwhtg" Dec 13 13:34:09.631144 kubelet[2835]: E1213 13:34:09.631100 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-74874996db-hwhtg" Dec 13 13:34:09.631279 kubelet[2835]: E1213 13:34:09.631124 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74874996db-hwhtg_calico-system(1a917722-7626-420c-b334-c54df1962ff7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74874996db-hwhtg_calico-system(1a917722-7626-420c-b334-c54df1962ff7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74874996db-hwhtg" podUID="1a917722-7626-420c-b334-c54df1962ff7" Dec 13 13:34:09.635601 containerd[1540]: time="2024-12-13T13:34:09.635575252Z" level=error msg="Failed to destroy network for sandbox \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.635909 containerd[1540]: time="2024-12-13T13:34:09.635843458Z" level=error msg="encountered an error cleaning up failed sandbox \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.635909 containerd[1540]: time="2024-12-13T13:34:09.635885314Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-t28ks,Uid:6d6a0031-7d0e-4f38-97a2-6db6c2123341,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.636330 kubelet[2835]: E1213 13:34:09.636124 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:09.636330 kubelet[2835]: E1213 13:34:09.636170 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t28ks" Dec 13 13:34:09.636330 kubelet[2835]: E1213 13:34:09.636183 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t28ks" Dec 13 13:34:09.636418 kubelet[2835]: E1213 13:34:09.636214 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-t28ks_calico-system(6d6a0031-7d0e-4f38-97a2-6db6c2123341)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t28ks_calico-system(6d6a0031-7d0e-4f38-97a2-6db6c2123341)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t28ks" podUID="6d6a0031-7d0e-4f38-97a2-6db6c2123341" Dec 13 13:34:09.697090 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1-shm.mount: Deactivated successfully. Dec 13 13:34:10.502299 kubelet[2835]: I1213 13:34:10.501850 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e" Dec 13 13:34:10.502575 containerd[1540]: time="2024-12-13T13:34:10.502341897Z" level=info msg="StopPodSandbox for \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\"" Dec 13 13:34:10.502575 containerd[1540]: time="2024-12-13T13:34:10.502471105Z" level=info msg="Ensure that sandbox 0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e in task-service has been cleanup successfully" Dec 13 13:34:10.502738 containerd[1540]: time="2024-12-13T13:34:10.502658423Z" level=info msg="TearDown network for sandbox \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\" successfully" Dec 13 13:34:10.502738 containerd[1540]: time="2024-12-13T13:34:10.502666559Z" level=info msg="StopPodSandbox for \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\" returns successfully" Dec 13 13:34:10.502782 containerd[1540]: time="2024-12-13T13:34:10.502775409Z" level=info msg="StopPodSandbox 
for \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\"" Dec 13 13:34:10.503945 containerd[1540]: time="2024-12-13T13:34:10.502811234Z" level=info msg="TearDown network for sandbox \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\" successfully" Dec 13 13:34:10.503945 containerd[1540]: time="2024-12-13T13:34:10.502818659Z" level=info msg="StopPodSandbox for \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\" returns successfully" Dec 13 13:34:10.503945 containerd[1540]: time="2024-12-13T13:34:10.503661452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-57pkw,Uid:044a2624-b2b5-4815-a892-56a4b4e7678a,Namespace:kube-system,Attempt:2,}" Dec 13 13:34:10.504582 kubelet[2835]: I1213 13:34:10.504097 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a" Dec 13 13:34:10.504784 containerd[1540]: time="2024-12-13T13:34:10.504757127Z" level=info msg="StopPodSandbox for \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\"" Dec 13 13:34:10.505960 containerd[1540]: time="2024-12-13T13:34:10.505025927Z" level=info msg="Ensure that sandbox 0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a in task-service has been cleanup successfully" Dec 13 13:34:10.505960 containerd[1540]: time="2024-12-13T13:34:10.505339031Z" level=info msg="TearDown network for sandbox \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\" successfully" Dec 13 13:34:10.505960 containerd[1540]: time="2024-12-13T13:34:10.505347002Z" level=info msg="StopPodSandbox for \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\" returns successfully" Dec 13 13:34:10.505771 systemd[1]: run-netns-cni\x2d455a5577\x2dbda7\x2de9bb\x2d9d13\x2d942e30d20acf.mount: Deactivated successfully. 
Dec 13 13:34:10.506073 kubelet[2835]: I1213 13:34:10.504975 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1" Dec 13 13:34:10.506507 containerd[1540]: time="2024-12-13T13:34:10.506196694Z" level=info msg="StopPodSandbox for \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\"" Dec 13 13:34:10.506507 containerd[1540]: time="2024-12-13T13:34:10.506268265Z" level=info msg="TearDown network for sandbox \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\" successfully" Dec 13 13:34:10.506507 containerd[1540]: time="2024-12-13T13:34:10.506276161Z" level=info msg="StopPodSandbox for \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\" returns successfully" Dec 13 13:34:10.506507 containerd[1540]: time="2024-12-13T13:34:10.506351576Z" level=info msg="StopPodSandbox for \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\"" Dec 13 13:34:10.506507 containerd[1540]: time="2024-12-13T13:34:10.506439442Z" level=info msg="Ensure that sandbox 31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1 in task-service has been cleanup successfully" Dec 13 13:34:10.508635 containerd[1540]: time="2024-12-13T13:34:10.506978760Z" level=info msg="TearDown network for sandbox \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\" successfully" Dec 13 13:34:10.508635 containerd[1540]: time="2024-12-13T13:34:10.506988953Z" level=info msg="StopPodSandbox for \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\" returns successfully" Dec 13 13:34:10.508635 containerd[1540]: time="2024-12-13T13:34:10.508519179Z" level=info msg="StopPodSandbox for \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\"" Dec 13 13:34:10.508635 containerd[1540]: time="2024-12-13T13:34:10.508556198Z" level=info msg="TearDown network for sandbox 
\"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\" successfully" Dec 13 13:34:10.508635 containerd[1540]: time="2024-12-13T13:34:10.508562263Z" level=info msg="StopPodSandbox for \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\" returns successfully" Dec 13 13:34:10.508635 containerd[1540]: time="2024-12-13T13:34:10.508569980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-txx74,Uid:594d5ede-fccb-4fa8-b157-5696299be69a,Namespace:calico-apiserver,Attempt:2,}" Dec 13 13:34:10.507909 systemd[1]: run-netns-cni\x2d2e3b6c2d\x2da461\x2d2f15\x2dbe3f\x2d73ad69a969ce.mount: Deactivated successfully. Dec 13 13:34:10.521763 kubelet[2835]: I1213 13:34:10.509685 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230" Dec 13 13:34:10.521763 kubelet[2835]: I1213 13:34:10.510667 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c" Dec 13 13:34:10.521763 kubelet[2835]: I1213 13:34:10.513299 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.509656938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vvl2m,Uid:95ca7197-6257-4f74-a945-b91e5b94e808,Namespace:kube-system,Attempt:2,}" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.510312316Z" level=info msg="StopPodSandbox for \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\"" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.510533783Z" level=info msg="Ensure that sandbox 1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230 in task-service has been cleanup successfully" Dec 13 13:34:10.521839 
containerd[1540]: time="2024-12-13T13:34:10.511312753Z" level=info msg="StopPodSandbox for \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\"" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.511337186Z" level=info msg="TearDown network for sandbox \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\" successfully" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.511345827Z" level=info msg="StopPodSandbox for \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\" returns successfully" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.511402866Z" level=info msg="Ensure that sandbox c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c in task-service has been cleanup successfully" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.511514566Z" level=info msg="TearDown network for sandbox \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\" successfully" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.511522635Z" level=info msg="StopPodSandbox for \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\" returns successfully" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.511534085Z" level=info msg="StopPodSandbox for \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\"" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.511668530Z" level=info msg="TearDown network for sandbox \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\" successfully" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.511675986Z" level=info msg="StopPodSandbox for \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\" returns successfully" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.511826611Z" level=info msg="StopPodSandbox for \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\"" Dec 13 
13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.511956532Z" level=info msg="TearDown network for sandbox \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\" successfully" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.511963918Z" level=info msg="StopPodSandbox for \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\" returns successfully" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.512523902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-rctdv,Uid:2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3,Namespace:calico-apiserver,Attempt:2,}" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.512539856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t28ks,Uid:6d6a0031-7d0e-4f38-97a2-6db6c2123341,Namespace:calico-system,Attempt:2,}" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.513802771Z" level=info msg="StopPodSandbox for \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\"" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.513897887Z" level=info msg="Ensure that sandbox aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a in task-service has been cleanup successfully" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.514263891Z" level=info msg="TearDown network for sandbox \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\" successfully" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.514272563Z" level=info msg="StopPodSandbox for \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\" returns successfully" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.514497790Z" level=info msg="StopPodSandbox for \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\"" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.514552030Z" level=info 
msg="TearDown network for sandbox \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\" successfully" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.514558398Z" level=info msg="StopPodSandbox for \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\" returns successfully" Dec 13 13:34:10.521839 containerd[1540]: time="2024-12-13T13:34:10.514847819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74874996db-hwhtg,Uid:1a917722-7626-420c-b334-c54df1962ff7,Namespace:calico-system,Attempt:2,}" Dec 13 13:34:10.507959 systemd[1]: run-netns-cni\x2d4a999388\x2dc86a\x2d935a\x2d6a51\x2d9f5c1fa14884.mount: Deactivated successfully. Dec 13 13:34:10.513137 systemd[1]: run-netns-cni\x2d52accd62\x2dd08f\x2db70d\x2dfb18\x2d900c8a835182.mount: Deactivated successfully. Dec 13 13:34:10.515487 systemd[1]: run-netns-cni\x2dd71c46b6\x2d9d08\x2ddf66\x2d35ef\x2d91693134c02c.mount: Deactivated successfully. Dec 13 13:34:10.515536 systemd[1]: run-netns-cni\x2dd22efb73\x2df001\x2daf9f\x2d310f\x2de59dbdeb43a4.mount: Deactivated successfully. Dec 13 13:34:10.800498 containerd[1540]: time="2024-12-13T13:34:10.800406584Z" level=error msg="Failed to destroy network for sandbox \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.803742 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830-shm.mount: Deactivated successfully. 
Dec 13 13:34:10.804265 containerd[1540]: time="2024-12-13T13:34:10.804062741Z" level=error msg="encountered an error cleaning up failed sandbox \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.804265 containerd[1540]: time="2024-12-13T13:34:10.804112530Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-57pkw,Uid:044a2624-b2b5-4815-a892-56a4b4e7678a,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.805033 kubelet[2835]: E1213 13:34:10.804782 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.805033 kubelet[2835]: E1213 13:34:10.804833 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-57pkw" Dec 13 13:34:10.805033 kubelet[2835]: E1213 13:34:10.804855 2835 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-57pkw" Dec 13 13:34:10.805125 kubelet[2835]: E1213 13:34:10.804920 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-57pkw_kube-system(044a2624-b2b5-4815-a892-56a4b4e7678a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-57pkw_kube-system(044a2624-b2b5-4815-a892-56a4b4e7678a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-57pkw" podUID="044a2624-b2b5-4815-a892-56a4b4e7678a" Dec 13 13:34:10.809116 containerd[1540]: time="2024-12-13T13:34:10.807650276Z" level=error msg="Failed to destroy network for sandbox \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.810710 containerd[1540]: time="2024-12-13T13:34:10.810040246Z" level=error msg="encountered an error cleaning up failed sandbox \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.810710 containerd[1540]: time="2024-12-13T13:34:10.810678484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vvl2m,Uid:95ca7197-6257-4f74-a945-b91e5b94e808,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.811540 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58-shm.mount: Deactivated successfully. Dec 13 13:34:10.814103 kubelet[2835]: E1213 13:34:10.814090 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.814197 kubelet[2835]: E1213 13:34:10.814191 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vvl2m" Dec 13 13:34:10.814277 kubelet[2835]: E1213 13:34:10.814270 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vvl2m" Dec 13 13:34:10.814364 kubelet[2835]: E1213 13:34:10.814357 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-vvl2m_kube-system(95ca7197-6257-4f74-a945-b91e5b94e808)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-vvl2m_kube-system(95ca7197-6257-4f74-a945-b91e5b94e808)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-vvl2m" podUID="95ca7197-6257-4f74-a945-b91e5b94e808" Dec 13 13:34:10.821857 containerd[1540]: time="2024-12-13T13:34:10.821125541Z" level=error msg="Failed to destroy network for sandbox \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.825447 containerd[1540]: time="2024-12-13T13:34:10.825420333Z" level=error msg="Failed to destroy network for sandbox \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.825938 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab-shm.mount: Deactivated successfully. Dec 13 13:34:10.827904 kubelet[2835]: E1213 13:34:10.827391 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.827904 kubelet[2835]: E1213 13:34:10.827429 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-txx74" Dec 13 13:34:10.827904 kubelet[2835]: E1213 13:34:10.827443 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-txx74" Dec 13 13:34:10.827984 containerd[1540]: time="2024-12-13T13:34:10.827144889Z" level=error msg="encountered an error cleaning up failed sandbox \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.827984 containerd[1540]: time="2024-12-13T13:34:10.827204676Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-txx74,Uid:594d5ede-fccb-4fa8-b157-5696299be69a,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.828054 kubelet[2835]: E1213 13:34:10.827479 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55476b5df-txx74_calico-apiserver(594d5ede-fccb-4fa8-b157-5696299be69a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55476b5df-txx74_calico-apiserver(594d5ede-fccb-4fa8-b157-5696299be69a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55476b5df-txx74" podUID="594d5ede-fccb-4fa8-b157-5696299be69a" Dec 13 13:34:10.828586 containerd[1540]: time="2024-12-13T13:34:10.828349014Z" level=error msg="encountered an error cleaning up failed sandbox \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.828586 containerd[1540]: time="2024-12-13T13:34:10.828427555Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-rctdv,Uid:2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.828733 kubelet[2835]: E1213 13:34:10.828623 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.828944 kubelet[2835]: E1213 13:34:10.828648 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" Dec 13 13:34:10.828944 kubelet[2835]: E1213 13:34:10.828847 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" Dec 13 13:34:10.828944 kubelet[2835]: E1213 13:34:10.828885 2835 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55476b5df-rctdv_calico-apiserver(2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55476b5df-rctdv_calico-apiserver(2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" podUID="2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3" Dec 13 13:34:10.829013 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1-shm.mount: Deactivated successfully. Dec 13 13:34:10.843378 containerd[1540]: time="2024-12-13T13:34:10.843212344Z" level=error msg="Failed to destroy network for sandbox \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.844712 containerd[1540]: time="2024-12-13T13:34:10.843916938Z" level=error msg="encountered an error cleaning up failed sandbox \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.844712 containerd[1540]: time="2024-12-13T13:34:10.844320605Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-t28ks,Uid:6d6a0031-7d0e-4f38-97a2-6db6c2123341,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.845252 kubelet[2835]: E1213 13:34:10.844930 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.845252 kubelet[2835]: E1213 13:34:10.844966 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t28ks" Dec 13 13:34:10.845252 kubelet[2835]: E1213 13:34:10.844980 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t28ks" Dec 13 13:34:10.845341 kubelet[2835]: E1213 13:34:10.845013 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-t28ks_calico-system(6d6a0031-7d0e-4f38-97a2-6db6c2123341)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t28ks_calico-system(6d6a0031-7d0e-4f38-97a2-6db6c2123341)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t28ks" podUID="6d6a0031-7d0e-4f38-97a2-6db6c2123341" Dec 13 13:34:10.851881 containerd[1540]: time="2024-12-13T13:34:10.851783141Z" level=error msg="Failed to destroy network for sandbox \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.852131 containerd[1540]: time="2024-12-13T13:34:10.852061543Z" level=error msg="encountered an error cleaning up failed sandbox \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.852131 containerd[1540]: time="2024-12-13T13:34:10.852101954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74874996db-hwhtg,Uid:1a917722-7626-420c-b334-c54df1962ff7,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.852339 kubelet[2835]: E1213 13:34:10.852310 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:10.852381 kubelet[2835]: E1213 13:34:10.852356 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74874996db-hwhtg" Dec 13 13:34:10.852381 kubelet[2835]: E1213 13:34:10.852372 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74874996db-hwhtg" Dec 13 13:34:10.852654 kubelet[2835]: E1213 13:34:10.852411 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74874996db-hwhtg_calico-system(1a917722-7626-420c-b334-c54df1962ff7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74874996db-hwhtg_calico-system(1a917722-7626-420c-b334-c54df1962ff7)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74874996db-hwhtg" podUID="1a917722-7626-420c-b334-c54df1962ff7" Dec 13 13:34:11.517681 kubelet[2835]: I1213 13:34:11.516596 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab" Dec 13 13:34:11.517990 containerd[1540]: time="2024-12-13T13:34:11.517907320Z" level=info msg="StopPodSandbox for \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\"" Dec 13 13:34:11.520754 containerd[1540]: time="2024-12-13T13:34:11.520653345Z" level=info msg="Ensure that sandbox f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab in task-service has been cleanup successfully" Dec 13 13:34:11.520812 containerd[1540]: time="2024-12-13T13:34:11.520785855Z" level=info msg="TearDown network for sandbox \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\" successfully" Dec 13 13:34:11.520812 containerd[1540]: time="2024-12-13T13:34:11.520794362Z" level=info msg="StopPodSandbox for \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\" returns successfully" Dec 13 13:34:11.528126 containerd[1540]: time="2024-12-13T13:34:11.528099266Z" level=info msg="StopPodSandbox for \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\"" Dec 13 13:34:11.528201 containerd[1540]: time="2024-12-13T13:34:11.528165026Z" level=info msg="TearDown network for sandbox \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\" successfully" Dec 13 13:34:11.528201 containerd[1540]: time="2024-12-13T13:34:11.528171614Z" level=info msg="StopPodSandbox for 
\"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\" returns successfully" Dec 13 13:34:11.528917 containerd[1540]: time="2024-12-13T13:34:11.528903302Z" level=info msg="StopPodSandbox for \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\"" Dec 13 13:34:11.528949 containerd[1540]: time="2024-12-13T13:34:11.528944228Z" level=info msg="TearDown network for sandbox \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\" successfully" Dec 13 13:34:11.528973 containerd[1540]: time="2024-12-13T13:34:11.528950255Z" level=info msg="StopPodSandbox for \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\" returns successfully" Dec 13 13:34:11.529368 containerd[1540]: time="2024-12-13T13:34:11.529353493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-txx74,Uid:594d5ede-fccb-4fa8-b157-5696299be69a,Namespace:calico-apiserver,Attempt:3,}" Dec 13 13:34:11.542090 kubelet[2835]: I1213 13:34:11.542066 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58" Dec 13 13:34:11.543558 containerd[1540]: time="2024-12-13T13:34:11.543536432Z" level=info msg="StopPodSandbox for \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\"" Dec 13 13:34:11.543758 containerd[1540]: time="2024-12-13T13:34:11.543747006Z" level=info msg="Ensure that sandbox 38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58 in task-service has been cleanup successfully" Dec 13 13:34:11.545190 containerd[1540]: time="2024-12-13T13:34:11.545175304Z" level=info msg="TearDown network for sandbox \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\" successfully" Dec 13 13:34:11.545282 containerd[1540]: time="2024-12-13T13:34:11.545272794Z" level=info msg="StopPodSandbox for \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\" returns successfully" Dec 13 13:34:11.546207 
containerd[1540]: time="2024-12-13T13:34:11.546126201Z" level=info msg="StopPodSandbox for \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\"" Dec 13 13:34:11.546207 containerd[1540]: time="2024-12-13T13:34:11.546173679Z" level=info msg="TearDown network for sandbox \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\" successfully" Dec 13 13:34:11.546207 containerd[1540]: time="2024-12-13T13:34:11.546183655Z" level=info msg="StopPodSandbox for \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\" returns successfully" Dec 13 13:34:11.546688 containerd[1540]: time="2024-12-13T13:34:11.546676712Z" level=info msg="StopPodSandbox for \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\"" Dec 13 13:34:11.546783 containerd[1540]: time="2024-12-13T13:34:11.546774803Z" level=info msg="TearDown network for sandbox \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\" successfully" Dec 13 13:34:11.546826 containerd[1540]: time="2024-12-13T13:34:11.546818939Z" level=info msg="StopPodSandbox for \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\" returns successfully" Dec 13 13:34:11.547164 containerd[1540]: time="2024-12-13T13:34:11.547154620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vvl2m,Uid:95ca7197-6257-4f74-a945-b91e5b94e808,Namespace:kube-system,Attempt:3,}" Dec 13 13:34:11.552059 kubelet[2835]: I1213 13:34:11.552034 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30" Dec 13 13:34:11.562254 containerd[1540]: time="2024-12-13T13:34:11.561458292Z" level=info msg="StopPodSandbox for \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\"" Dec 13 13:34:11.562254 containerd[1540]: time="2024-12-13T13:34:11.561598337Z" level=info msg="Ensure that sandbox 9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30 in 
task-service has been cleanup successfully" Dec 13 13:34:11.562254 containerd[1540]: time="2024-12-13T13:34:11.561792000Z" level=info msg="TearDown network for sandbox \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\" successfully" Dec 13 13:34:11.562254 containerd[1540]: time="2024-12-13T13:34:11.561800638Z" level=info msg="StopPodSandbox for \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\" returns successfully" Dec 13 13:34:11.563259 containerd[1540]: time="2024-12-13T13:34:11.562445235Z" level=info msg="StopPodSandbox for \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\"" Dec 13 13:34:11.563259 containerd[1540]: time="2024-12-13T13:34:11.562498673Z" level=info msg="TearDown network for sandbox \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\" successfully" Dec 13 13:34:11.563259 containerd[1540]: time="2024-12-13T13:34:11.562505706Z" level=info msg="StopPodSandbox for \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\" returns successfully" Dec 13 13:34:11.563259 containerd[1540]: time="2024-12-13T13:34:11.562799524Z" level=info msg="StopPodSandbox for \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\"" Dec 13 13:34:11.563259 containerd[1540]: time="2024-12-13T13:34:11.562855824Z" level=info msg="TearDown network for sandbox \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\" successfully" Dec 13 13:34:11.563259 containerd[1540]: time="2024-12-13T13:34:11.562862035Z" level=info msg="StopPodSandbox for \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\" returns successfully" Dec 13 13:34:11.563259 containerd[1540]: time="2024-12-13T13:34:11.563129724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t28ks,Uid:6d6a0031-7d0e-4f38-97a2-6db6c2123341,Namespace:calico-system,Attempt:3,}" Dec 13 13:34:11.564963 kubelet[2835]: I1213 13:34:11.564101 2835 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1" Dec 13 13:34:11.565045 containerd[1540]: time="2024-12-13T13:34:11.564414639Z" level=info msg="StopPodSandbox for \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\"" Dec 13 13:34:11.565189 containerd[1540]: time="2024-12-13T13:34:11.565162615Z" level=info msg="Ensure that sandbox b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1 in task-service has been cleanup successfully" Dec 13 13:34:11.565627 containerd[1540]: time="2024-12-13T13:34:11.565611787Z" level=info msg="TearDown network for sandbox \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\" successfully" Dec 13 13:34:11.565627 containerd[1540]: time="2024-12-13T13:34:11.565622526Z" level=info msg="StopPodSandbox for \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\" returns successfully" Dec 13 13:34:11.566296 containerd[1540]: time="2024-12-13T13:34:11.566062104Z" level=info msg="StopPodSandbox for \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\"" Dec 13 13:34:11.566465 containerd[1540]: time="2024-12-13T13:34:11.566451159Z" level=info msg="TearDown network for sandbox \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\" successfully" Dec 13 13:34:11.566546 kubelet[2835]: I1213 13:34:11.566476 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1" Dec 13 13:34:11.566602 containerd[1540]: time="2024-12-13T13:34:11.566589429Z" level=info msg="StopPodSandbox for \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\" returns successfully" Dec 13 13:34:11.568331 containerd[1540]: time="2024-12-13T13:34:11.568312876Z" level=info msg="StopPodSandbox for \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\"" Dec 13 13:34:11.568376 containerd[1540]: 
time="2024-12-13T13:34:11.568360749Z" level=info msg="TearDown network for sandbox \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\" successfully" Dec 13 13:34:11.568376 containerd[1540]: time="2024-12-13T13:34:11.568367150Z" level=info msg="StopPodSandbox for \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\" returns successfully" Dec 13 13:34:11.568417 containerd[1540]: time="2024-12-13T13:34:11.568405334Z" level=info msg="StopPodSandbox for \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\"" Dec 13 13:34:11.569871 containerd[1540]: time="2024-12-13T13:34:11.569190072Z" level=info msg="Ensure that sandbox 0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1 in task-service has been cleanup successfully" Dec 13 13:34:11.570106 containerd[1540]: time="2024-12-13T13:34:11.570083884Z" level=info msg="TearDown network for sandbox \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\" successfully" Dec 13 13:34:11.570212 containerd[1540]: time="2024-12-13T13:34:11.570201188Z" level=info msg="StopPodSandbox for \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\" returns successfully" Dec 13 13:34:11.570718 containerd[1540]: time="2024-12-13T13:34:11.570596672Z" level=info msg="StopPodSandbox for \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\"" Dec 13 13:34:11.570976 containerd[1540]: time="2024-12-13T13:34:11.570964900Z" level=info msg="TearDown network for sandbox \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\" successfully" Dec 13 13:34:11.571177 containerd[1540]: time="2024-12-13T13:34:11.571116862Z" level=info msg="StopPodSandbox for \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\" returns successfully" Dec 13 13:34:11.571206 containerd[1540]: time="2024-12-13T13:34:11.570597906Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-55476b5df-rctdv,Uid:2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3,Namespace:calico-apiserver,Attempt:3,}" Dec 13 13:34:11.578631 containerd[1540]: time="2024-12-13T13:34:11.578606721Z" level=info msg="StopPodSandbox for \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\"" Dec 13 13:34:11.578768 containerd[1540]: time="2024-12-13T13:34:11.578668214Z" level=info msg="TearDown network for sandbox \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\" successfully" Dec 13 13:34:11.578768 containerd[1540]: time="2024-12-13T13:34:11.578675717Z" level=info msg="StopPodSandbox for \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\" returns successfully" Dec 13 13:34:11.579267 kubelet[2835]: I1213 13:34:11.579164 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830" Dec 13 13:34:11.583595 containerd[1540]: time="2024-12-13T13:34:11.583502622Z" level=info msg="StopPodSandbox for \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\"" Dec 13 13:34:11.583924 containerd[1540]: time="2024-12-13T13:34:11.583848054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74874996db-hwhtg,Uid:1a917722-7626-420c-b334-c54df1962ff7,Namespace:calico-system,Attempt:3,}" Dec 13 13:34:11.584856 containerd[1540]: time="2024-12-13T13:34:11.584844003Z" level=info msg="Ensure that sandbox e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830 in task-service has been cleanup successfully" Dec 13 13:34:11.585419 containerd[1540]: time="2024-12-13T13:34:11.585349650Z" level=info msg="TearDown network for sandbox \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\" successfully" Dec 13 13:34:11.585419 containerd[1540]: time="2024-12-13T13:34:11.585361836Z" level=info msg="StopPodSandbox for 
\"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\" returns successfully" Dec 13 13:34:11.608657 containerd[1540]: time="2024-12-13T13:34:11.608529091Z" level=info msg="StopPodSandbox for \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\"" Dec 13 13:34:11.608657 containerd[1540]: time="2024-12-13T13:34:11.608594838Z" level=info msg="TearDown network for sandbox \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\" successfully" Dec 13 13:34:11.608657 containerd[1540]: time="2024-12-13T13:34:11.608601677Z" level=info msg="StopPodSandbox for \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\" returns successfully" Dec 13 13:34:11.609632 containerd[1540]: time="2024-12-13T13:34:11.609618962Z" level=info msg="StopPodSandbox for \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\"" Dec 13 13:34:11.609792 containerd[1540]: time="2024-12-13T13:34:11.609782271Z" level=info msg="TearDown network for sandbox \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\" successfully" Dec 13 13:34:11.609937 containerd[1540]: time="2024-12-13T13:34:11.609929060Z" level=info msg="StopPodSandbox for \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\" returns successfully" Dec 13 13:34:11.610653 containerd[1540]: time="2024-12-13T13:34:11.610641705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-57pkw,Uid:044a2624-b2b5-4815-a892-56a4b4e7678a,Namespace:kube-system,Attempt:3,}" Dec 13 13:34:11.690630 containerd[1540]: time="2024-12-13T13:34:11.689979002Z" level=error msg="Failed to destroy network for sandbox \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.692016 containerd[1540]: time="2024-12-13T13:34:11.691832420Z" 
level=error msg="Failed to destroy network for sandbox \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.697775 containerd[1540]: time="2024-12-13T13:34:11.697655484Z" level=error msg="encountered an error cleaning up failed sandbox \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.697775 containerd[1540]: time="2024-12-13T13:34:11.697717448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vvl2m,Uid:95ca7197-6257-4f74-a945-b91e5b94e808,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.697875 containerd[1540]: time="2024-12-13T13:34:11.696421623Z" level=error msg="encountered an error cleaning up failed sandbox \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.697875 containerd[1540]: time="2024-12-13T13:34:11.697828730Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-txx74,Uid:594d5ede-fccb-4fa8-b157-5696299be69a,Namespace:calico-apiserver,Attempt:3,} 
failed, error" error="failed to setup network for sandbox \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.700147 systemd[1]: run-netns-cni\x2d3d1e8934\x2d133b\x2dc1c4\x2de23e\x2dfd748b6cdf06.mount: Deactivated successfully. Dec 13 13:34:11.700204 systemd[1]: run-netns-cni\x2dd9a16d02\x2d2b50\x2d1982\x2de5a5\x2d3a257c7c5fdf.mount: Deactivated successfully. Dec 13 13:34:11.700881 kubelet[2835]: E1213 13:34:11.700731 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.700881 kubelet[2835]: E1213 13:34:11.700773 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vvl2m" Dec 13 13:34:11.700881 kubelet[2835]: E1213 13:34:11.700789 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-vvl2m" Dec 13 13:34:11.700881 kubelet[2835]: E1213 13:34:11.700798 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.700995 kubelet[2835]: E1213 13:34:11.700822 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-vvl2m_kube-system(95ca7197-6257-4f74-a945-b91e5b94e808)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-vvl2m_kube-system(95ca7197-6257-4f74-a945-b91e5b94e808)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-vvl2m" podUID="95ca7197-6257-4f74-a945-b91e5b94e808" Dec 13 13:34:11.700995 kubelet[2835]: E1213 13:34:11.700821 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-txx74" Dec 13 13:34:11.700995 kubelet[2835]: E1213 13:34:11.700841 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-txx74" Dec 13 13:34:11.701078 kubelet[2835]: E1213 13:34:11.700863 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55476b5df-txx74_calico-apiserver(594d5ede-fccb-4fa8-b157-5696299be69a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55476b5df-txx74_calico-apiserver(594d5ede-fccb-4fa8-b157-5696299be69a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55476b5df-txx74" podUID="594d5ede-fccb-4fa8-b157-5696299be69a" Dec 13 13:34:11.701479 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1-shm.mount: Deactivated successfully. Dec 13 13:34:11.701599 systemd[1]: run-netns-cni\x2d1bbe2d6c\x2d5c47\x2d3e6a\x2d3503\x2d83f2d0b95020.mount: Deactivated successfully. Dec 13 13:34:11.701639 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30-shm.mount: Deactivated successfully. Dec 13 13:34:11.701680 systemd[1]: run-netns-cni\x2dc5abe7e5\x2df46a\x2d08b1\x2d2e6b\x2d5b3bc6d3f035.mount: Deactivated successfully. Dec 13 13:34:11.701713 systemd[1]: run-netns-cni\x2d074d033c\x2d02bb\x2d9028\x2d28dd\x2d5952e91930ed.mount: Deactivated successfully. 
Dec 13 13:34:11.701745 systemd[1]: run-netns-cni\x2db7aff2c0\x2d2289\x2d29a0\x2da582\x2d57027ddd03b6.mount: Deactivated successfully. Dec 13 13:34:11.707194 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938-shm.mount: Deactivated successfully. Dec 13 13:34:11.758869 containerd[1540]: time="2024-12-13T13:34:11.758714287Z" level=error msg="Failed to destroy network for sandbox \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.760590 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0-shm.mount: Deactivated successfully. Dec 13 13:34:11.769953 containerd[1540]: time="2024-12-13T13:34:11.764421909Z" level=error msg="encountered an error cleaning up failed sandbox \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.769953 containerd[1540]: time="2024-12-13T13:34:11.764474901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-rctdv,Uid:2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.770083 kubelet[2835]: E1213 13:34:11.764644 2835 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.770083 kubelet[2835]: E1213 13:34:11.764678 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" Dec 13 13:34:11.770083 kubelet[2835]: E1213 13:34:11.764692 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" Dec 13 13:34:11.770582 kubelet[2835]: E1213 13:34:11.764738 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55476b5df-rctdv_calico-apiserver(2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55476b5df-rctdv_calico-apiserver(2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" podUID="2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3" Dec 13 13:34:11.782710 containerd[1540]: time="2024-12-13T13:34:11.782677607Z" level=error msg="Failed to destroy network for sandbox \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.784484 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12-shm.mount: Deactivated successfully. Dec 13 13:34:11.787370 containerd[1540]: time="2024-12-13T13:34:11.787343789Z" level=error msg="Failed to destroy network for sandbox \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.789502 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039-shm.mount: Deactivated successfully. 
Dec 13 13:34:11.793055 containerd[1540]: time="2024-12-13T13:34:11.792966344Z" level=error msg="Failed to destroy network for sandbox \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.795562 containerd[1540]: time="2024-12-13T13:34:11.795534152Z" level=error msg="encountered an error cleaning up failed sandbox \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.795602 containerd[1540]: time="2024-12-13T13:34:11.795584109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-57pkw,Uid:044a2624-b2b5-4815-a892-56a4b4e7678a,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.795860 containerd[1540]: time="2024-12-13T13:34:11.795843745Z" level=error msg="encountered an error cleaning up failed sandbox \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.795894 containerd[1540]: time="2024-12-13T13:34:11.795866547Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-74874996db-hwhtg,Uid:1a917722-7626-420c-b334-c54df1962ff7,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.799069 kubelet[2835]: E1213 13:34:11.799056 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.799405 kubelet[2835]: E1213 13:34:11.799284 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74874996db-hwhtg" Dec 13 13:34:11.799405 kubelet[2835]: E1213 13:34:11.799304 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74874996db-hwhtg" Dec 13 13:34:11.799405 kubelet[2835]: E1213 13:34:11.799352 2835 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74874996db-hwhtg_calico-system(1a917722-7626-420c-b334-c54df1962ff7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74874996db-hwhtg_calico-system(1a917722-7626-420c-b334-c54df1962ff7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74874996db-hwhtg" podUID="1a917722-7626-420c-b334-c54df1962ff7" Dec 13 13:34:11.799527 kubelet[2835]: E1213 13:34:11.799362 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.799527 kubelet[2835]: E1213 13:34:11.799383 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-57pkw" Dec 13 13:34:11.799527 kubelet[2835]: E1213 13:34:11.799402 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-57pkw" Dec 13 13:34:11.799586 kubelet[2835]: E1213 13:34:11.799426 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-57pkw_kube-system(044a2624-b2b5-4815-a892-56a4b4e7678a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-57pkw_kube-system(044a2624-b2b5-4815-a892-56a4b4e7678a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-57pkw" podUID="044a2624-b2b5-4815-a892-56a4b4e7678a" Dec 13 13:34:11.977298 containerd[1540]: time="2024-12-13T13:34:11.977265379Z" level=error msg="encountered an error cleaning up failed sandbox \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.978085 containerd[1540]: time="2024-12-13T13:34:11.977316654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t28ks,Uid:6d6a0031-7d0e-4f38-97a2-6db6c2123341,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.978115 kubelet[2835]: E1213 13:34:11.977448 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:11.978115 kubelet[2835]: E1213 13:34:11.977479 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t28ks" Dec 13 13:34:11.978115 kubelet[2835]: E1213 13:34:11.977493 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t28ks" Dec 13 13:34:11.978182 kubelet[2835]: E1213 13:34:11.977529 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t28ks_calico-system(6d6a0031-7d0e-4f38-97a2-6db6c2123341)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t28ks_calico-system(6d6a0031-7d0e-4f38-97a2-6db6c2123341)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t28ks" podUID="6d6a0031-7d0e-4f38-97a2-6db6c2123341" Dec 13 13:34:12.581903 kubelet[2835]: I1213 13:34:12.581889 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74" Dec 13 13:34:12.582298 containerd[1540]: time="2024-12-13T13:34:12.582276570Z" level=info msg="StopPodSandbox for \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\"" Dec 13 13:34:12.595296 containerd[1540]: time="2024-12-13T13:34:12.595121243Z" level=info msg="Ensure that sandbox d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74 in task-service has been cleanup successfully" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.596247956Z" level=info msg="TearDown network for sandbox \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\" successfully" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.596257813Z" level=info msg="StopPodSandbox for \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\" returns successfully" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.596340269Z" level=info msg="StopPodSandbox for \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\"" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.596431383Z" level=info msg="Ensure that sandbox 2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938 in task-service has been cleanup successfully" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.596689501Z" level=info msg="TearDown network for sandbox \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\" 
successfully" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.596697968Z" level=info msg="StopPodSandbox for \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\" returns successfully" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.596870121Z" level=info msg="StopPodSandbox for \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\"" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.596983764Z" level=info msg="TearDown network for sandbox \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\" successfully" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.596994054Z" level=info msg="StopPodSandbox for \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\" returns successfully" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.597183862Z" level=info msg="StopPodSandbox for \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\"" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.597225222Z" level=info msg="TearDown network for sandbox \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\" successfully" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.597231141Z" level=info msg="StopPodSandbox for \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\" returns successfully" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.597658136Z" level=info msg="StopPodSandbox for \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\"" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.597695214Z" level=info msg="TearDown network for sandbox \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\" successfully" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.597701673Z" level=info msg="StopPodSandbox for \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\" returns 
successfully" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.597773365Z" level=info msg="StopPodSandbox for \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\"" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.597803013Z" level=info msg="TearDown network for sandbox \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\" successfully" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.597808133Z" level=info msg="StopPodSandbox for \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\" returns successfully" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.597847368Z" level=info msg="StopPodSandbox for \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\"" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.597877238Z" level=info msg="TearDown network for sandbox \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\" successfully" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.597881961Z" level=info msg="StopPodSandbox for \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\" returns successfully" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.597965579Z" level=info msg="StopPodSandbox for \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\"" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.597995651Z" level=info msg="TearDown network for sandbox \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\" successfully" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.598000602Z" level=info msg="StopPodSandbox for \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\" returns successfully" Dec 13 13:34:12.598454 containerd[1540]: time="2024-12-13T13:34:12.598060739Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-vvl2m,Uid:95ca7197-6257-4f74-a945-b91e5b94e808,Namespace:kube-system,Attempt:4,}" Dec 13 13:34:12.602481 kubelet[2835]: I1213 13:34:12.595464 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938" Dec 13 13:34:12.602513 containerd[1540]: time="2024-12-13T13:34:12.601879528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-txx74,Uid:594d5ede-fccb-4fa8-b157-5696299be69a,Namespace:calico-apiserver,Attempt:4,}" Dec 13 13:34:12.608787 kubelet[2835]: I1213 13:34:12.608546 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12" Dec 13 13:34:12.618398 containerd[1540]: time="2024-12-13T13:34:12.618295885Z" level=info msg="StopPodSandbox for \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\"" Dec 13 13:34:12.618470 containerd[1540]: time="2024-12-13T13:34:12.618420762Z" level=info msg="Ensure that sandbox d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12 in task-service has been cleanup successfully" Dec 13 13:34:12.618884 containerd[1540]: time="2024-12-13T13:34:12.618868852Z" level=info msg="TearDown network for sandbox \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\" successfully" Dec 13 13:34:12.618884 containerd[1540]: time="2024-12-13T13:34:12.618881967Z" level=info msg="StopPodSandbox for \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\" returns successfully" Dec 13 13:34:12.620047 containerd[1540]: time="2024-12-13T13:34:12.620032861Z" level=info msg="StopPodSandbox for \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\"" Dec 13 13:34:12.620278 containerd[1540]: time="2024-12-13T13:34:12.620076159Z" level=info msg="TearDown network for sandbox 
\"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\" successfully" Dec 13 13:34:12.620278 containerd[1540]: time="2024-12-13T13:34:12.620084603Z" level=info msg="StopPodSandbox for \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\" returns successfully" Dec 13 13:34:12.623714 containerd[1540]: time="2024-12-13T13:34:12.623698162Z" level=info msg="StopPodSandbox for \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\"" Dec 13 13:34:12.623753 containerd[1540]: time="2024-12-13T13:34:12.623742007Z" level=info msg="TearDown network for sandbox \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\" successfully" Dec 13 13:34:12.623753 containerd[1540]: time="2024-12-13T13:34:12.623748717Z" level=info msg="StopPodSandbox for \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\" returns successfully" Dec 13 13:34:12.626048 containerd[1540]: time="2024-12-13T13:34:12.626031754Z" level=info msg="StopPodSandbox for \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\"" Dec 13 13:34:12.626087 containerd[1540]: time="2024-12-13T13:34:12.626078978Z" level=info msg="TearDown network for sandbox \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\" successfully" Dec 13 13:34:12.626087 containerd[1540]: time="2024-12-13T13:34:12.626085340Z" level=info msg="StopPodSandbox for \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\" returns successfully" Dec 13 13:34:12.628164 containerd[1540]: time="2024-12-13T13:34:12.628148148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t28ks,Uid:6d6a0031-7d0e-4f38-97a2-6db6c2123341,Namespace:calico-system,Attempt:4,}" Dec 13 13:34:12.634814 kubelet[2835]: I1213 13:34:12.632601 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0" Dec 13 13:34:12.637425 containerd[1540]: 
time="2024-12-13T13:34:12.637398172Z" level=info msg="StopPodSandbox for \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\"" Dec 13 13:34:12.638869 containerd[1540]: time="2024-12-13T13:34:12.637531108Z" level=info msg="Ensure that sandbox bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0 in task-service has been cleanup successfully" Dec 13 13:34:12.638869 containerd[1540]: time="2024-12-13T13:34:12.637731385Z" level=info msg="TearDown network for sandbox \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\" successfully" Dec 13 13:34:12.638869 containerd[1540]: time="2024-12-13T13:34:12.637740907Z" level=info msg="StopPodSandbox for \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\" returns successfully" Dec 13 13:34:12.638869 containerd[1540]: time="2024-12-13T13:34:12.638466780Z" level=info msg="StopPodSandbox for \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\"" Dec 13 13:34:12.638869 containerd[1540]: time="2024-12-13T13:34:12.638503659Z" level=info msg="TearDown network for sandbox \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\" successfully" Dec 13 13:34:12.638869 containerd[1540]: time="2024-12-13T13:34:12.638509816Z" level=info msg="StopPodSandbox for \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\" returns successfully" Dec 13 13:34:12.638869 containerd[1540]: time="2024-12-13T13:34:12.638800852Z" level=info msg="StopPodSandbox for \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\"" Dec 13 13:34:12.638869 containerd[1540]: time="2024-12-13T13:34:12.638834739Z" level=info msg="TearDown network for sandbox \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\" successfully" Dec 13 13:34:12.638869 containerd[1540]: time="2024-12-13T13:34:12.638840349Z" level=info msg="StopPodSandbox for \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\" returns successfully" Dec 13 13:34:12.645607 
containerd[1540]: time="2024-12-13T13:34:12.639244153Z" level=info msg="StopPodSandbox for \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\"" Dec 13 13:34:12.645607 containerd[1540]: time="2024-12-13T13:34:12.639281669Z" level=info msg="TearDown network for sandbox \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\" successfully" Dec 13 13:34:12.645607 containerd[1540]: time="2024-12-13T13:34:12.639287480Z" level=info msg="StopPodSandbox for \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\" returns successfully" Dec 13 13:34:12.645607 containerd[1540]: time="2024-12-13T13:34:12.639519881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-rctdv,Uid:2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3,Namespace:calico-apiserver,Attempt:4,}" Dec 13 13:34:12.645607 containerd[1540]: time="2024-12-13T13:34:12.641498838Z" level=info msg="StopPodSandbox for \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\"" Dec 13 13:34:12.645607 containerd[1540]: time="2024-12-13T13:34:12.641622772Z" level=info msg="Ensure that sandbox e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d in task-service has been cleanup successfully" Dec 13 13:34:12.645607 containerd[1540]: time="2024-12-13T13:34:12.642276399Z" level=info msg="TearDown network for sandbox \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\" successfully" Dec 13 13:34:12.645607 containerd[1540]: time="2024-12-13T13:34:12.642285969Z" level=info msg="StopPodSandbox for \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\" returns successfully" Dec 13 13:34:12.645607 containerd[1540]: time="2024-12-13T13:34:12.642577604Z" level=info msg="StopPodSandbox for \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\"" Dec 13 13:34:12.645607 containerd[1540]: time="2024-12-13T13:34:12.642632191Z" level=info msg="TearDown network for sandbox 
\"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\" successfully" Dec 13 13:34:12.645607 containerd[1540]: time="2024-12-13T13:34:12.642641679Z" level=info msg="StopPodSandbox for \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\" returns successfully" Dec 13 13:34:12.645607 containerd[1540]: time="2024-12-13T13:34:12.643655273Z" level=info msg="StopPodSandbox for \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\"" Dec 13 13:34:12.645607 containerd[1540]: time="2024-12-13T13:34:12.643659232Z" level=info msg="StopPodSandbox for \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\"" Dec 13 13:34:12.652104 kubelet[2835]: I1213 13:34:12.639953 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d" Dec 13 13:34:12.652104 kubelet[2835]: I1213 13:34:12.643379 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039" Dec 13 13:34:12.655908 containerd[1540]: time="2024-12-13T13:34:12.643757333Z" level=info msg="TearDown network for sandbox \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\" successfully" Dec 13 13:34:12.655908 containerd[1540]: time="2024-12-13T13:34:12.646140470Z" level=info msg="StopPodSandbox for \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\" returns successfully" Dec 13 13:34:12.655908 containerd[1540]: time="2024-12-13T13:34:12.643979479Z" level=info msg="Ensure that sandbox d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039 in task-service has been cleanup successfully" Dec 13 13:34:12.655908 containerd[1540]: time="2024-12-13T13:34:12.646680484Z" level=info msg="TearDown network for sandbox \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\" successfully" Dec 13 13:34:12.655908 containerd[1540]: 
time="2024-12-13T13:34:12.646690472Z" level=info msg="StopPodSandbox for \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\" returns successfully" Dec 13 13:34:12.655908 containerd[1540]: time="2024-12-13T13:34:12.646935807Z" level=info msg="StopPodSandbox for \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\"" Dec 13 13:34:12.655908 containerd[1540]: time="2024-12-13T13:34:12.647038145Z" level=info msg="StopPodSandbox for \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\"" Dec 13 13:34:12.655908 containerd[1540]: time="2024-12-13T13:34:12.647078212Z" level=info msg="TearDown network for sandbox \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\" successfully" Dec 13 13:34:12.655908 containerd[1540]: time="2024-12-13T13:34:12.647084287Z" level=info msg="StopPodSandbox for \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\" returns successfully" Dec 13 13:34:12.655908 containerd[1540]: time="2024-12-13T13:34:12.647188164Z" level=info msg="TearDown network for sandbox \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\" successfully" Dec 13 13:34:12.655908 containerd[1540]: time="2024-12-13T13:34:12.647196052Z" level=info msg="StopPodSandbox for \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\" returns successfully" Dec 13 13:34:12.655908 containerd[1540]: time="2024-12-13T13:34:12.647466682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74874996db-hwhtg,Uid:1a917722-7626-420c-b334-c54df1962ff7,Namespace:calico-system,Attempt:4,}" Dec 13 13:34:12.655908 containerd[1540]: time="2024-12-13T13:34:12.647796768Z" level=info msg="StopPodSandbox for \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\"" Dec 13 13:34:12.655908 containerd[1540]: time="2024-12-13T13:34:12.647945659Z" level=info msg="TearDown network for sandbox \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\" 
successfully" Dec 13 13:34:12.655908 containerd[1540]: time="2024-12-13T13:34:12.647952234Z" level=info msg="StopPodSandbox for \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\" returns successfully" Dec 13 13:34:12.655908 containerd[1540]: time="2024-12-13T13:34:12.648066470Z" level=info msg="StopPodSandbox for \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\"" Dec 13 13:34:12.655908 containerd[1540]: time="2024-12-13T13:34:12.648219385Z" level=info msg="TearDown network for sandbox \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\" successfully" Dec 13 13:34:12.655908 containerd[1540]: time="2024-12-13T13:34:12.649000122Z" level=info msg="StopPodSandbox for \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\" returns successfully" Dec 13 13:34:12.655908 containerd[1540]: time="2024-12-13T13:34:12.649206006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-57pkw,Uid:044a2624-b2b5-4815-a892-56a4b4e7678a,Namespace:kube-system,Attempt:4,}" Dec 13 13:34:12.696153 systemd[1]: run-netns-cni\x2df1289221\x2d8705\x2dbb00\x2d10df\x2d49700e2f4e41.mount: Deactivated successfully. Dec 13 13:34:12.696482 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d-shm.mount: Deactivated successfully. Dec 13 13:34:12.696537 systemd[1]: run-netns-cni\x2df738eae6\x2d0891\x2dd6e9\x2d790e\x2d0c9a350db0dd.mount: Deactivated successfully. Dec 13 13:34:12.696575 systemd[1]: run-netns-cni\x2dff19aad1\x2dd8ff\x2d4213\x2d9051\x2d37f389e63467.mount: Deactivated successfully. Dec 13 13:34:12.696606 systemd[1]: run-netns-cni\x2d6d3efe55\x2dbb7d\x2d396e\x2d7c06\x2dbfef6d31bc07.mount: Deactivated successfully. Dec 13 13:34:12.696637 systemd[1]: run-netns-cni\x2d6b7e9a14\x2db53e\x2d9199\x2d1097\x2dbfff7b23592d.mount: Deactivated successfully. 
Dec 13 13:34:12.696667 systemd[1]: run-netns-cni\x2d138f1d1c\x2d0b48\x2da30d\x2d1bf7\x2d712ee8e08b92.mount: Deactivated successfully. Dec 13 13:34:12.950300 containerd[1540]: time="2024-12-13T13:34:12.950209733Z" level=error msg="Failed to destroy network for sandbox \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:12.951649 containerd[1540]: time="2024-12-13T13:34:12.951631763Z" level=error msg="encountered an error cleaning up failed sandbox \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:12.951841 containerd[1540]: time="2024-12-13T13:34:12.951726375Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vvl2m,Uid:95ca7197-6257-4f74-a945-b91e5b94e808,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:12.958484 kubelet[2835]: E1213 13:34:12.957907 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:12.958484 
kubelet[2835]: E1213 13:34:12.957953 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vvl2m" Dec 13 13:34:12.958484 kubelet[2835]: E1213 13:34:12.957968 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vvl2m" Dec 13 13:34:12.958619 kubelet[2835]: E1213 13:34:12.958004 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-vvl2m_kube-system(95ca7197-6257-4f74-a945-b91e5b94e808)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-vvl2m_kube-system(95ca7197-6257-4f74-a945-b91e5b94e808)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-vvl2m" podUID="95ca7197-6257-4f74-a945-b91e5b94e808" Dec 13 13:34:12.973141 containerd[1540]: time="2024-12-13T13:34:12.973058535Z" level=error msg="Failed to destroy network for sandbox \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:12.974145 containerd[1540]: time="2024-12-13T13:34:12.973373566Z" level=error msg="encountered an error cleaning up failed sandbox \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:12.974145 containerd[1540]: time="2024-12-13T13:34:12.973409588Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-txx74,Uid:594d5ede-fccb-4fa8-b157-5696299be69a,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:12.974375 kubelet[2835]: E1213 13:34:12.973534 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:12.974375 kubelet[2835]: E1213 13:34:12.973592 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-txx74" Dec 13 13:34:12.974375 kubelet[2835]: E1213 13:34:12.973610 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-txx74" Dec 13 13:34:12.974447 kubelet[2835]: E1213 13:34:12.973657 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55476b5df-txx74_calico-apiserver(594d5ede-fccb-4fa8-b157-5696299be69a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55476b5df-txx74_calico-apiserver(594d5ede-fccb-4fa8-b157-5696299be69a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55476b5df-txx74" podUID="594d5ede-fccb-4fa8-b157-5696299be69a" Dec 13 13:34:13.002222 containerd[1540]: time="2024-12-13T13:34:13.002139662Z" level=error msg="Failed to destroy network for sandbox \"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.003463 containerd[1540]: time="2024-12-13T13:34:13.003417002Z" level=error msg="encountered an error cleaning up failed sandbox 
\"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.003903 containerd[1540]: time="2024-12-13T13:34:13.003869711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-57pkw,Uid:044a2624-b2b5-4815-a892-56a4b4e7678a,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.004122 containerd[1540]: time="2024-12-13T13:34:13.003687572Z" level=error msg="Failed to destroy network for sandbox \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.004847 kubelet[2835]: E1213 13:34:13.004226 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.004847 kubelet[2835]: E1213 13:34:13.004286 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-57pkw" Dec 13 13:34:13.004847 kubelet[2835]: E1213 13:34:13.004300 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-57pkw" Dec 13 13:34:13.004936 kubelet[2835]: E1213 13:34:13.004335 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-57pkw_kube-system(044a2624-b2b5-4815-a892-56a4b4e7678a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-57pkw_kube-system(044a2624-b2b5-4815-a892-56a4b4e7678a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-57pkw" podUID="044a2624-b2b5-4815-a892-56a4b4e7678a" Dec 13 13:34:13.007035 containerd[1540]: time="2024-12-13T13:34:13.006988472Z" level=error msg="encountered an error cleaning up failed sandbox \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.007220 containerd[1540]: time="2024-12-13T13:34:13.007205588Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74874996db-hwhtg,Uid:1a917722-7626-420c-b334-c54df1962ff7,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.007834 kubelet[2835]: E1213 13:34:13.007723 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.007834 kubelet[2835]: E1213 13:34:13.007766 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74874996db-hwhtg" Dec 13 13:34:13.007834 kubelet[2835]: E1213 13:34:13.007780 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74874996db-hwhtg" Dec 13 13:34:13.008017 kubelet[2835]: E1213 
13:34:13.007819 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74874996db-hwhtg_calico-system(1a917722-7626-420c-b334-c54df1962ff7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74874996db-hwhtg_calico-system(1a917722-7626-420c-b334-c54df1962ff7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74874996db-hwhtg" podUID="1a917722-7626-420c-b334-c54df1962ff7" Dec 13 13:34:13.009254 containerd[1540]: time="2024-12-13T13:34:13.008487402Z" level=error msg="Failed to destroy network for sandbox \"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.009628 containerd[1540]: time="2024-12-13T13:34:13.009604084Z" level=error msg="encountered an error cleaning up failed sandbox \"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.009684 containerd[1540]: time="2024-12-13T13:34:13.009657472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t28ks,Uid:6d6a0031-7d0e-4f38-97a2-6db6c2123341,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox 
\"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.009843 kubelet[2835]: E1213 13:34:13.009817 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.009886 kubelet[2835]: E1213 13:34:13.009852 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t28ks" Dec 13 13:34:13.009886 kubelet[2835]: E1213 13:34:13.009874 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t28ks" Dec 13 13:34:13.010194 kubelet[2835]: E1213 13:34:13.009918 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t28ks_calico-system(6d6a0031-7d0e-4f38-97a2-6db6c2123341)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-t28ks_calico-system(6d6a0031-7d0e-4f38-97a2-6db6c2123341)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t28ks" podUID="6d6a0031-7d0e-4f38-97a2-6db6c2123341" Dec 13 13:34:13.014571 containerd[1540]: time="2024-12-13T13:34:13.014509545Z" level=error msg="Failed to destroy network for sandbox \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.014949 containerd[1540]: time="2024-12-13T13:34:13.014883942Z" level=error msg="encountered an error cleaning up failed sandbox \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.014949 containerd[1540]: time="2024-12-13T13:34:13.014925899Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-rctdv,Uid:2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.022778 kubelet[2835]: E1213 13:34:13.022646 2835 remote_runtime.go:193] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.022778 kubelet[2835]: E1213 13:34:13.022684 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" Dec 13 13:34:13.022778 kubelet[2835]: E1213 13:34:13.022698 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" Dec 13 13:34:13.022907 kubelet[2835]: E1213 13:34:13.022733 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55476b5df-rctdv_calico-apiserver(2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55476b5df-rctdv_calico-apiserver(2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" podUID="2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3" Dec 13 13:34:13.107456 containerd[1540]: time="2024-12-13T13:34:13.106723821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 13:34:13.111816 containerd[1540]: time="2024-12-13T13:34:13.111787007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 4.627340779s" Dec 13 13:34:13.111931 containerd[1540]: time="2024-12-13T13:34:13.111922727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 13:34:13.115460 containerd[1540]: time="2024-12-13T13:34:13.115053118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:13.121110 containerd[1540]: time="2024-12-13T13:34:13.121083613Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:13.121674 containerd[1540]: time="2024-12-13T13:34:13.121556964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:13.172576 containerd[1540]: time="2024-12-13T13:34:13.172546381Z" level=info msg="CreateContainer within sandbox 
\"0b3768967d0d4fcfda53542b9d3f73f0562e05ced3d927c533f6bfd56cb27159\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 13:34:13.211323 containerd[1540]: time="2024-12-13T13:34:13.211182625Z" level=info msg="CreateContainer within sandbox \"0b3768967d0d4fcfda53542b9d3f73f0562e05ced3d927c533f6bfd56cb27159\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fb925451c28d035e2f2fb4874031030be9d857fd85c62256109520a2b3ab5cf9\"" Dec 13 13:34:13.218090 containerd[1540]: time="2024-12-13T13:34:13.218005867Z" level=info msg="StartContainer for \"fb925451c28d035e2f2fb4874031030be9d857fd85c62256109520a2b3ab5cf9\"" Dec 13 13:34:13.360317 systemd[1]: Started cri-containerd-fb925451c28d035e2f2fb4874031030be9d857fd85c62256109520a2b3ab5cf9.scope - libcontainer container fb925451c28d035e2f2fb4874031030be9d857fd85c62256109520a2b3ab5cf9. Dec 13 13:34:13.415863 containerd[1540]: time="2024-12-13T13:34:13.415835874Z" level=info msg="StartContainer for \"fb925451c28d035e2f2fb4874031030be9d857fd85c62256109520a2b3ab5cf9\" returns successfully" Dec 13 13:34:13.647155 kubelet[2835]: I1213 13:34:13.647140 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b" Dec 13 13:34:13.648393 containerd[1540]: time="2024-12-13T13:34:13.647906390Z" level=info msg="StopPodSandbox for \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\"" Dec 13 13:34:13.648393 containerd[1540]: time="2024-12-13T13:34:13.648025165Z" level=info msg="Ensure that sandbox 08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b in task-service has been cleanup successfully" Dec 13 13:34:13.648894 containerd[1540]: time="2024-12-13T13:34:13.648649835Z" level=info msg="TearDown network for sandbox \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\" successfully" Dec 13 13:34:13.648894 containerd[1540]: time="2024-12-13T13:34:13.648679324Z" 
level=info msg="StopPodSandbox for \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\" returns successfully" Dec 13 13:34:13.648964 containerd[1540]: time="2024-12-13T13:34:13.648943725Z" level=info msg="StopPodSandbox for \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\"" Dec 13 13:34:13.649000 containerd[1540]: time="2024-12-13T13:34:13.648987748Z" level=info msg="TearDown network for sandbox \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\" successfully" Dec 13 13:34:13.649000 containerd[1540]: time="2024-12-13T13:34:13.648995484Z" level=info msg="StopPodSandbox for \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\" returns successfully" Dec 13 13:34:13.649202 containerd[1540]: time="2024-12-13T13:34:13.649188976Z" level=info msg="StopPodSandbox for \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\"" Dec 13 13:34:13.649407 containerd[1540]: time="2024-12-13T13:34:13.649321984Z" level=info msg="TearDown network for sandbox \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\" successfully" Dec 13 13:34:13.649407 containerd[1540]: time="2024-12-13T13:34:13.649401497Z" level=info msg="StopPodSandbox for \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\" returns successfully" Dec 13 13:34:13.649699 containerd[1540]: time="2024-12-13T13:34:13.649680770Z" level=info msg="StopPodSandbox for \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\"" Dec 13 13:34:13.650069 containerd[1540]: time="2024-12-13T13:34:13.649814575Z" level=info msg="TearDown network for sandbox \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\" successfully" Dec 13 13:34:13.650069 containerd[1540]: time="2024-12-13T13:34:13.649823419Z" level=info msg="StopPodSandbox for \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\" returns successfully" Dec 13 13:34:13.650120 kubelet[2835]: I1213 13:34:13.649841 2835 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104" Dec 13 13:34:13.650158 containerd[1540]: time="2024-12-13T13:34:13.650092315Z" level=info msg="StopPodSandbox for \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\"" Dec 13 13:34:13.650158 containerd[1540]: time="2024-12-13T13:34:13.650128063Z" level=info msg="TearDown network for sandbox \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\" successfully" Dec 13 13:34:13.650158 containerd[1540]: time="2024-12-13T13:34:13.650134646Z" level=info msg="StopPodSandbox for \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\" returns successfully" Dec 13 13:34:13.650219 containerd[1540]: time="2024-12-13T13:34:13.650209398Z" level=info msg="StopPodSandbox for \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\"" Dec 13 13:34:13.650398 containerd[1540]: time="2024-12-13T13:34:13.650307674Z" level=info msg="Ensure that sandbox ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104 in task-service has been cleanup successfully" Dec 13 13:34:13.650433 containerd[1540]: time="2024-12-13T13:34:13.650426552Z" level=info msg="TearDown network for sandbox \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\" successfully" Dec 13 13:34:13.650453 containerd[1540]: time="2024-12-13T13:34:13.650433070Z" level=info msg="StopPodSandbox for \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\" returns successfully" Dec 13 13:34:13.652728 containerd[1540]: time="2024-12-13T13:34:13.651266870Z" level=info msg="StopPodSandbox for \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\"" Dec 13 13:34:13.652728 containerd[1540]: time="2024-12-13T13:34:13.651317619Z" level=info msg="TearDown network for sandbox \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\" successfully" Dec 13 13:34:13.652728 
containerd[1540]: time="2024-12-13T13:34:13.651324300Z" level=info msg="StopPodSandbox for \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\" returns successfully" Dec 13 13:34:13.652728 containerd[1540]: time="2024-12-13T13:34:13.651378907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vvl2m,Uid:95ca7197-6257-4f74-a945-b91e5b94e808,Namespace:kube-system,Attempt:5,}" Dec 13 13:34:13.652728 containerd[1540]: time="2024-12-13T13:34:13.651752698Z" level=info msg="StopPodSandbox for \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\"" Dec 13 13:34:13.652728 containerd[1540]: time="2024-12-13T13:34:13.651794800Z" level=info msg="TearDown network for sandbox \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\" successfully" Dec 13 13:34:13.652728 containerd[1540]: time="2024-12-13T13:34:13.651800568Z" level=info msg="StopPodSandbox for \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\" returns successfully" Dec 13 13:34:13.652930 containerd[1540]: time="2024-12-13T13:34:13.652910803Z" level=info msg="StopPodSandbox for \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\"" Dec 13 13:34:13.652970 containerd[1540]: time="2024-12-13T13:34:13.652960284Z" level=info msg="TearDown network for sandbox \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\" successfully" Dec 13 13:34:13.652993 containerd[1540]: time="2024-12-13T13:34:13.652969034Z" level=info msg="StopPodSandbox for \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\" returns successfully" Dec 13 13:34:13.653200 containerd[1540]: time="2024-12-13T13:34:13.653182847Z" level=info msg="StopPodSandbox for \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\"" Dec 13 13:34:13.653257 containerd[1540]: time="2024-12-13T13:34:13.653222053Z" level=info msg="TearDown network for sandbox \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\" 
successfully" Dec 13 13:34:13.653257 containerd[1540]: time="2024-12-13T13:34:13.653227793Z" level=info msg="StopPodSandbox for \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\" returns successfully" Dec 13 13:34:13.654049 containerd[1540]: time="2024-12-13T13:34:13.653988071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-rctdv,Uid:2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3,Namespace:calico-apiserver,Attempt:5,}" Dec 13 13:34:13.655520 kubelet[2835]: I1213 13:34:13.655503 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642" Dec 13 13:34:13.656254 containerd[1540]: time="2024-12-13T13:34:13.655813369Z" level=info msg="StopPodSandbox for \"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\"" Dec 13 13:34:13.656254 containerd[1540]: time="2024-12-13T13:34:13.655923043Z" level=info msg="Ensure that sandbox a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642 in task-service has been cleanup successfully" Dec 13 13:34:13.657291 containerd[1540]: time="2024-12-13T13:34:13.657202035Z" level=info msg="TearDown network for sandbox \"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\" successfully" Dec 13 13:34:13.657291 containerd[1540]: time="2024-12-13T13:34:13.657224400Z" level=info msg="StopPodSandbox for \"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\" returns successfully" Dec 13 13:34:13.657643 containerd[1540]: time="2024-12-13T13:34:13.657568488Z" level=info msg="StopPodSandbox for \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\"" Dec 13 13:34:13.657643 containerd[1540]: time="2024-12-13T13:34:13.657614020Z" level=info msg="TearDown network for sandbox \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\" successfully" Dec 13 13:34:13.657643 containerd[1540]: time="2024-12-13T13:34:13.657621436Z" level=info 
msg="StopPodSandbox for \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\" returns successfully" Dec 13 13:34:13.659721 containerd[1540]: time="2024-12-13T13:34:13.658448344Z" level=info msg="StopPodSandbox for \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\"" Dec 13 13:34:13.659721 containerd[1540]: time="2024-12-13T13:34:13.658509515Z" level=info msg="TearDown network for sandbox \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\" successfully" Dec 13 13:34:13.659721 containerd[1540]: time="2024-12-13T13:34:13.658516317Z" level=info msg="StopPodSandbox for \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\" returns successfully" Dec 13 13:34:13.659721 containerd[1540]: time="2024-12-13T13:34:13.658783272Z" level=info msg="StopPodSandbox for \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\"" Dec 13 13:34:13.659721 containerd[1540]: time="2024-12-13T13:34:13.658820090Z" level=info msg="TearDown network for sandbox \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\" successfully" Dec 13 13:34:13.659721 containerd[1540]: time="2024-12-13T13:34:13.658825817Z" level=info msg="StopPodSandbox for \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\" returns successfully" Dec 13 13:34:13.660037 kubelet[2835]: I1213 13:34:13.659933 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018" Dec 13 13:34:13.660489 containerd[1540]: time="2024-12-13T13:34:13.660256037Z" level=info msg="StopPodSandbox for \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\"" Dec 13 13:34:13.660489 containerd[1540]: time="2024-12-13T13:34:13.660363318Z" level=info msg="Ensure that sandbox 4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018 in task-service has been cleanup successfully" Dec 13 13:34:13.660530 containerd[1540]: 
time="2024-12-13T13:34:13.660486146Z" level=info msg="TearDown network for sandbox \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\" successfully" Dec 13 13:34:13.660530 containerd[1540]: time="2024-12-13T13:34:13.660506394Z" level=info msg="StopPodSandbox for \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\" returns successfully" Dec 13 13:34:13.660583 containerd[1540]: time="2024-12-13T13:34:13.660567382Z" level=info msg="StopPodSandbox for \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\"" Dec 13 13:34:13.660718 containerd[1540]: time="2024-12-13T13:34:13.660615734Z" level=info msg="TearDown network for sandbox \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\" successfully" Dec 13 13:34:13.660754 containerd[1540]: time="2024-12-13T13:34:13.660712732Z" level=info msg="StopPodSandbox for \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\" returns successfully" Dec 13 13:34:13.661935 containerd[1540]: time="2024-12-13T13:34:13.660814750Z" level=info msg="StopPodSandbox for \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\"" Dec 13 13:34:13.661935 containerd[1540]: time="2024-12-13T13:34:13.661067661Z" level=info msg="TearDown network for sandbox \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\" successfully" Dec 13 13:34:13.661935 containerd[1540]: time="2024-12-13T13:34:13.661074849Z" level=info msg="StopPodSandbox for \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\" returns successfully" Dec 13 13:34:13.661935 containerd[1540]: time="2024-12-13T13:34:13.661533294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t28ks,Uid:6d6a0031-7d0e-4f38-97a2-6db6c2123341,Namespace:calico-system,Attempt:5,}" Dec 13 13:34:13.661935 containerd[1540]: time="2024-12-13T13:34:13.661700683Z" level=info msg="StopPodSandbox for \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\"" Dec 13 
13:34:13.661935 containerd[1540]: time="2024-12-13T13:34:13.661749090Z" level=info msg="TearDown network for sandbox \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\" successfully" Dec 13 13:34:13.661935 containerd[1540]: time="2024-12-13T13:34:13.661756167Z" level=info msg="StopPodSandbox for \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\" returns successfully" Dec 13 13:34:13.662911 containerd[1540]: time="2024-12-13T13:34:13.662608218Z" level=info msg="StopPodSandbox for \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\"" Dec 13 13:34:13.662911 containerd[1540]: time="2024-12-13T13:34:13.662648596Z" level=info msg="TearDown network for sandbox \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\" successfully" Dec 13 13:34:13.662911 containerd[1540]: time="2024-12-13T13:34:13.662654675Z" level=info msg="StopPodSandbox for \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\" returns successfully" Dec 13 13:34:13.662911 containerd[1540]: time="2024-12-13T13:34:13.662821041Z" level=info msg="StopPodSandbox for \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\"" Dec 13 13:34:13.662911 containerd[1540]: time="2024-12-13T13:34:13.662856320Z" level=info msg="TearDown network for sandbox \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\" successfully" Dec 13 13:34:13.662911 containerd[1540]: time="2024-12-13T13:34:13.662861872Z" level=info msg="StopPodSandbox for \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\" returns successfully" Dec 13 13:34:13.663221 containerd[1540]: time="2024-12-13T13:34:13.663135603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74874996db-hwhtg,Uid:1a917722-7626-420c-b334-c54df1962ff7,Namespace:calico-system,Attempt:5,}" Dec 13 13:34:13.664242 kubelet[2835]: I1213 13:34:13.664217 2835 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c" Dec 13 13:34:13.666715 containerd[1540]: time="2024-12-13T13:34:13.664628980Z" level=info msg="StopPodSandbox for \"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\"" Dec 13 13:34:13.666715 containerd[1540]: time="2024-12-13T13:34:13.664734291Z" level=info msg="Ensure that sandbox d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c in task-service has been cleanup successfully" Dec 13 13:34:13.667270 containerd[1540]: time="2024-12-13T13:34:13.666804939Z" level=info msg="TearDown network for sandbox \"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\" successfully" Dec 13 13:34:13.667270 containerd[1540]: time="2024-12-13T13:34:13.666817746Z" level=info msg="StopPodSandbox for \"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\" returns successfully" Dec 13 13:34:13.667270 containerd[1540]: time="2024-12-13T13:34:13.667008099Z" level=info msg="StopPodSandbox for \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\"" Dec 13 13:34:13.667270 containerd[1540]: time="2024-12-13T13:34:13.667059434Z" level=info msg="TearDown network for sandbox \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\" successfully" Dec 13 13:34:13.667270 containerd[1540]: time="2024-12-13T13:34:13.667065307Z" level=info msg="StopPodSandbox for \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\" returns successfully" Dec 13 13:34:13.668569 containerd[1540]: time="2024-12-13T13:34:13.668335473Z" level=info msg="StopPodSandbox for \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\"" Dec 13 13:34:13.668569 containerd[1540]: time="2024-12-13T13:34:13.668405693Z" level=info msg="TearDown network for sandbox \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\" successfully" Dec 13 13:34:13.668569 containerd[1540]: time="2024-12-13T13:34:13.668413411Z" level=info msg="StopPodSandbox 
for \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\" returns successfully" Dec 13 13:34:13.671505 containerd[1540]: time="2024-12-13T13:34:13.671425956Z" level=info msg="StopPodSandbox for \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\"" Dec 13 13:34:13.671677 containerd[1540]: time="2024-12-13T13:34:13.671621174Z" level=info msg="TearDown network for sandbox \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\" successfully" Dec 13 13:34:13.671857 containerd[1540]: time="2024-12-13T13:34:13.671629654Z" level=info msg="StopPodSandbox for \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\" returns successfully" Dec 13 13:34:13.672438 containerd[1540]: time="2024-12-13T13:34:13.672292921Z" level=info msg="StopPodSandbox for \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\"" Dec 13 13:34:13.672438 containerd[1540]: time="2024-12-13T13:34:13.672390135Z" level=info msg="TearDown network for sandbox \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\" successfully" Dec 13 13:34:13.672438 containerd[1540]: time="2024-12-13T13:34:13.672397696Z" level=info msg="StopPodSandbox for \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\" returns successfully" Dec 13 13:34:13.673087 containerd[1540]: time="2024-12-13T13:34:13.673075345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-57pkw,Uid:044a2624-b2b5-4815-a892-56a4b4e7678a,Namespace:kube-system,Attempt:5,}" Dec 13 13:34:13.674753 kubelet[2835]: I1213 13:34:13.674555 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b" Dec 13 13:34:13.675817 containerd[1540]: time="2024-12-13T13:34:13.675025307Z" level=info msg="StopPodSandbox for \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\"" Dec 13 13:34:13.677280 containerd[1540]: 
time="2024-12-13T13:34:13.676095491Z" level=info msg="Ensure that sandbox 16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b in task-service has been cleanup successfully" Dec 13 13:34:13.678093 containerd[1540]: time="2024-12-13T13:34:13.678027690Z" level=info msg="TearDown network for sandbox \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\" successfully" Dec 13 13:34:13.678093 containerd[1540]: time="2024-12-13T13:34:13.678046740Z" level=info msg="StopPodSandbox for \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\" returns successfully" Dec 13 13:34:13.678978 containerd[1540]: time="2024-12-13T13:34:13.678954677Z" level=info msg="StopPodSandbox for \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\"" Dec 13 13:34:13.679020 containerd[1540]: time="2024-12-13T13:34:13.679012811Z" level=info msg="TearDown network for sandbox \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\" successfully" Dec 13 13:34:13.679044 containerd[1540]: time="2024-12-13T13:34:13.679020386Z" level=info msg="StopPodSandbox for \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\" returns successfully" Dec 13 13:34:13.680187 containerd[1540]: time="2024-12-13T13:34:13.680174637Z" level=info msg="StopPodSandbox for \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\"" Dec 13 13:34:13.681128 containerd[1540]: time="2024-12-13T13:34:13.680798669Z" level=info msg="TearDown network for sandbox \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\" successfully" Dec 13 13:34:13.681128 containerd[1540]: time="2024-12-13T13:34:13.680809642Z" level=info msg="StopPodSandbox for \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\" returns successfully" Dec 13 13:34:13.681128 containerd[1540]: time="2024-12-13T13:34:13.681030962Z" level=info msg="StopPodSandbox for \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\"" Dec 13 13:34:13.681430 
containerd[1540]: time="2024-12-13T13:34:13.681415022Z" level=info msg="TearDown network for sandbox \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\" successfully" Dec 13 13:34:13.681430 containerd[1540]: time="2024-12-13T13:34:13.681426651Z" level=info msg="StopPodSandbox for \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\" returns successfully" Dec 13 13:34:13.699912 containerd[1540]: time="2024-12-13T13:34:13.699892352Z" level=info msg="StopPodSandbox for \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\"" Dec 13 13:34:13.701936 systemd[1]: run-netns-cni\x2d64dc4ec3\x2d614c\x2d7fcf\x2ddec5\x2d44ed39079f35.mount: Deactivated successfully. Dec 13 13:34:13.705481 containerd[1540]: time="2024-12-13T13:34:13.705424362Z" level=info msg="TearDown network for sandbox \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\" successfully" Dec 13 13:34:13.705622 containerd[1540]: time="2024-12-13T13:34:13.705550628Z" level=info msg="StopPodSandbox for \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\" returns successfully" Dec 13 13:34:13.705839 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b-shm.mount: Deactivated successfully. Dec 13 13:34:13.705897 systemd[1]: run-netns-cni\x2d9551454a\x2d1677\x2dd7d4\x2d9013\x2d7a807b3fea67.mount: Deactivated successfully. Dec 13 13:34:13.705936 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b-shm.mount: Deactivated successfully. Dec 13 13:34:13.705974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3069369736.mount: Deactivated successfully. 
Dec 13 13:34:13.715260 containerd[1540]: time="2024-12-13T13:34:13.713842638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-txx74,Uid:594d5ede-fccb-4fa8-b157-5696299be69a,Namespace:calico-apiserver,Attempt:5,}" Dec 13 13:34:13.764563 containerd[1540]: time="2024-12-13T13:34:13.764455617Z" level=error msg="Failed to destroy network for sandbox \"6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.767716 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b-shm.mount: Deactivated successfully. Dec 13 13:34:13.770053 containerd[1540]: time="2024-12-13T13:34:13.769945020Z" level=error msg="encountered an error cleaning up failed sandbox \"6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.770053 containerd[1540]: time="2024-12-13T13:34:13.769986676Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vvl2m,Uid:95ca7197-6257-4f74-a945-b91e5b94e808,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.799350 containerd[1540]: time="2024-12-13T13:34:13.799068095Z" level=error msg="Failed to destroy network for sandbox 
\"1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.799425 kubelet[2835]: E1213 13:34:13.799344 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.799425 kubelet[2835]: E1213 13:34:13.799374 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vvl2m" Dec 13 13:34:13.799425 kubelet[2835]: E1213 13:34:13.799390 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vvl2m" Dec 13 13:34:13.802362 containerd[1540]: time="2024-12-13T13:34:13.800152897Z" level=error msg="encountered an error cleaning up failed sandbox \"1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.801996 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a-shm.mount: Deactivated successfully. Dec 13 13:34:13.803183 kubelet[2835]: E1213 13:34:13.803157 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-vvl2m_kube-system(95ca7197-6257-4f74-a945-b91e5b94e808)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-vvl2m_kube-system(95ca7197-6257-4f74-a945-b91e5b94e808)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-vvl2m" podUID="95ca7197-6257-4f74-a945-b91e5b94e808" Dec 13 13:34:13.811252 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 13:34:13.811552 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 13:34:13.815376 containerd[1540]: time="2024-12-13T13:34:13.815345201Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-rctdv,Uid:2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.818347 kubelet[2835]: E1213 13:34:13.817570 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.818347 kubelet[2835]: E1213 13:34:13.817606 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" Dec 13 13:34:13.818347 kubelet[2835]: E1213 13:34:13.817620 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" Dec 13 13:34:13.818504 kubelet[2835]: E1213 13:34:13.817654 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55476b5df-rctdv_calico-apiserver(2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55476b5df-rctdv_calico-apiserver(2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" podUID="2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3" Dec 13 13:34:13.828318 kubelet[2835]: I1213 13:34:13.827941 2835 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-x4vxm" podStartSLOduration=1.060966269 podStartE2EDuration="15.746621964s" podCreationTimestamp="2024-12-13 13:33:58 +0000 UTC" firstStartedPulling="2024-12-13 13:33:58.426496232 +0000 UTC m=+20.229216664" lastFinishedPulling="2024-12-13 13:34:13.112151928 +0000 UTC m=+34.914872359" observedRunningTime="2024-12-13 13:34:13.746562579 +0000 UTC m=+35.549283020" watchObservedRunningTime="2024-12-13 13:34:13.746621964 +0000 UTC m=+35.549342399" Dec 13 13:34:13.852856 containerd[1540]: time="2024-12-13T13:34:13.852820559Z" level=error msg="Failed to destroy network for sandbox \"060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.853223 containerd[1540]: time="2024-12-13T13:34:13.853204925Z" level=error msg="encountered 
an error cleaning up failed sandbox \"060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.853285 containerd[1540]: time="2024-12-13T13:34:13.853258237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74874996db-hwhtg,Uid:1a917722-7626-420c-b334-c54df1962ff7,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.856062 kubelet[2835]: E1213 13:34:13.855880 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.856062 kubelet[2835]: E1213 13:34:13.855926 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74874996db-hwhtg" Dec 13 13:34:13.856062 kubelet[2835]: E1213 13:34:13.855940 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74874996db-hwhtg" Dec 13 13:34:13.856699 kubelet[2835]: E1213 13:34:13.855977 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74874996db-hwhtg_calico-system(1a917722-7626-420c-b334-c54df1962ff7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74874996db-hwhtg_calico-system(1a917722-7626-420c-b334-c54df1962ff7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74874996db-hwhtg" podUID="1a917722-7626-420c-b334-c54df1962ff7" Dec 13 13:34:13.869249 containerd[1540]: time="2024-12-13T13:34:13.869211650Z" level=error msg="Failed to destroy network for sandbox \"363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.869615 containerd[1540]: time="2024-12-13T13:34:13.869601184Z" level=error msg="encountered an error cleaning up failed sandbox \"363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.869738 containerd[1540]: time="2024-12-13T13:34:13.869706097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-57pkw,Uid:044a2624-b2b5-4815-a892-56a4b4e7678a,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.870116 kubelet[2835]: E1213 13:34:13.870091 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.870206 kubelet[2835]: E1213 13:34:13.870200 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-57pkw" Dec 13 13:34:13.870281 kubelet[2835]: E1213 13:34:13.870276 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-76f75df574-57pkw" Dec 13 13:34:13.870382 kubelet[2835]: E1213 13:34:13.870372 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-57pkw_kube-system(044a2624-b2b5-4815-a892-56a4b4e7678a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-57pkw_kube-system(044a2624-b2b5-4815-a892-56a4b4e7678a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-57pkw" podUID="044a2624-b2b5-4815-a892-56a4b4e7678a" Dec 13 13:34:13.907850 containerd[1540]: time="2024-12-13T13:34:13.907778593Z" level=error msg="Failed to destroy network for sandbox \"87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.917851 containerd[1540]: time="2024-12-13T13:34:13.917664136Z" level=error msg="encountered an error cleaning up failed sandbox \"87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.917851 containerd[1540]: time="2024-12-13T13:34:13.917718438Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t28ks,Uid:6d6a0031-7d0e-4f38-97a2-6db6c2123341,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox 
\"87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.928073 kubelet[2835]: E1213 13:34:13.927984 2835 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.928073 kubelet[2835]: E1213 13:34:13.928025 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t28ks" Dec 13 13:34:13.928073 kubelet[2835]: E1213 13:34:13.928040 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t28ks" Dec 13 13:34:13.928286 kubelet[2835]: E1213 13:34:13.928075 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t28ks_calico-system(6d6a0031-7d0e-4f38-97a2-6db6c2123341)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-t28ks_calico-system(6d6a0031-7d0e-4f38-97a2-6db6c2123341)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t28ks" podUID="6d6a0031-7d0e-4f38-97a2-6db6c2123341" Dec 13 13:34:13.939910 containerd[1540]: time="2024-12-13T13:34:13.939879607Z" level=error msg="Failed to destroy network for sandbox \"3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.942552 containerd[1540]: time="2024-12-13T13:34:13.942455536Z" level=error msg="encountered an error cleaning up failed sandbox \"3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.942552 containerd[1540]: time="2024-12-13T13:34:13.942498066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-txx74,Uid:594d5ede-fccb-4fa8-b157-5696299be69a,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.948454 kubelet[2835]: E1213 13:34:13.948324 2835 remote_runtime.go:193] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:13.948454 kubelet[2835]: E1213 13:34:13.948360 2835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-txx74" Dec 13 13:34:13.948454 kubelet[2835]: E1213 13:34:13.948375 2835 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55476b5df-txx74" Dec 13 13:34:13.948927 kubelet[2835]: E1213 13:34:13.948567 2835 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55476b5df-txx74_calico-apiserver(594d5ede-fccb-4fa8-b157-5696299be69a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55476b5df-txx74_calico-apiserver(594d5ede-fccb-4fa8-b157-5696299be69a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55476b5df-txx74" podUID="594d5ede-fccb-4fa8-b157-5696299be69a" Dec 13 13:34:14.684677 kubelet[2835]: I1213 13:34:14.684640 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd" Dec 13 13:34:14.688048 containerd[1540]: time="2024-12-13T13:34:14.686205689Z" level=info msg="StopPodSandbox for \"3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd\"" Dec 13 13:34:14.688048 containerd[1540]: time="2024-12-13T13:34:14.686619345Z" level=info msg="Ensure that sandbox 3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd in task-service has been cleanup successfully" Dec 13 13:34:14.688048 containerd[1540]: time="2024-12-13T13:34:14.686948521Z" level=info msg="TearDown network for sandbox \"3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd\" successfully" Dec 13 13:34:14.688048 containerd[1540]: time="2024-12-13T13:34:14.686969562Z" level=info msg="StopPodSandbox for \"3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd\" returns successfully" Dec 13 13:34:14.691215 containerd[1540]: time="2024-12-13T13:34:14.691174841Z" level=info msg="StopPodSandbox for \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\"" Dec 13 13:34:14.691405 containerd[1540]: time="2024-12-13T13:34:14.691375341Z" level=info msg="TearDown network for sandbox \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\" successfully" Dec 13 13:34:14.691445 containerd[1540]: time="2024-12-13T13:34:14.691438517Z" level=info msg="StopPodSandbox for \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\" returns successfully" Dec 13 13:34:14.691724 containerd[1540]: time="2024-12-13T13:34:14.691703178Z" level=info msg="StopPodSandbox for 
\"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\"" Dec 13 13:34:14.691871 containerd[1540]: time="2024-12-13T13:34:14.691853127Z" level=info msg="TearDown network for sandbox \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\" successfully" Dec 13 13:34:14.691939 containerd[1540]: time="2024-12-13T13:34:14.691921166Z" level=info msg="StopPodSandbox for \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\" returns successfully" Dec 13 13:34:14.692445 containerd[1540]: time="2024-12-13T13:34:14.692435217Z" level=info msg="StopPodSandbox for \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\"" Dec 13 13:34:14.692524 containerd[1540]: time="2024-12-13T13:34:14.692515760Z" level=info msg="TearDown network for sandbox \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\" successfully" Dec 13 13:34:14.692561 containerd[1540]: time="2024-12-13T13:34:14.692554757Z" level=info msg="StopPodSandbox for \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\" returns successfully" Dec 13 13:34:14.692865 kubelet[2835]: I1213 13:34:14.692843 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b" Dec 13 13:34:14.693201 containerd[1540]: time="2024-12-13T13:34:14.693110241Z" level=info msg="StopPodSandbox for \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\"" Dec 13 13:34:14.693201 containerd[1540]: time="2024-12-13T13:34:14.693154785Z" level=info msg="TearDown network for sandbox \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\" successfully" Dec 13 13:34:14.693201 containerd[1540]: time="2024-12-13T13:34:14.693176253Z" level=info msg="StopPodSandbox for \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\" returns successfully" Dec 13 13:34:14.693376 containerd[1540]: time="2024-12-13T13:34:14.693356136Z" level=info msg="StopPodSandbox 
for \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\"" Dec 13 13:34:14.693432 containerd[1540]: time="2024-12-13T13:34:14.693415896Z" level=info msg="TearDown network for sandbox \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\" successfully" Dec 13 13:34:14.693459 containerd[1540]: time="2024-12-13T13:34:14.693430255Z" level=info msg="StopPodSandbox for \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\" returns successfully" Dec 13 13:34:14.695301 containerd[1540]: time="2024-12-13T13:34:14.694281555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-txx74,Uid:594d5ede-fccb-4fa8-b157-5696299be69a,Namespace:calico-apiserver,Attempt:6,}" Dec 13 13:34:14.695118 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8-shm.mount: Deactivated successfully. Dec 13 13:34:14.695391 kubelet[2835]: I1213 13:34:14.695023 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412" Dec 13 13:34:14.695188 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc-shm.mount: Deactivated successfully. Dec 13 13:34:14.695229 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412-shm.mount: Deactivated successfully. 
Dec 13 13:34:14.696780 containerd[1540]: time="2024-12-13T13:34:14.696545831Z" level=info msg="StopPodSandbox for \"6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b\"" Dec 13 13:34:14.696780 containerd[1540]: time="2024-12-13T13:34:14.696723699Z" level=info msg="Ensure that sandbox 6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b in task-service has been cleanup successfully" Dec 13 13:34:14.698377 containerd[1540]: time="2024-12-13T13:34:14.698340069Z" level=info msg="TearDown network for sandbox \"6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b\" successfully" Dec 13 13:34:14.698449 containerd[1540]: time="2024-12-13T13:34:14.698375131Z" level=info msg="StopPodSandbox for \"6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b\" returns successfully" Dec 13 13:34:14.699662 containerd[1540]: time="2024-12-13T13:34:14.698584089Z" level=info msg="StopPodSandbox for \"87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412\"" Dec 13 13:34:14.699662 containerd[1540]: time="2024-12-13T13:34:14.698731839Z" level=info msg="Ensure that sandbox 87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412 in task-service has been cleanup successfully" Dec 13 13:34:14.700363 systemd[1]: run-netns-cni\x2de044cdac\x2dadf6\x2d0257\x2de994\x2dd5f645f4492c.mount: Deactivated successfully. 
Dec 13 13:34:14.700935 containerd[1540]: time="2024-12-13T13:34:14.700898782Z" level=info msg="TearDown network for sandbox \"87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412\" successfully" Dec 13 13:34:14.700935 containerd[1540]: time="2024-12-13T13:34:14.700922345Z" level=info msg="StopPodSandbox for \"87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412\" returns successfully" Dec 13 13:34:14.701905 containerd[1540]: time="2024-12-13T13:34:14.701305097Z" level=info msg="StopPodSandbox for \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\"" Dec 13 13:34:14.701905 containerd[1540]: time="2024-12-13T13:34:14.701359971Z" level=info msg="TearDown network for sandbox \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\" successfully" Dec 13 13:34:14.701905 containerd[1540]: time="2024-12-13T13:34:14.701366022Z" level=info msg="StopPodSandbox for \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\" returns successfully" Dec 13 13:34:14.701905 containerd[1540]: time="2024-12-13T13:34:14.701387754Z" level=info msg="StopPodSandbox for \"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\"" Dec 13 13:34:14.701905 containerd[1540]: time="2024-12-13T13:34:14.701435318Z" level=info msg="TearDown network for sandbox \"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\" successfully" Dec 13 13:34:14.701905 containerd[1540]: time="2024-12-13T13:34:14.701442742Z" level=info msg="StopPodSandbox for \"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\" returns successfully" Dec 13 13:34:14.701905 containerd[1540]: time="2024-12-13T13:34:14.701620860Z" level=info msg="StopPodSandbox for \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\"" Dec 13 13:34:14.701905 containerd[1540]: time="2024-12-13T13:34:14.701660627Z" level=info msg="TearDown network for sandbox \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\" successfully" Dec 
13 13:34:14.701905 containerd[1540]: time="2024-12-13T13:34:14.701666518Z" level=info msg="StopPodSandbox for \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\" returns successfully" Dec 13 13:34:14.701905 containerd[1540]: time="2024-12-13T13:34:14.701691822Z" level=info msg="StopPodSandbox for \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\"" Dec 13 13:34:14.701905 containerd[1540]: time="2024-12-13T13:34:14.701721627Z" level=info msg="TearDown network for sandbox \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\" successfully" Dec 13 13:34:14.701905 containerd[1540]: time="2024-12-13T13:34:14.701726552Z" level=info msg="StopPodSandbox for \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\" returns successfully" Dec 13 13:34:14.702992 containerd[1540]: time="2024-12-13T13:34:14.702814667Z" level=info msg="StopPodSandbox for \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\"" Dec 13 13:34:14.702992 containerd[1540]: time="2024-12-13T13:34:14.702861995Z" level=info msg="TearDown network for sandbox \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\" successfully" Dec 13 13:34:14.702992 containerd[1540]: time="2024-12-13T13:34:14.702868428Z" level=info msg="StopPodSandbox for \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\" returns successfully" Dec 13 13:34:14.702992 containerd[1540]: time="2024-12-13T13:34:14.702928894Z" level=info msg="StopPodSandbox for \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\"" Dec 13 13:34:14.703642 containerd[1540]: time="2024-12-13T13:34:14.703623883Z" level=info msg="TearDown network for sandbox \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\" successfully" Dec 13 13:34:14.703642 containerd[1540]: time="2024-12-13T13:34:14.703635388Z" level=info msg="StopPodSandbox for \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\" returns successfully" Dec 13 
13:34:14.704274 containerd[1540]: time="2024-12-13T13:34:14.703918264Z" level=info msg="StopPodSandbox for \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\"" Dec 13 13:34:14.704274 containerd[1540]: time="2024-12-13T13:34:14.703961901Z" level=info msg="TearDown network for sandbox \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\" successfully" Dec 13 13:34:14.704274 containerd[1540]: time="2024-12-13T13:34:14.703969206Z" level=info msg="StopPodSandbox for \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\" returns successfully" Dec 13 13:34:14.704274 containerd[1540]: time="2024-12-13T13:34:14.704000665Z" level=info msg="StopPodSandbox for \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\"" Dec 13 13:34:14.704274 containerd[1540]: time="2024-12-13T13:34:14.704031162Z" level=info msg="TearDown network for sandbox \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\" successfully" Dec 13 13:34:14.704274 containerd[1540]: time="2024-12-13T13:34:14.704036271Z" level=info msg="StopPodSandbox for \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\" returns successfully" Dec 13 13:34:14.704274 containerd[1540]: time="2024-12-13T13:34:14.704185949Z" level=info msg="StopPodSandbox for \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\"" Dec 13 13:34:14.704274 containerd[1540]: time="2024-12-13T13:34:14.704231539Z" level=info msg="TearDown network for sandbox \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\" successfully" Dec 13 13:34:14.704274 containerd[1540]: time="2024-12-13T13:34:14.704245685Z" level=info msg="StopPodSandbox for \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\" returns successfully" Dec 13 13:34:14.705116 containerd[1540]: time="2024-12-13T13:34:14.704280828Z" level=info msg="StopPodSandbox for \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\"" Dec 13 13:34:14.705116 
containerd[1540]: time="2024-12-13T13:34:14.704312451Z" level=info msg="TearDown network for sandbox \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\" successfully" Dec 13 13:34:14.705116 containerd[1540]: time="2024-12-13T13:34:14.704317302Z" level=info msg="StopPodSandbox for \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\" returns successfully" Dec 13 13:34:14.705116 containerd[1540]: time="2024-12-13T13:34:14.704742029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t28ks,Uid:6d6a0031-7d0e-4f38-97a2-6db6c2123341,Namespace:calico-system,Attempt:6,}" Dec 13 13:34:14.704843 systemd[1]: run-netns-cni\x2dd03fb4d0\x2d4ba2\x2df221\x2dfa5f\x2d60d84996834b.mount: Deactivated successfully. Dec 13 13:34:14.710638 containerd[1540]: time="2024-12-13T13:34:14.710607789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vvl2m,Uid:95ca7197-6257-4f74-a945-b91e5b94e808,Namespace:kube-system,Attempt:6,}" Dec 13 13:34:14.711072 kubelet[2835]: I1213 13:34:14.711054 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a" Dec 13 13:34:14.711469 containerd[1540]: time="2024-12-13T13:34:14.711447952Z" level=info msg="StopPodSandbox for \"1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a\"" Dec 13 13:34:14.711745 containerd[1540]: time="2024-12-13T13:34:14.711730393Z" level=info msg="Ensure that sandbox 1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a in task-service has been cleanup successfully" Dec 13 13:34:14.712034 containerd[1540]: time="2024-12-13T13:34:14.711905395Z" level=info msg="TearDown network for sandbox \"1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a\" successfully" Dec 13 13:34:14.712034 containerd[1540]: time="2024-12-13T13:34:14.711916087Z" level=info msg="StopPodSandbox for 
\"1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a\" returns successfully" Dec 13 13:34:14.714330 systemd[1]: run-netns-cni\x2df9e3489a\x2d4d15\x2de339\x2d9916\x2d714f389ab3a6.mount: Deactivated successfully. Dec 13 13:34:14.715705 containerd[1540]: time="2024-12-13T13:34:14.712599514Z" level=info msg="StopPodSandbox for \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\"" Dec 13 13:34:14.715705 containerd[1540]: time="2024-12-13T13:34:14.715561848Z" level=info msg="TearDown network for sandbox \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\" successfully" Dec 13 13:34:14.715705 containerd[1540]: time="2024-12-13T13:34:14.715574773Z" level=info msg="StopPodSandbox for \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\" returns successfully" Dec 13 13:34:14.716696 containerd[1540]: time="2024-12-13T13:34:14.716681615Z" level=info msg="StopPodSandbox for \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\"" Dec 13 13:34:14.716975 containerd[1540]: time="2024-12-13T13:34:14.716800109Z" level=info msg="TearDown network for sandbox \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\" successfully" Dec 13 13:34:14.716975 containerd[1540]: time="2024-12-13T13:34:14.716809460Z" level=info msg="StopPodSandbox for \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\" returns successfully" Dec 13 13:34:14.717284 containerd[1540]: time="2024-12-13T13:34:14.717084386Z" level=info msg="StopPodSandbox for \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\"" Dec 13 13:34:14.717284 containerd[1540]: time="2024-12-13T13:34:14.717126114Z" level=info msg="TearDown network for sandbox \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\" successfully" Dec 13 13:34:14.717284 containerd[1540]: time="2024-12-13T13:34:14.717132692Z" level=info msg="StopPodSandbox for \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\" returns 
successfully" Dec 13 13:34:14.717413 containerd[1540]: time="2024-12-13T13:34:14.717402767Z" level=info msg="StopPodSandbox for \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\"" Dec 13 13:34:14.717597 containerd[1540]: time="2024-12-13T13:34:14.717483747Z" level=info msg="TearDown network for sandbox \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\" successfully" Dec 13 13:34:14.717597 containerd[1540]: time="2024-12-13T13:34:14.717535390Z" level=info msg="StopPodSandbox for \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\" returns successfully" Dec 13 13:34:14.717864 containerd[1540]: time="2024-12-13T13:34:14.717763578Z" level=info msg="StopPodSandbox for \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\"" Dec 13 13:34:14.717864 containerd[1540]: time="2024-12-13T13:34:14.717803085Z" level=info msg="TearDown network for sandbox \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\" successfully" Dec 13 13:34:14.717864 containerd[1540]: time="2024-12-13T13:34:14.717809458Z" level=info msg="StopPodSandbox for \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\" returns successfully" Dec 13 13:34:14.718269 containerd[1540]: time="2024-12-13T13:34:14.718146135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-rctdv,Uid:2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3,Namespace:calico-apiserver,Attempt:6,}" Dec 13 13:34:14.718809 kubelet[2835]: I1213 13:34:14.718604 2835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc" Dec 13 13:34:14.720456 containerd[1540]: time="2024-12-13T13:34:14.718898088Z" level=info msg="StopPodSandbox for \"060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc\"" Dec 13 13:34:14.720456 containerd[1540]: time="2024-12-13T13:34:14.719000366Z" level=info msg="Ensure that sandbox 
060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc in task-service has been cleanup successfully" Dec 13 13:34:14.720738 containerd[1540]: time="2024-12-13T13:34:14.720724750Z" level=info msg="TearDown network for sandbox \"060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc\" successfully" Dec 13 13:34:14.720901 containerd[1540]: time="2024-12-13T13:34:14.720779574Z" level=info msg="StopPodSandbox for \"060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc\" returns successfully" Dec 13 13:34:14.721087 systemd[1]: run-netns-cni\x2d681be4b3\x2d6bdd\x2d0827\x2d6ce4\x2d22421069e091.mount: Deactivated successfully. Dec 13 13:34:14.725904 containerd[1540]: time="2024-12-13T13:34:14.722443743Z" level=info msg="StopPodSandbox for \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\"" Dec 13 13:34:14.725904 containerd[1540]: time="2024-12-13T13:34:14.722510117Z" level=info msg="TearDown network for sandbox \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\" successfully" Dec 13 13:34:14.725904 containerd[1540]: time="2024-12-13T13:34:14.722519089Z" level=info msg="StopPodSandbox for \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\" returns successfully" Dec 13 13:34:14.725904 containerd[1540]: time="2024-12-13T13:34:14.722737715Z" level=info msg="StopPodSandbox for \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\"" Dec 13 13:34:14.725904 containerd[1540]: time="2024-12-13T13:34:14.722794885Z" level=info msg="TearDown network for sandbox \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\" successfully" Dec 13 13:34:14.725904 containerd[1540]: time="2024-12-13T13:34:14.722803678Z" level=info msg="StopPodSandbox for \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\" returns successfully" Dec 13 13:34:14.725904 containerd[1540]: time="2024-12-13T13:34:14.722988654Z" level=info msg="StopPodSandbox for 
\"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\"" Dec 13 13:34:14.725904 containerd[1540]: time="2024-12-13T13:34:14.723036980Z" level=info msg="TearDown network for sandbox \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\" successfully" Dec 13 13:34:14.725904 containerd[1540]: time="2024-12-13T13:34:14.723045395Z" level=info msg="StopPodSandbox for \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\" returns successfully" Dec 13 13:34:14.725904 containerd[1540]: time="2024-12-13T13:34:14.723232027Z" level=info msg="StopPodSandbox for \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\"" Dec 13 13:34:14.725904 containerd[1540]: time="2024-12-13T13:34:14.723293583Z" level=info msg="TearDown network for sandbox \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\" successfully" Dec 13 13:34:14.725904 containerd[1540]: time="2024-12-13T13:34:14.723302617Z" level=info msg="StopPodSandbox for \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\" returns successfully" Dec 13 13:34:14.725904 containerd[1540]: time="2024-12-13T13:34:14.723502883Z" level=info msg="StopPodSandbox for \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\"" Dec 13 13:34:14.725904 containerd[1540]: time="2024-12-13T13:34:14.723556387Z" level=info msg="TearDown network for sandbox \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\" successfully" Dec 13 13:34:14.725904 containerd[1540]: time="2024-12-13T13:34:14.723566141Z" level=info msg="StopPodSandbox for \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\" returns successfully" Dec 13 13:34:14.725904 containerd[1540]: time="2024-12-13T13:34:14.723875413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74874996db-hwhtg,Uid:1a917722-7626-420c-b334-c54df1962ff7,Namespace:calico-system,Attempt:6,}" Dec 13 13:34:14.726496 kubelet[2835]: I1213 13:34:14.724894 2835 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8" Dec 13 13:34:14.728068 containerd[1540]: time="2024-12-13T13:34:14.727727617Z" level=info msg="StopPodSandbox for \"363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8\"" Dec 13 13:34:14.728068 containerd[1540]: time="2024-12-13T13:34:14.727948572Z" level=info msg="Ensure that sandbox 363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8 in task-service has been cleanup successfully" Dec 13 13:34:14.729328 containerd[1540]: time="2024-12-13T13:34:14.729303109Z" level=info msg="TearDown network for sandbox \"363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8\" successfully" Dec 13 13:34:14.729328 containerd[1540]: time="2024-12-13T13:34:14.729322583Z" level=info msg="StopPodSandbox for \"363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8\" returns successfully" Dec 13 13:34:14.747853 containerd[1540]: time="2024-12-13T13:34:14.747819389Z" level=info msg="StopPodSandbox for \"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\"" Dec 13 13:34:14.747986 containerd[1540]: time="2024-12-13T13:34:14.747898721Z" level=info msg="TearDown network for sandbox \"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\" successfully" Dec 13 13:34:14.747986 containerd[1540]: time="2024-12-13T13:34:14.747906637Z" level=info msg="StopPodSandbox for \"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\" returns successfully" Dec 13 13:34:14.751615 containerd[1540]: time="2024-12-13T13:34:14.751495829Z" level=info msg="StopPodSandbox for \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\"" Dec 13 13:34:14.751615 containerd[1540]: time="2024-12-13T13:34:14.751567404Z" level=info msg="TearDown network for sandbox \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\" successfully" Dec 13 13:34:14.751615 
containerd[1540]: time="2024-12-13T13:34:14.751575775Z" level=info msg="StopPodSandbox for \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\" returns successfully" Dec 13 13:34:14.752755 containerd[1540]: time="2024-12-13T13:34:14.752642516Z" level=info msg="StopPodSandbox for \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\"" Dec 13 13:34:14.752920 containerd[1540]: time="2024-12-13T13:34:14.752824394Z" level=info msg="TearDown network for sandbox \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\" successfully" Dec 13 13:34:14.753061 containerd[1540]: time="2024-12-13T13:34:14.753043188Z" level=info msg="StopPodSandbox for \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\" returns successfully" Dec 13 13:34:14.754190 containerd[1540]: time="2024-12-13T13:34:14.754165082Z" level=info msg="StopPodSandbox for \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\"" Dec 13 13:34:14.754262 containerd[1540]: time="2024-12-13T13:34:14.754228483Z" level=info msg="TearDown network for sandbox \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\" successfully" Dec 13 13:34:14.754262 containerd[1540]: time="2024-12-13T13:34:14.754244448Z" level=info msg="StopPodSandbox for \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\" returns successfully" Dec 13 13:34:14.755740 containerd[1540]: time="2024-12-13T13:34:14.754925747Z" level=info msg="StopPodSandbox for \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\"" Dec 13 13:34:14.755740 containerd[1540]: time="2024-12-13T13:34:14.754979800Z" level=info msg="TearDown network for sandbox \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\" successfully" Dec 13 13:34:14.755740 containerd[1540]: time="2024-12-13T13:34:14.754987964Z" level=info msg="StopPodSandbox for \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\" returns successfully" Dec 13 13:34:14.755740 
containerd[1540]: time="2024-12-13T13:34:14.755725017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-57pkw,Uid:044a2624-b2b5-4815-a892-56a4b4e7678a,Namespace:kube-system,Attempt:6,}" Dec 13 13:34:15.231402 systemd-networkd[1439]: calib57420b43a0: Link UP Dec 13 13:34:15.231529 systemd-networkd[1439]: calib57420b43a0: Gained carrier Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:14.866 [INFO][4804] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:14.876 [INFO][4804] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55476b5df--rctdv-eth0 calico-apiserver-55476b5df- calico-apiserver 2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3 699 0 2024-12-13 13:33:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55476b5df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55476b5df-rctdv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib57420b43a0 [] []}} ContainerID="e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55" Namespace="calico-apiserver" Pod="calico-apiserver-55476b5df-rctdv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55476b5df--rctdv-" Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:14.876 [INFO][4804] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55" Namespace="calico-apiserver" Pod="calico-apiserver-55476b5df-rctdv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55476b5df--rctdv-eth0" Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:15.143 [INFO][4847] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55" HandleID="k8s-pod-network.e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55" Workload="localhost-k8s-calico--apiserver--55476b5df--rctdv-eth0" Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:15.169 [INFO][4847] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55" HandleID="k8s-pod-network.e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55" Workload="localhost-k8s-calico--apiserver--55476b5df--rctdv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000102a80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55476b5df-rctdv", "timestamp":"2024-12-13 13:34:15.143399482 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:15.169 [INFO][4847] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:15.170 [INFO][4847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:15.170 [INFO][4847] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:15.175 [INFO][4847] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55" host="localhost" Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:15.186 [INFO][4847] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:15.191 [INFO][4847] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:15.192 [INFO][4847] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:15.193 [INFO][4847] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:15.193 [INFO][4847] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55" host="localhost" Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:15.194 [INFO][4847] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55 Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:15.197 [INFO][4847] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55" host="localhost" Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:15.200 [INFO][4847] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55" host="localhost" Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:15.200 [INFO][4847] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55" host="localhost" Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:15.200 [INFO][4847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:34:15.239878 containerd[1540]: 2024-12-13 13:34:15.200 [INFO][4847] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55" HandleID="k8s-pod-network.e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55" Workload="localhost-k8s-calico--apiserver--55476b5df--rctdv-eth0" Dec 13 13:34:15.249817 containerd[1540]: 2024-12-13 13:34:15.203 [INFO][4804] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55" Namespace="calico-apiserver" Pod="calico-apiserver-55476b5df-rctdv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55476b5df--rctdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55476b5df--rctdv-eth0", GenerateName:"calico-apiserver-55476b5df-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 33, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55476b5df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55476b5df-rctdv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib57420b43a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:15.249817 containerd[1540]: 2024-12-13 13:34:15.203 [INFO][4804] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55" Namespace="calico-apiserver" Pod="calico-apiserver-55476b5df-rctdv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55476b5df--rctdv-eth0" Dec 13 13:34:15.249817 containerd[1540]: 2024-12-13 13:34:15.203 [INFO][4804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib57420b43a0 ContainerID="e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55" Namespace="calico-apiserver" Pod="calico-apiserver-55476b5df-rctdv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55476b5df--rctdv-eth0" Dec 13 13:34:15.249817 containerd[1540]: 2024-12-13 13:34:15.223 [INFO][4804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55" Namespace="calico-apiserver" Pod="calico-apiserver-55476b5df-rctdv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55476b5df--rctdv-eth0" Dec 13 13:34:15.249817 containerd[1540]: 2024-12-13 13:34:15.223 [INFO][4804] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55" Namespace="calico-apiserver" Pod="calico-apiserver-55476b5df-rctdv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55476b5df--rctdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55476b5df--rctdv-eth0", GenerateName:"calico-apiserver-55476b5df-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 33, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55476b5df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55", Pod:"calico-apiserver-55476b5df-rctdv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib57420b43a0", MAC:"c6:1a:c3:21:86:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:15.249817 containerd[1540]: 2024-12-13 13:34:15.235 [INFO][4804] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55" 
Namespace="calico-apiserver" Pod="calico-apiserver-55476b5df-rctdv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55476b5df--rctdv-eth0" Dec 13 13:34:15.248035 systemd-networkd[1439]: calif218a015ba8: Link UP Dec 13 13:34:15.248139 systemd-networkd[1439]: calif218a015ba8: Gained carrier Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:14.875 [INFO][4818] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:14.890 [INFO][4818] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--74874996db--hwhtg-eth0 calico-kube-controllers-74874996db- calico-system 1a917722-7626-420c-b334-c54df1962ff7 700 0 2024-12-13 13:33:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74874996db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-74874996db-hwhtg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif218a015ba8 [] []}} ContainerID="53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99" Namespace="calico-system" Pod="calico-kube-controllers-74874996db-hwhtg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74874996db--hwhtg-" Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:14.890 [INFO][4818] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99" Namespace="calico-system" Pod="calico-kube-controllers-74874996db-hwhtg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74874996db--hwhtg-eth0" Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:15.143 [INFO][4851] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99" HandleID="k8s-pod-network.53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99" Workload="localhost-k8s-calico--kube--controllers--74874996db--hwhtg-eth0" Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:15.172 [INFO][4851] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99" HandleID="k8s-pod-network.53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99" Workload="localhost-k8s-calico--kube--controllers--74874996db--hwhtg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003162f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-74874996db-hwhtg", "timestamp":"2024-12-13 13:34:15.143499272 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:15.172 [INFO][4851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:15.201 [INFO][4851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:15.202 [INFO][4851] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:15.204 [INFO][4851] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99" host="localhost" Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:15.206 [INFO][4851] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:15.208 [INFO][4851] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:15.210 [INFO][4851] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:15.212 [INFO][4851] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:15.212 [INFO][4851] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99" host="localhost" Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:15.213 [INFO][4851] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99 Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:15.226 [INFO][4851] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99" host="localhost" Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:15.234 [INFO][4851] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99" host="localhost" Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:15.234 [INFO][4851] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99" host="localhost" Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:15.234 [INFO][4851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:34:15.267692 containerd[1540]: 2024-12-13 13:34:15.234 [INFO][4851] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99" HandleID="k8s-pod-network.53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99" Workload="localhost-k8s-calico--kube--controllers--74874996db--hwhtg-eth0" Dec 13 13:34:15.269194 containerd[1540]: 2024-12-13 13:34:15.242 [INFO][4818] cni-plugin/k8s.go 386: Populated endpoint ContainerID="53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99" Namespace="calico-system" Pod="calico-kube-controllers-74874996db-hwhtg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74874996db--hwhtg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74874996db--hwhtg-eth0", GenerateName:"calico-kube-controllers-74874996db-", Namespace:"calico-system", SelfLink:"", UID:"1a917722-7626-420c-b334-c54df1962ff7", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 33, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74874996db", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-74874996db-hwhtg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif218a015ba8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:15.269194 containerd[1540]: 2024-12-13 13:34:15.242 [INFO][4818] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99" Namespace="calico-system" Pod="calico-kube-controllers-74874996db-hwhtg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74874996db--hwhtg-eth0" Dec 13 13:34:15.269194 containerd[1540]: 2024-12-13 13:34:15.242 [INFO][4818] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif218a015ba8 ContainerID="53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99" Namespace="calico-system" Pod="calico-kube-controllers-74874996db-hwhtg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74874996db--hwhtg-eth0" Dec 13 13:34:15.269194 containerd[1540]: 2024-12-13 13:34:15.244 [INFO][4818] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99" Namespace="calico-system" Pod="calico-kube-controllers-74874996db-hwhtg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74874996db--hwhtg-eth0" Dec 13 13:34:15.269194 containerd[1540]: 2024-12-13 13:34:15.244 [INFO][4818] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99" Namespace="calico-system" Pod="calico-kube-controllers-74874996db-hwhtg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74874996db--hwhtg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74874996db--hwhtg-eth0", GenerateName:"calico-kube-controllers-74874996db-", Namespace:"calico-system", SelfLink:"", UID:"1a917722-7626-420c-b334-c54df1962ff7", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 33, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74874996db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99", Pod:"calico-kube-controllers-74874996db-hwhtg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif218a015ba8", MAC:"56:7e:e6:7d:a3:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:15.269194 containerd[1540]: 2024-12-13 13:34:15.262 [INFO][4818] cni-plugin/k8s.go 500: Wrote 
updated endpoint to datastore ContainerID="53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99" Namespace="calico-system" Pod="calico-kube-controllers-74874996db-hwhtg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74874996db--hwhtg-eth0" Dec 13 13:34:15.295824 systemd-networkd[1439]: calic3056c06c91: Link UP Dec 13 13:34:15.296675 systemd-networkd[1439]: calic3056c06c91: Gained carrier Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:14.872 [INFO][4822] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:14.889 [INFO][4822] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--57pkw-eth0 coredns-76f75df574- kube-system 044a2624-b2b5-4815-a892-56a4b4e7678a 701 0 2024-12-13 13:33:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-57pkw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic3056c06c91 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000" Namespace="kube-system" Pod="coredns-76f75df574-57pkw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--57pkw-" Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:14.889 [INFO][4822] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000" Namespace="kube-system" Pod="coredns-76f75df574-57pkw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--57pkw-eth0" Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:15.143 [INFO][4850] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000" HandleID="k8s-pod-network.b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000" Workload="localhost-k8s-coredns--76f75df574--57pkw-eth0" Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:15.173 [INFO][4850] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000" HandleID="k8s-pod-network.b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000" Workload="localhost-k8s-coredns--76f75df574--57pkw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003aa7e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-57pkw", "timestamp":"2024-12-13 13:34:15.14355428 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:15.173 [INFO][4850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:15.234 [INFO][4850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:15.234 [INFO][4850] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:15.241 [INFO][4850] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000" host="localhost" Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:15.254 [INFO][4850] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:15.262 [INFO][4850] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:15.264 [INFO][4850] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:15.266 [INFO][4850] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:15.267 [INFO][4850] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000" host="localhost" Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:15.271 [INFO][4850] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000 Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:15.277 [INFO][4850] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000" host="localhost" Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:15.284 [INFO][4850] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000" host="localhost" Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:15.284 [INFO][4850] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000" host="localhost" Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:15.284 [INFO][4850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:34:15.319347 containerd[1540]: 2024-12-13 13:34:15.284 [INFO][4850] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000" HandleID="k8s-pod-network.b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000" Workload="localhost-k8s-coredns--76f75df574--57pkw-eth0" Dec 13 13:34:15.320788 containerd[1540]: 2024-12-13 13:34:15.291 [INFO][4822] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000" Namespace="kube-system" Pod="coredns-76f75df574-57pkw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--57pkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--57pkw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"044a2624-b2b5-4815-a892-56a4b4e7678a", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 33, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-57pkw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3056c06c91", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:15.320788 containerd[1540]: 2024-12-13 13:34:15.291 [INFO][4822] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000" Namespace="kube-system" Pod="coredns-76f75df574-57pkw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--57pkw-eth0" Dec 13 13:34:15.320788 containerd[1540]: 2024-12-13 13:34:15.291 [INFO][4822] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3056c06c91 ContainerID="b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000" Namespace="kube-system" Pod="coredns-76f75df574-57pkw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--57pkw-eth0" Dec 13 13:34:15.320788 containerd[1540]: 2024-12-13 13:34:15.294 [INFO][4822] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000" Namespace="kube-system" Pod="coredns-76f75df574-57pkw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--57pkw-eth0" Dec 13 
13:34:15.320788 containerd[1540]: 2024-12-13 13:34:15.294 [INFO][4822] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000" Namespace="kube-system" Pod="coredns-76f75df574-57pkw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--57pkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--57pkw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"044a2624-b2b5-4815-a892-56a4b4e7678a", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 33, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000", Pod:"coredns-76f75df574-57pkw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3056c06c91", MAC:"ca:5d:84:b5:3f:44", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:15.320788 containerd[1540]: 2024-12-13 13:34:15.314 [INFO][4822] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000" Namespace="kube-system" Pod="coredns-76f75df574-57pkw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--57pkw-eth0" Dec 13 13:34:15.346353 systemd-networkd[1439]: cali92c5319b070: Link UP Dec 13 13:34:15.346518 systemd-networkd[1439]: cali92c5319b070: Gained carrier Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:14.841 [INFO][4794] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:14.872 [INFO][4794] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--t28ks-eth0 csi-node-driver- calico-system 6d6a0031-7d0e-4f38-97a2-6db6c2123341 616 0 2024-12-13 13:33:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-t28ks eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali92c5319b070 [] []}} ContainerID="3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde" Namespace="calico-system" Pod="csi-node-driver-t28ks" WorkloadEndpoint="localhost-k8s-csi--node--driver--t28ks-" Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:14.872 [INFO][4794] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde" 
Namespace="calico-system" Pod="csi-node-driver-t28ks" WorkloadEndpoint="localhost-k8s-csi--node--driver--t28ks-eth0" Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:15.143 [INFO][4845] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde" HandleID="k8s-pod-network.3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde" Workload="localhost-k8s-csi--node--driver--t28ks-eth0" Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:15.178 [INFO][4845] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde" HandleID="k8s-pod-network.3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde" Workload="localhost-k8s-csi--node--driver--t28ks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b9d10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-t28ks", "timestamp":"2024-12-13 13:34:15.143558054 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:15.178 [INFO][4845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:15.284 [INFO][4845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:15.285 [INFO][4845] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:15.287 [INFO][4845] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde" host="localhost" Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:15.293 [INFO][4845] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:15.306 [INFO][4845] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:15.311 [INFO][4845] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:15.314 [INFO][4845] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:15.314 [INFO][4845] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde" host="localhost" Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:15.317 [INFO][4845] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:15.323 [INFO][4845] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde" host="localhost" Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:15.330 [INFO][4845] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde" host="localhost" Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:15.330 [INFO][4845] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde" host="localhost" Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:15.330 [INFO][4845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:34:15.372291 containerd[1540]: 2024-12-13 13:34:15.330 [INFO][4845] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde" HandleID="k8s-pod-network.3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde" Workload="localhost-k8s-csi--node--driver--t28ks-eth0" Dec 13 13:34:15.372757 containerd[1540]: 2024-12-13 13:34:15.335 [INFO][4794] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde" Namespace="calico-system" Pod="csi-node-driver-t28ks" WorkloadEndpoint="localhost-k8s-csi--node--driver--t28ks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t28ks-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d6a0031-7d0e-4f38-97a2-6db6c2123341", ResourceVersion:"616", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 33, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-t28ks", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92c5319b070", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:15.372757 containerd[1540]: 2024-12-13 13:34:15.335 [INFO][4794] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde" Namespace="calico-system" Pod="csi-node-driver-t28ks" WorkloadEndpoint="localhost-k8s-csi--node--driver--t28ks-eth0" Dec 13 13:34:15.372757 containerd[1540]: 2024-12-13 13:34:15.335 [INFO][4794] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92c5319b070 ContainerID="3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde" Namespace="calico-system" Pod="csi-node-driver-t28ks" WorkloadEndpoint="localhost-k8s-csi--node--driver--t28ks-eth0" Dec 13 13:34:15.372757 containerd[1540]: 2024-12-13 13:34:15.345 [INFO][4794] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde" Namespace="calico-system" Pod="csi-node-driver-t28ks" WorkloadEndpoint="localhost-k8s-csi--node--driver--t28ks-eth0" Dec 13 13:34:15.372757 containerd[1540]: 2024-12-13 13:34:15.346 [INFO][4794] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde" Namespace="calico-system" 
Pod="csi-node-driver-t28ks" WorkloadEndpoint="localhost-k8s-csi--node--driver--t28ks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t28ks-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d6a0031-7d0e-4f38-97a2-6db6c2123341", ResourceVersion:"616", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 33, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde", Pod:"csi-node-driver-t28ks", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92c5319b070", MAC:"e2:dc:40:25:37:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:15.372757 containerd[1540]: 2024-12-13 13:34:15.367 [INFO][4794] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde" Namespace="calico-system" Pod="csi-node-driver-t28ks" WorkloadEndpoint="localhost-k8s-csi--node--driver--t28ks-eth0" Dec 13 13:34:15.405351 
systemd-networkd[1439]: cali5b4f8397b60: Link UP Dec 13 13:34:15.406623 systemd-networkd[1439]: cali5b4f8397b60: Gained carrier Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:14.771 [INFO][4747] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:14.874 [INFO][4747] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55476b5df--txx74-eth0 calico-apiserver-55476b5df- calico-apiserver 594d5ede-fccb-4fa8-b157-5696299be69a 698 0 2024-12-13 13:33:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55476b5df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55476b5df-txx74 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5b4f8397b60 [] []}} ContainerID="b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134" Namespace="calico-apiserver" Pod="calico-apiserver-55476b5df-txx74" WorkloadEndpoint="localhost-k8s-calico--apiserver--55476b5df--txx74-" Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:14.874 [INFO][4747] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134" Namespace="calico-apiserver" Pod="calico-apiserver-55476b5df-txx74" WorkloadEndpoint="localhost-k8s-calico--apiserver--55476b5df--txx74-eth0" Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:15.143 [INFO][4849] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134" HandleID="k8s-pod-network.b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134" 
Workload="localhost-k8s-calico--apiserver--55476b5df--txx74-eth0" Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:15.179 [INFO][4849] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134" HandleID="k8s-pod-network.b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134" Workload="localhost-k8s-calico--apiserver--55476b5df--txx74-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001039e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55476b5df-txx74", "timestamp":"2024-12-13 13:34:15.143392054 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:15.179 [INFO][4849] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:15.330 [INFO][4849] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:15.330 [INFO][4849] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:15.334 [INFO][4849] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134" host="localhost" Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:15.356 [INFO][4849] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:15.383 [INFO][4849] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:15.384 [INFO][4849] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:15.387 [INFO][4849] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:15.387 [INFO][4849] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134" host="localhost" Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:15.388 [INFO][4849] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134 Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:15.396 [INFO][4849] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134" host="localhost" Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:15.401 [INFO][4849] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134" host="localhost" Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:15.401 [INFO][4849] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134" host="localhost" Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:15.401 [INFO][4849] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:34:15.418196 containerd[1540]: 2024-12-13 13:34:15.401 [INFO][4849] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134" HandleID="k8s-pod-network.b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134" Workload="localhost-k8s-calico--apiserver--55476b5df--txx74-eth0" Dec 13 13:34:15.419671 containerd[1540]: 2024-12-13 13:34:15.403 [INFO][4747] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134" Namespace="calico-apiserver" Pod="calico-apiserver-55476b5df-txx74" WorkloadEndpoint="localhost-k8s-calico--apiserver--55476b5df--txx74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55476b5df--txx74-eth0", GenerateName:"calico-apiserver-55476b5df-", Namespace:"calico-apiserver", SelfLink:"", UID:"594d5ede-fccb-4fa8-b157-5696299be69a", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 33, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55476b5df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55476b5df-txx74", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5b4f8397b60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:15.419671 containerd[1540]: 2024-12-13 13:34:15.403 [INFO][4747] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134" Namespace="calico-apiserver" Pod="calico-apiserver-55476b5df-txx74" WorkloadEndpoint="localhost-k8s-calico--apiserver--55476b5df--txx74-eth0" Dec 13 13:34:15.419671 containerd[1540]: 2024-12-13 13:34:15.403 [INFO][4747] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b4f8397b60 ContainerID="b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134" Namespace="calico-apiserver" Pod="calico-apiserver-55476b5df-txx74" WorkloadEndpoint="localhost-k8s-calico--apiserver--55476b5df--txx74-eth0" Dec 13 13:34:15.419671 containerd[1540]: 2024-12-13 13:34:15.407 [INFO][4747] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134" Namespace="calico-apiserver" Pod="calico-apiserver-55476b5df-txx74" WorkloadEndpoint="localhost-k8s-calico--apiserver--55476b5df--txx74-eth0" Dec 13 13:34:15.419671 containerd[1540]: 2024-12-13 13:34:15.407 [INFO][4747] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134" Namespace="calico-apiserver" Pod="calico-apiserver-55476b5df-txx74" WorkloadEndpoint="localhost-k8s-calico--apiserver--55476b5df--txx74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55476b5df--txx74-eth0", GenerateName:"calico-apiserver-55476b5df-", Namespace:"calico-apiserver", SelfLink:"", UID:"594d5ede-fccb-4fa8-b157-5696299be69a", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 33, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55476b5df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134", Pod:"calico-apiserver-55476b5df-txx74", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5b4f8397b60", MAC:"6a:7e:1b:41:be:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:15.419671 containerd[1540]: 2024-12-13 13:34:15.415 [INFO][4747] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134" 
Namespace="calico-apiserver" Pod="calico-apiserver-55476b5df-txx74" WorkloadEndpoint="localhost-k8s-calico--apiserver--55476b5df--txx74-eth0" Dec 13 13:34:15.434334 systemd-networkd[1439]: cali1277bab4160: Link UP Dec 13 13:34:15.434673 systemd-networkd[1439]: cali1277bab4160: Gained carrier Dec 13 13:34:15.447555 containerd[1540]: time="2024-12-13T13:34:15.447352951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:34:15.447555 containerd[1540]: time="2024-12-13T13:34:15.447399131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:34:15.447555 containerd[1540]: time="2024-12-13T13:34:15.447407047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:15.447555 containerd[1540]: time="2024-12-13T13:34:15.447456046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:14.770 [INFO][4751] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:14.872 [INFO][4751] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--vvl2m-eth0 coredns-76f75df574- kube-system 95ca7197-6257-4f74-a945-b91e5b94e808 694 0 2024-12-13 13:33:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-vvl2m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1277bab4160 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809" Namespace="kube-system" Pod="coredns-76f75df574-vvl2m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--vvl2m-" Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:14.872 [INFO][4751] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809" Namespace="kube-system" Pod="coredns-76f75df574-vvl2m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--vvl2m-eth0" Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:15.144 [INFO][4848] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809" HandleID="k8s-pod-network.c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809" Workload="localhost-k8s-coredns--76f75df574--vvl2m-eth0" Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:15.181 [INFO][4848] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809" HandleID="k8s-pod-network.c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809" Workload="localhost-k8s-coredns--76f75df574--vvl2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000424750), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-vvl2m", "timestamp":"2024-12-13 13:34:15.143686483 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:15.182 [INFO][4848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:15.401 [INFO][4848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:15.401 [INFO][4848] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:15.403 [INFO][4848] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809" host="localhost" Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:15.407 [INFO][4848] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:15.414 [INFO][4848] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:15.417 [INFO][4848] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:15.419 [INFO][4848] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:15.419 [INFO][4848] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809" host="localhost" Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:15.420 [INFO][4848] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809 Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:15.424 [INFO][4848] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809" host="localhost" Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:15.428 [INFO][4848] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809" host="localhost" Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:15.429 [INFO][4848] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809" host="localhost" Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:15.429 [INFO][4848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 13:34:15.459438 containerd[1540]: 2024-12-13 13:34:15.429 [INFO][4848] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809" HandleID="k8s-pod-network.c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809" Workload="localhost-k8s-coredns--76f75df574--vvl2m-eth0" Dec 13 13:34:15.460035 containerd[1540]: 2024-12-13 13:34:15.431 [INFO][4751] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809" Namespace="kube-system" Pod="coredns-76f75df574-vvl2m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--vvl2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--vvl2m-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"95ca7197-6257-4f74-a945-b91e5b94e808", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 33, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-vvl2m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1277bab4160", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:15.460035 containerd[1540]: 2024-12-13 13:34:15.431 [INFO][4751] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809" Namespace="kube-system" Pod="coredns-76f75df574-vvl2m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--vvl2m-eth0" Dec 13 13:34:15.460035 containerd[1540]: 2024-12-13 13:34:15.431 [INFO][4751] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1277bab4160 ContainerID="c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809" Namespace="kube-system" Pod="coredns-76f75df574-vvl2m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--vvl2m-eth0" Dec 13 13:34:15.460035 containerd[1540]: 2024-12-13 13:34:15.434 [INFO][4751] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809" Namespace="kube-system" Pod="coredns-76f75df574-vvl2m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--vvl2m-eth0" Dec 13 13:34:15.460035 containerd[1540]: 2024-12-13 13:34:15.438 [INFO][4751] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809" Namespace="kube-system" Pod="coredns-76f75df574-vvl2m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--vvl2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--vvl2m-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"95ca7197-6257-4f74-a945-b91e5b94e808", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 33, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809", Pod:"coredns-76f75df574-vvl2m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1277bab4160", MAC:"16:64:14:f2:de:c5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:15.460035 containerd[1540]: 2024-12-13 13:34:15.454 [INFO][4751] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809" Namespace="kube-system" 
Pod="coredns-76f75df574-vvl2m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--vvl2m-eth0" Dec 13 13:34:15.461362 containerd[1540]: time="2024-12-13T13:34:15.461091089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:34:15.461362 containerd[1540]: time="2024-12-13T13:34:15.461301762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:34:15.461362 containerd[1540]: time="2024-12-13T13:34:15.461326685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:15.461770 containerd[1540]: time="2024-12-13T13:34:15.461607348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:15.475596 containerd[1540]: time="2024-12-13T13:34:15.475063994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:34:15.475596 containerd[1540]: time="2024-12-13T13:34:15.475218333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:34:15.475596 containerd[1540]: time="2024-12-13T13:34:15.475229238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:15.476408 containerd[1540]: time="2024-12-13T13:34:15.476259407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:15.478389 systemd[1]: Started cri-containerd-3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde.scope - libcontainer container 3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde. 
Dec 13 13:34:15.494080 containerd[1540]: time="2024-12-13T13:34:15.493776105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:34:15.494080 containerd[1540]: time="2024-12-13T13:34:15.493825210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:34:15.494080 containerd[1540]: time="2024-12-13T13:34:15.493838859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:15.494502 systemd[1]: Started cri-containerd-e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55.scope - libcontainer container e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55. Dec 13 13:34:15.495669 containerd[1540]: time="2024-12-13T13:34:15.493938184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:15.514375 systemd[1]: Started cri-containerd-53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99.scope - libcontainer container 53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99. Dec 13 13:34:15.516642 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:34:15.524813 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:34:15.527504 containerd[1540]: time="2024-12-13T13:34:15.527329438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:34:15.527504 containerd[1540]: time="2024-12-13T13:34:15.527369560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:34:15.527504 containerd[1540]: time="2024-12-13T13:34:15.527381303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:15.527504 containerd[1540]: time="2024-12-13T13:34:15.527462512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:15.534587 containerd[1540]: time="2024-12-13T13:34:15.533438906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:34:15.534587 containerd[1540]: time="2024-12-13T13:34:15.533516239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:34:15.534930 containerd[1540]: time="2024-12-13T13:34:15.533867799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:15.534930 containerd[1540]: time="2024-12-13T13:34:15.533940694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:15.537674 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:34:15.557774 systemd[1]: Started cri-containerd-b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000.scope - libcontainer container b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000. 
Dec 13 13:34:15.560849 containerd[1540]: time="2024-12-13T13:34:15.560822225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t28ks,Uid:6d6a0031-7d0e-4f38-97a2-6db6c2123341,Namespace:calico-system,Attempt:6,} returns sandbox id \"3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde\"" Dec 13 13:34:15.564373 containerd[1540]: time="2024-12-13T13:34:15.564347870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 13:34:15.571966 systemd[1]: Started cri-containerd-c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809.scope - libcontainer container c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809. Dec 13 13:34:15.588549 systemd[1]: Started cri-containerd-b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134.scope - libcontainer container b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134. Dec 13 13:34:15.589050 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:34:15.604189 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:34:15.641555 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:34:15.658562 containerd[1540]: time="2024-12-13T13:34:15.656287353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-57pkw,Uid:044a2624-b2b5-4815-a892-56a4b4e7678a,Namespace:kube-system,Attempt:6,} returns sandbox id \"b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000\"" Dec 13 13:34:15.668749 containerd[1540]: time="2024-12-13T13:34:15.668368917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-rctdv,Uid:2b3c7f26-9f5a-42bd-b1a3-f7dc58c816f3,Namespace:calico-apiserver,Attempt:6,} returns sandbox id 
\"e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55\"" Dec 13 13:34:15.668834 containerd[1540]: time="2024-12-13T13:34:15.668769130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74874996db-hwhtg,Uid:1a917722-7626-420c-b334-c54df1962ff7,Namespace:calico-system,Attempt:6,} returns sandbox id \"53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99\"" Dec 13 13:34:15.680619 containerd[1540]: time="2024-12-13T13:34:15.680595031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vvl2m,Uid:95ca7197-6257-4f74-a945-b91e5b94e808,Namespace:kube-system,Attempt:6,} returns sandbox id \"c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809\"" Dec 13 13:34:15.686274 containerd[1540]: time="2024-12-13T13:34:15.685380768Z" level=info msg="CreateContainer within sandbox \"c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:34:15.687014 containerd[1540]: time="2024-12-13T13:34:15.686998934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55476b5df-txx74,Uid:594d5ede-fccb-4fa8-b157-5696299be69a,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134\"" Dec 13 13:34:15.703505 systemd[1]: run-netns-cni\x2dbf64ae83\x2d921f\x2d47a7\x2df8c7\x2d8038d92aba85.mount: Deactivated successfully. Dec 13 13:34:15.710429 containerd[1540]: time="2024-12-13T13:34:15.710340791Z" level=info msg="CreateContainer within sandbox \"b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:34:15.770881 systemd[1]: run-containerd-runc-k8s.io-fb925451c28d035e2f2fb4874031030be9d857fd85c62256109520a2b3ab5cf9-runc.ghKxVM.mount: Deactivated successfully. 
Dec 13 13:34:15.940601 containerd[1540]: time="2024-12-13T13:34:15.940565654Z" level=info msg="CreateContainer within sandbox \"c0e0702f5e6e9797b9a1291f881c117f6a92c78aa0b342bb97ff0ac038af3809\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf16a904604f73b1126641226149a9fcc9398967125003ae1288840e7a6931b2\"" Dec 13 13:34:15.941004 containerd[1540]: time="2024-12-13T13:34:15.940989997Z" level=info msg="StartContainer for \"bf16a904604f73b1126641226149a9fcc9398967125003ae1288840e7a6931b2\"" Dec 13 13:34:15.950131 containerd[1540]: time="2024-12-13T13:34:15.950103607Z" level=info msg="CreateContainer within sandbox \"b2ea575c1817a93af98e03e06a8a177e1d70d9371447b9836ce21c4a90437000\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f80c200ac58c79bc531c5e39bd6f9a7a180a0d70297a2f5bffc069fe1528b97f\"" Dec 13 13:34:15.950619 containerd[1540]: time="2024-12-13T13:34:15.950601571Z" level=info msg="StartContainer for \"f80c200ac58c79bc531c5e39bd6f9a7a180a0d70297a2f5bffc069fe1528b97f\"" Dec 13 13:34:15.965364 systemd[1]: Started cri-containerd-bf16a904604f73b1126641226149a9fcc9398967125003ae1288840e7a6931b2.scope - libcontainer container bf16a904604f73b1126641226149a9fcc9398967125003ae1288840e7a6931b2. Dec 13 13:34:15.978375 systemd[1]: Started cri-containerd-f80c200ac58c79bc531c5e39bd6f9a7a180a0d70297a2f5bffc069fe1528b97f.scope - libcontainer container f80c200ac58c79bc531c5e39bd6f9a7a180a0d70297a2f5bffc069fe1528b97f. 
Dec 13 13:34:16.097683 containerd[1540]: time="2024-12-13T13:34:16.097609660Z" level=info msg="StartContainer for \"f80c200ac58c79bc531c5e39bd6f9a7a180a0d70297a2f5bffc069fe1528b97f\" returns successfully" Dec 13 13:34:16.097683 containerd[1540]: time="2024-12-13T13:34:16.097613456Z" level=info msg="StartContainer for \"bf16a904604f73b1126641226149a9fcc9398967125003ae1288840e7a6931b2\" returns successfully" Dec 13 13:34:16.276373 systemd-networkd[1439]: calib57420b43a0: Gained IPv6LL Dec 13 13:34:16.340984 systemd-networkd[1439]: calic3056c06c91: Gained IPv6LL Dec 13 13:34:16.660486 systemd-networkd[1439]: cali92c5319b070: Gained IPv6LL Dec 13 13:34:16.697149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount550570640.mount: Deactivated successfully. Dec 13 13:34:16.697265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount613823359.mount: Deactivated successfully. Dec 13 13:34:16.725351 systemd-networkd[1439]: calif218a015ba8: Gained IPv6LL Dec 13 13:34:16.844416 kubelet[2835]: I1213 13:34:16.843744 2835 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-57pkw" podStartSLOduration=24.843714805 podStartE2EDuration="24.843714805s" podCreationTimestamp="2024-12-13 13:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:34:16.84267714 +0000 UTC m=+38.645397580" watchObservedRunningTime="2024-12-13 13:34:16.843714805 +0000 UTC m=+38.646435240" Dec 13 13:34:16.852328 systemd-networkd[1439]: cali1277bab4160: Gained IPv6LL Dec 13 13:34:16.980311 systemd-networkd[1439]: cali5b4f8397b60: Gained IPv6LL Dec 13 13:34:17.264378 kubelet[2835]: I1213 13:34:17.264360 2835 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-vvl2m" podStartSLOduration=25.264332077 podStartE2EDuration="25.264332077s" podCreationTimestamp="2024-12-13 13:33:52 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:34:16.869155388 +0000 UTC m=+38.671875828" watchObservedRunningTime="2024-12-13 13:34:17.264332077 +0000 UTC m=+39.067052512" Dec 13 13:34:17.270700 kubelet[2835]: I1213 13:34:17.270257 2835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:34:17.377396 containerd[1540]: time="2024-12-13T13:34:17.377365893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:17.377942 containerd[1540]: time="2024-12-13T13:34:17.377915621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 13:34:17.378312 containerd[1540]: time="2024-12-13T13:34:17.378295471Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:17.379417 containerd[1540]: time="2024-12-13T13:34:17.379390576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:17.380141 containerd[1540]: time="2024-12-13T13:34:17.380125953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.815754221s" Dec 13 13:34:17.380171 containerd[1540]: time="2024-12-13T13:34:17.380142813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference 
\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 13:34:17.380644 containerd[1540]: time="2024-12-13T13:34:17.380585408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 13:34:17.381386 containerd[1540]: time="2024-12-13T13:34:17.381371157Z" level=info msg="CreateContainer within sandbox \"3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 13:34:17.392358 containerd[1540]: time="2024-12-13T13:34:17.392292463Z" level=info msg="CreateContainer within sandbox \"3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4c41258eae4650b4040507fdc1664c74881842a19e7df7e0d41f98e935c5947e\"" Dec 13 13:34:17.392746 containerd[1540]: time="2024-12-13T13:34:17.392726706Z" level=info msg="StartContainer for \"4c41258eae4650b4040507fdc1664c74881842a19e7df7e0d41f98e935c5947e\"" Dec 13 13:34:17.415350 systemd[1]: Started cri-containerd-4c41258eae4650b4040507fdc1664c74881842a19e7df7e0d41f98e935c5947e.scope - libcontainer container 4c41258eae4650b4040507fdc1664c74881842a19e7df7e0d41f98e935c5947e. 
Dec 13 13:34:17.437225 containerd[1540]: time="2024-12-13T13:34:17.437160429Z" level=info msg="StartContainer for \"4c41258eae4650b4040507fdc1664c74881842a19e7df7e0d41f98e935c5947e\" returns successfully" Dec 13 13:34:18.731259 kernel: bpftool[5510]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 13:34:19.148319 systemd-networkd[1439]: vxlan.calico: Link UP Dec 13 13:34:19.148324 systemd-networkd[1439]: vxlan.calico: Gained carrier Dec 13 13:34:19.689578 containerd[1540]: time="2024-12-13T13:34:19.689538936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:19.694857 containerd[1540]: time="2024-12-13T13:34:19.694811410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 13:34:19.695375 containerd[1540]: time="2024-12-13T13:34:19.695344040Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:19.696770 containerd[1540]: time="2024-12-13T13:34:19.696754127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:19.697416 containerd[1540]: time="2024-12-13T13:34:19.697336822Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.316642132s" Dec 13 13:34:19.697416 containerd[1540]: time="2024-12-13T13:34:19.697357700Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 13:34:19.698167 containerd[1540]: time="2024-12-13T13:34:19.697903322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 13:34:19.712662 containerd[1540]: time="2024-12-13T13:34:19.712546138Z" level=info msg="CreateContainer within sandbox \"53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 13:34:19.722727 containerd[1540]: time="2024-12-13T13:34:19.722201424Z" level=info msg="CreateContainer within sandbox \"53bcaef531676c01c1057c25d2b02b1171d712e911d5e862c98d3b2b1f390d99\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f955e4bfefb9c180936f1a58543e09092ba3cce12e694000331c31bbad52ea55\"" Dec 13 13:34:19.723404 containerd[1540]: time="2024-12-13T13:34:19.723378774Z" level=info msg="StartContainer for \"f955e4bfefb9c180936f1a58543e09092ba3cce12e694000331c31bbad52ea55\"" Dec 13 13:34:19.726345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2253402889.mount: Deactivated successfully. Dec 13 13:34:19.768499 systemd[1]: Started cri-containerd-f955e4bfefb9c180936f1a58543e09092ba3cce12e694000331c31bbad52ea55.scope - libcontainer container f955e4bfefb9c180936f1a58543e09092ba3cce12e694000331c31bbad52ea55. 
Dec 13 13:34:19.796649 containerd[1540]: time="2024-12-13T13:34:19.796624520Z" level=info msg="StartContainer for \"f955e4bfefb9c180936f1a58543e09092ba3cce12e694000331c31bbad52ea55\" returns successfully" Dec 13 13:34:19.845427 kubelet[2835]: I1213 13:34:19.845404 2835 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-74874996db-hwhtg" podStartSLOduration=17.818658366 podStartE2EDuration="21.845296398s" podCreationTimestamp="2024-12-13 13:33:58 +0000 UTC" firstStartedPulling="2024-12-13 13:34:15.671025592 +0000 UTC m=+37.473746025" lastFinishedPulling="2024-12-13 13:34:19.69766362 +0000 UTC m=+41.500384057" observedRunningTime="2024-12-13 13:34:19.844978333 +0000 UTC m=+41.647698772" watchObservedRunningTime="2024-12-13 13:34:19.845296398 +0000 UTC m=+41.648016833" Dec 13 13:34:20.436463 systemd-networkd[1439]: vxlan.calico: Gained IPv6LL Dec 13 13:34:20.839927 kubelet[2835]: I1213 13:34:20.839782 2835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:34:21.606928 containerd[1540]: time="2024-12-13T13:34:21.606893853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:21.607836 containerd[1540]: time="2024-12-13T13:34:21.607556112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 13:34:21.608265 containerd[1540]: time="2024-12-13T13:34:21.608251659Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:21.609277 containerd[1540]: time="2024-12-13T13:34:21.609252917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Dec 13 13:34:21.609753 containerd[1540]: time="2024-12-13T13:34:21.609674133Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 1.911749036s" Dec 13 13:34:21.609753 containerd[1540]: time="2024-12-13T13:34:21.609688355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 13:34:21.610199 containerd[1540]: time="2024-12-13T13:34:21.610188388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 13:34:21.611455 containerd[1540]: time="2024-12-13T13:34:21.611440312Z" level=info msg="CreateContainer within sandbox \"e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 13:34:21.629190 containerd[1540]: time="2024-12-13T13:34:21.629122297Z" level=info msg="CreateContainer within sandbox \"e6be5cc5c2ac4116b043183e3d0af23e78c9a1286893eef22267be2133c4dd55\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"00a41991423afcd122bf8315984dea75588b37ebd460018a86248a831c48d884\"" Dec 13 13:34:21.629484 containerd[1540]: time="2024-12-13T13:34:21.629450497Z" level=info msg="StartContainer for \"00a41991423afcd122bf8315984dea75588b37ebd460018a86248a831c48d884\"" Dec 13 13:34:21.658338 systemd[1]: Started cri-containerd-00a41991423afcd122bf8315984dea75588b37ebd460018a86248a831c48d884.scope - libcontainer container 00a41991423afcd122bf8315984dea75588b37ebd460018a86248a831c48d884. 
Dec 13 13:34:21.688196 containerd[1540]: time="2024-12-13T13:34:21.688171117Z" level=info msg="StartContainer for \"00a41991423afcd122bf8315984dea75588b37ebd460018a86248a831c48d884\" returns successfully" Dec 13 13:34:21.858119 kubelet[2835]: I1213 13:34:21.857944 2835 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55476b5df-rctdv" podStartSLOduration=17.921838773 podStartE2EDuration="23.857614409s" podCreationTimestamp="2024-12-13 13:33:58 +0000 UTC" firstStartedPulling="2024-12-13 13:34:15.674299535 +0000 UTC m=+37.477019968" lastFinishedPulling="2024-12-13 13:34:21.610075171 +0000 UTC m=+43.412795604" observedRunningTime="2024-12-13 13:34:21.855685167 +0000 UTC m=+43.658405601" watchObservedRunningTime="2024-12-13 13:34:21.857614409 +0000 UTC m=+43.660334845" Dec 13 13:34:22.289773 containerd[1540]: time="2024-12-13T13:34:22.289711082Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:22.298946 containerd[1540]: time="2024-12-13T13:34:22.298899960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 13:34:22.327795 containerd[1540]: time="2024-12-13T13:34:22.327594344Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 717.335926ms" Dec 13 13:34:22.327795 containerd[1540]: time="2024-12-13T13:34:22.327627975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Dec 13 13:34:22.335011 containerd[1540]: time="2024-12-13T13:34:22.334682769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 13:34:22.346872 containerd[1540]: time="2024-12-13T13:34:22.346857352Z" level=info msg="CreateContainer within sandbox \"b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 13:34:22.413642 containerd[1540]: time="2024-12-13T13:34:22.413610382Z" level=info msg="CreateContainer within sandbox \"b9a4ca080c10bfa44d3c2f9a62fb3a9393107e4bf17d9b8a588b7a67dadcd134\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"422cabd88145d51ac6e03610f9d779d4f5db93d8798a873a03a01b5399704dde\"" Dec 13 13:34:22.433657 containerd[1540]: time="2024-12-13T13:34:22.433138794Z" level=info msg="StartContainer for \"422cabd88145d51ac6e03610f9d779d4f5db93d8798a873a03a01b5399704dde\"" Dec 13 13:34:22.474362 systemd[1]: Started cri-containerd-422cabd88145d51ac6e03610f9d779d4f5db93d8798a873a03a01b5399704dde.scope - libcontainer container 422cabd88145d51ac6e03610f9d779d4f5db93d8798a873a03a01b5399704dde.
Dec 13 13:34:22.505420 containerd[1540]: time="2024-12-13T13:34:22.505399691Z" level=info msg="StartContainer for \"422cabd88145d51ac6e03610f9d779d4f5db93d8798a873a03a01b5399704dde\" returns successfully" Dec 13 13:34:23.848203 kubelet[2835]: I1213 13:34:23.848183 2835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:34:25.087292 containerd[1540]: time="2024-12-13T13:34:25.086896917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:25.087292 containerd[1540]: time="2024-12-13T13:34:25.087273893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 13:34:25.087804 containerd[1540]: time="2024-12-13T13:34:25.087792168Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:25.088931 containerd[1540]: time="2024-12-13T13:34:25.088894388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:25.089341 containerd[1540]: time="2024-12-13T13:34:25.089325824Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.754623615s"
Dec 13 13:34:25.089686 containerd[1540]: time="2024-12-13T13:34:25.089342413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 13:34:25.091532 containerd[1540]: time="2024-12-13T13:34:25.091514030Z" level=info msg="CreateContainer within sandbox \"3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 13:34:25.128018 containerd[1540]: time="2024-12-13T13:34:25.127976630Z" level=info msg="CreateContainer within sandbox \"3f041f3199ede0f353ce8f5636b6d407d45dc1ac5103a2d334a9b272a01dedde\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a4436e0170b7b34e4a86d74757f30cd932bbc35dc2f587db5b061d2693f13674\"" Dec 13 13:34:25.128449 containerd[1540]: time="2024-12-13T13:34:25.128423777Z" level=info msg="StartContainer for \"a4436e0170b7b34e4a86d74757f30cd932bbc35dc2f587db5b061d2693f13674\"" Dec 13 13:34:25.176450 systemd[1]: Started cri-containerd-a4436e0170b7b34e4a86d74757f30cd932bbc35dc2f587db5b061d2693f13674.scope - libcontainer container a4436e0170b7b34e4a86d74757f30cd932bbc35dc2f587db5b061d2693f13674.
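The kubelet pod_startup_latency_tracker entries in this log can be cross-checked from their own fields: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling, read off the m=+ monotonic offsets). A sketch, under that assumed relation, using the calico-apiserver-55476b5df-rctdv entry above; both results reproduce the logged values:

```python
from datetime import datetime, timezone

# Timestamps copied from the log entry (running time truncated to microseconds).
created = datetime(2024, 12, 13, 13, 33, 58, tzinfo=timezone.utc)
running = datetime(2024, 12, 13, 13, 34, 21, 857614, tzinfo=timezone.utc)

first_started_pulling = 37.477019968   # m=+ monotonic offsets, seconds
last_finished_pulling = 43.412795604

# podStartE2EDuration="23.857614409s" in the log.
e2e = (running - created).total_seconds()
# podStartSLOduration=17.921838773 in the log: E2E minus image-pull time.
slo = e2e - (last_finished_pulling - first_started_pulling)
```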
Dec 13 13:34:25.194410 containerd[1540]: time="2024-12-13T13:34:25.194385139Z" level=info msg="StartContainer for \"a4436e0170b7b34e4a86d74757f30cd932bbc35dc2f587db5b061d2693f13674\" returns successfully" Dec 13 13:34:25.530749 kubelet[2835]: I1213 13:34:25.530711 2835 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 13:34:25.536782 kubelet[2835]: I1213 13:34:25.536741 2835 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 13:34:25.865593 kubelet[2835]: I1213 13:34:25.865188 2835 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-t28ks" podStartSLOduration=18.338764403 podStartE2EDuration="27.8651009s" podCreationTimestamp="2024-12-13 13:33:58 +0000 UTC" firstStartedPulling="2024-12-13 13:34:15.563232634 +0000 UTC m=+37.365953066" lastFinishedPulling="2024-12-13 13:34:25.089569132 +0000 UTC m=+46.892289563" observedRunningTime="2024-12-13 13:34:25.864832606 +0000 UTC m=+47.667553047" watchObservedRunningTime="2024-12-13 13:34:25.8651009 +0000 UTC m=+47.667821333" Dec 13 13:34:25.866606 kubelet[2835]: I1213 13:34:25.866511 2835 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55476b5df-txx74" podStartSLOduration=21.226897615 podStartE2EDuration="27.866491978s" podCreationTimestamp="2024-12-13 13:33:58 +0000 UTC" firstStartedPulling="2024-12-13 13:34:15.688884633 +0000 UTC m=+37.491605067" lastFinishedPulling="2024-12-13 13:34:22.328478997 +0000 UTC m=+44.131199430" observedRunningTime="2024-12-13 13:34:22.853996468 +0000 UTC m=+44.656716908" watchObservedRunningTime="2024-12-13 13:34:25.866491978 +0000 UTC m=+47.669212413"
Dec 13 13:34:37.301835 kubelet[2835]: I1213 13:34:37.301805 2835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:34:38.352028 containerd[1540]: time="2024-12-13T13:34:38.351526220Z" level=info msg="StopPodSandbox for \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\"" Dec 13 13:34:38.352028 containerd[1540]: time="2024-12-13T13:34:38.351591428Z" level=info msg="TearDown network for sandbox \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\" successfully" Dec 13 13:34:38.352028 containerd[1540]: time="2024-12-13T13:34:38.351598384Z" level=info msg="StopPodSandbox for \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\" returns successfully" Dec 13 13:34:38.384375 containerd[1540]: time="2024-12-13T13:34:38.384348125Z" level=info msg="RemovePodSandbox for \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\"" Dec 13 13:34:38.386628 containerd[1540]: time="2024-12-13T13:34:38.386612792Z" level=info msg="Forcibly stopping sandbox \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\"" Dec 13 13:34:38.393642 containerd[1540]: time="2024-12-13T13:34:38.386669076Z" level=info msg="TearDown network for sandbox \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\" successfully" Dec 13 13:34:38.399765 containerd[1540]: time="2024-12-13T13:34:38.399742202Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:38.407408 containerd[1540]: time="2024-12-13T13:34:38.407385539Z" level=info msg="RemovePodSandbox \"00f697d42c2d50b3139e5b825119ed91816dde91442ca398a78122299baeac17\" returns successfully" Dec 13 13:34:38.407775 containerd[1540]: time="2024-12-13T13:34:38.407765426Z" level=info msg="StopPodSandbox for \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\"" Dec 13 13:34:38.408496 containerd[1540]: time="2024-12-13T13:34:38.407900594Z" level=info msg="TearDown network for sandbox \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\" successfully" Dec 13 13:34:38.408496 containerd[1540]: time="2024-12-13T13:34:38.407908171Z" level=info msg="StopPodSandbox for \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\" returns successfully" Dec 13 13:34:38.408496 containerd[1540]: time="2024-12-13T13:34:38.408166745Z" level=info msg="RemovePodSandbox for \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\"" Dec 13 13:34:38.408496 containerd[1540]: time="2024-12-13T13:34:38.408178743Z" level=info msg="Forcibly stopping sandbox \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\"" Dec 13 13:34:38.408496 containerd[1540]: time="2024-12-13T13:34:38.408266229Z" level=info msg="TearDown network for sandbox \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\" successfully" Dec 13 13:34:38.409738 containerd[1540]: time="2024-12-13T13:34:38.409723727Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.409775 containerd[1540]: time="2024-12-13T13:34:38.409750998Z" level=info msg="RemovePodSandbox \"c6df07d0815948047f4d6cc2c7af0ea066838f1e44745877608753226f54289c\" returns successfully" Dec 13 13:34:38.409933 containerd[1540]: time="2024-12-13T13:34:38.409924039Z" level=info msg="StopPodSandbox for \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\"" Dec 13 13:34:38.410052 containerd[1540]: time="2024-12-13T13:34:38.410007327Z" level=info msg="TearDown network for sandbox \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\" successfully" Dec 13 13:34:38.410052 containerd[1540]: time="2024-12-13T13:34:38.410015327Z" level=info msg="StopPodSandbox for \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\" returns successfully" Dec 13 13:34:38.411071 containerd[1540]: time="2024-12-13T13:34:38.410189955Z" level=info msg="RemovePodSandbox for \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\"" Dec 13 13:34:38.411071 containerd[1540]: time="2024-12-13T13:34:38.410199877Z" level=info msg="Forcibly stopping sandbox \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\"" Dec 13 13:34:38.411071 containerd[1540]: time="2024-12-13T13:34:38.410267088Z" level=info msg="TearDown network for sandbox \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\" successfully" Dec 13 13:34:38.411552 containerd[1540]: time="2024-12-13T13:34:38.411497588Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.411552 containerd[1540]: time="2024-12-13T13:34:38.411520431Z" level=info msg="RemovePodSandbox \"b728d27349f9d125051aedc463cf331f1faf1bcc698455461a967ac0b5d621f1\" returns successfully" Dec 13 13:34:38.411702 containerd[1540]: time="2024-12-13T13:34:38.411688541Z" level=info msg="StopPodSandbox for \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\"" Dec 13 13:34:38.411753 containerd[1540]: time="2024-12-13T13:34:38.411740960Z" level=info msg="TearDown network for sandbox \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\" successfully" Dec 13 13:34:38.411753 containerd[1540]: time="2024-12-13T13:34:38.411750453Z" level=info msg="StopPodSandbox for \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\" returns successfully" Dec 13 13:34:38.412385 containerd[1540]: time="2024-12-13T13:34:38.411863924Z" level=info msg="RemovePodSandbox for \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\"" Dec 13 13:34:38.412385 containerd[1540]: time="2024-12-13T13:34:38.411874221Z" level=info msg="Forcibly stopping sandbox \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\"" Dec 13 13:34:38.412385 containerd[1540]: time="2024-12-13T13:34:38.411902667Z" level=info msg="TearDown network for sandbox \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\" successfully" Dec 13 13:34:38.416217 containerd[1540]: time="2024-12-13T13:34:38.412977565Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.416217 containerd[1540]: time="2024-12-13T13:34:38.412996515Z" level=info msg="RemovePodSandbox \"bfb40143397fd98326b98c45dd1f4f82c460a9b0879d7bc1fede2668ca7ef3e0\" returns successfully" Dec 13 13:34:38.416217 containerd[1540]: time="2024-12-13T13:34:38.413134109Z" level=info msg="StopPodSandbox for \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\"" Dec 13 13:34:38.416217 containerd[1540]: time="2024-12-13T13:34:38.413182392Z" level=info msg="TearDown network for sandbox \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\" successfully" Dec 13 13:34:38.416217 containerd[1540]: time="2024-12-13T13:34:38.413188965Z" level=info msg="StopPodSandbox for \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\" returns successfully" Dec 13 13:34:38.416217 containerd[1540]: time="2024-12-13T13:34:38.413360620Z" level=info msg="RemovePodSandbox for \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\"" Dec 13 13:34:38.416217 containerd[1540]: time="2024-12-13T13:34:38.413370490Z" level=info msg="Forcibly stopping sandbox \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\"" Dec 13 13:34:38.416217 containerd[1540]: time="2024-12-13T13:34:38.413405222Z" level=info msg="TearDown network for sandbox \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\" successfully" Dec 13 13:34:38.416217 containerd[1540]: time="2024-12-13T13:34:38.414401423Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.416217 containerd[1540]: time="2024-12-13T13:34:38.414418149Z" level=info msg="RemovePodSandbox \"ef13314c0568659bbaea1088d4d3e9c2a434cb3d9920a493b16a6b9aee714104\" returns successfully" Dec 13 13:34:38.416217 containerd[1540]: time="2024-12-13T13:34:38.414587483Z" level=info msg="StopPodSandbox for \"1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a\"" Dec 13 13:34:38.416217 containerd[1540]: time="2024-12-13T13:34:38.414625433Z" level=info msg="TearDown network for sandbox \"1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a\" successfully" Dec 13 13:34:38.416217 containerd[1540]: time="2024-12-13T13:34:38.414631553Z" level=info msg="StopPodSandbox for \"1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a\" returns successfully" Dec 13 13:34:38.416217 containerd[1540]: time="2024-12-13T13:34:38.414781190Z" level=info msg="RemovePodSandbox for \"1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a\"" Dec 13 13:34:38.416217 containerd[1540]: time="2024-12-13T13:34:38.414790298Z" level=info msg="Forcibly stopping sandbox \"1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a\"" Dec 13 13:34:38.416217 containerd[1540]: time="2024-12-13T13:34:38.414817673Z" level=info msg="TearDown network for sandbox \"1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a\" successfully" Dec 13 13:34:38.418978 containerd[1540]: time="2024-12-13T13:34:38.418964264Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.419006 containerd[1540]: time="2024-12-13T13:34:38.418985751Z" level=info msg="RemovePodSandbox \"1158c743f2cb761e3f68fd51d3f40279e2ac1cd4688beaaf49ed92ee8ea6947a\" returns successfully" Dec 13 13:34:38.419173 containerd[1540]: time="2024-12-13T13:34:38.419159246Z" level=info msg="StopPodSandbox for \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\"" Dec 13 13:34:38.419210 containerd[1540]: time="2024-12-13T13:34:38.419199850Z" level=info msg="TearDown network for sandbox \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\" successfully" Dec 13 13:34:38.419210 containerd[1540]: time="2024-12-13T13:34:38.419207561Z" level=info msg="StopPodSandbox for \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\" returns successfully" Dec 13 13:34:38.421269 containerd[1540]: time="2024-12-13T13:34:38.419345255Z" level=info msg="RemovePodSandbox for \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\"" Dec 13 13:34:38.421269 containerd[1540]: time="2024-12-13T13:34:38.419355847Z" level=info msg="Forcibly stopping sandbox \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\"" Dec 13 13:34:38.421269 containerd[1540]: time="2024-12-13T13:34:38.419384617Z" level=info msg="TearDown network for sandbox \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\" successfully" Dec 13 13:34:38.421269 containerd[1540]: time="2024-12-13T13:34:38.420506710Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.421269 containerd[1540]: time="2024-12-13T13:34:38.420523583Z" level=info msg="RemovePodSandbox \"ae609dfc2992fdb2c11066c2b9c3abd85054910235b779057a4858a878ea95d7\" returns successfully" Dec 13 13:34:38.421269 containerd[1540]: time="2024-12-13T13:34:38.420677545Z" level=info msg="StopPodSandbox for \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\"" Dec 13 13:34:38.421269 containerd[1540]: time="2024-12-13T13:34:38.420747663Z" level=info msg="TearDown network for sandbox \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\" successfully" Dec 13 13:34:38.421269 containerd[1540]: time="2024-12-13T13:34:38.420758248Z" level=info msg="StopPodSandbox for \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\" returns successfully" Dec 13 13:34:38.421269 containerd[1540]: time="2024-12-13T13:34:38.420879881Z" level=info msg="RemovePodSandbox for \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\"" Dec 13 13:34:38.421269 containerd[1540]: time="2024-12-13T13:34:38.420889963Z" level=info msg="Forcibly stopping sandbox \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\"" Dec 13 13:34:38.421269 containerd[1540]: time="2024-12-13T13:34:38.420946778Z" level=info msg="TearDown network for sandbox \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\" successfully" Dec 13 13:34:38.422194 containerd[1540]: time="2024-12-13T13:34:38.422179264Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.422225 containerd[1540]: time="2024-12-13T13:34:38.422199249Z" level=info msg="RemovePodSandbox \"aa7c49f6c656374edc32cfc35744958459f47fb19807c32786714cc939851a3a\" returns successfully" Dec 13 13:34:38.422429 containerd[1540]: time="2024-12-13T13:34:38.422354613Z" level=info msg="StopPodSandbox for \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\"" Dec 13 13:34:38.422429 containerd[1540]: time="2024-12-13T13:34:38.422395889Z" level=info msg="TearDown network for sandbox \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\" successfully" Dec 13 13:34:38.422429 containerd[1540]: time="2024-12-13T13:34:38.422402163Z" level=info msg="StopPodSandbox for \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\" returns successfully" Dec 13 13:34:38.422571 containerd[1540]: time="2024-12-13T13:34:38.422561993Z" level=info msg="RemovePodSandbox for \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\"" Dec 13 13:34:38.422619 containerd[1540]: time="2024-12-13T13:34:38.422603993Z" level=info msg="Forcibly stopping sandbox \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\"" Dec 13 13:34:38.422696 containerd[1540]: time="2024-12-13T13:34:38.422672215Z" level=info msg="TearDown network for sandbox \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\" successfully" Dec 13 13:34:38.423827 containerd[1540]: time="2024-12-13T13:34:38.423776981Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.423827 containerd[1540]: time="2024-12-13T13:34:38.423795903Z" level=info msg="RemovePodSandbox \"0b7439b43e35cac25488bd28ac234bf2d14fdc7eb34af6bf4fc4f0e2029647f1\" returns successfully" Dec 13 13:34:38.423968 containerd[1540]: time="2024-12-13T13:34:38.423955832Z" level=info msg="StopPodSandbox for \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\"" Dec 13 13:34:38.424444 containerd[1540]: time="2024-12-13T13:34:38.423998107Z" level=info msg="TearDown network for sandbox \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\" successfully" Dec 13 13:34:38.424444 containerd[1540]: time="2024-12-13T13:34:38.424004336Z" level=info msg="StopPodSandbox for \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\" returns successfully" Dec 13 13:34:38.424444 containerd[1540]: time="2024-12-13T13:34:38.424126811Z" level=info msg="RemovePodSandbox for \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\"" Dec 13 13:34:38.424444 containerd[1540]: time="2024-12-13T13:34:38.424137125Z" level=info msg="Forcibly stopping sandbox \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\"" Dec 13 13:34:38.424444 containerd[1540]: time="2024-12-13T13:34:38.424209404Z" level=info msg="TearDown network for sandbox \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\" successfully" Dec 13 13:34:38.425271 containerd[1540]: time="2024-12-13T13:34:38.425256806Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.425301 containerd[1540]: time="2024-12-13T13:34:38.425276836Z" level=info msg="RemovePodSandbox \"e5017c5028ba35b9b6aa1ab31287cb736abfbdd2030510101d39e2a50c82f63d\" returns successfully" Dec 13 13:34:38.425406 containerd[1540]: time="2024-12-13T13:34:38.425392038Z" level=info msg="StopPodSandbox for \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\"" Dec 13 13:34:38.425439 containerd[1540]: time="2024-12-13T13:34:38.425432616Z" level=info msg="TearDown network for sandbox \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\" successfully" Dec 13 13:34:38.425463 containerd[1540]: time="2024-12-13T13:34:38.425438273Z" level=info msg="StopPodSandbox for \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\" returns successfully" Dec 13 13:34:38.425575 containerd[1540]: time="2024-12-13T13:34:38.425561526Z" level=info msg="RemovePodSandbox for \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\"" Dec 13 13:34:38.425606 containerd[1540]: time="2024-12-13T13:34:38.425575505Z" level=info msg="Forcibly stopping sandbox \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\"" Dec 13 13:34:38.425669 containerd[1540]: time="2024-12-13T13:34:38.425647338Z" level=info msg="TearDown network for sandbox \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\" successfully" Dec 13 13:34:38.426665 containerd[1540]: time="2024-12-13T13:34:38.426651347Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.426695 containerd[1540]: time="2024-12-13T13:34:38.426670007Z" level=info msg="RemovePodSandbox \"4c8f6eabdd280340942ebf86e9998375a1f8276fc220abdd7cf1f763f9862018\" returns successfully" Dec 13 13:34:38.426793 containerd[1540]: time="2024-12-13T13:34:38.426777169Z" level=info msg="StopPodSandbox for \"060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc\"" Dec 13 13:34:38.426951 containerd[1540]: time="2024-12-13T13:34:38.426890418Z" level=info msg="TearDown network for sandbox \"060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc\" successfully" Dec 13 13:34:38.426951 containerd[1540]: time="2024-12-13T13:34:38.426906086Z" level=info msg="StopPodSandbox for \"060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc\" returns successfully" Dec 13 13:34:38.427061 containerd[1540]: time="2024-12-13T13:34:38.427023492Z" level=info msg="RemovePodSandbox for \"060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc\"" Dec 13 13:34:38.427061 containerd[1540]: time="2024-12-13T13:34:38.427035412Z" level=info msg="Forcibly stopping sandbox \"060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc\"" Dec 13 13:34:38.427117 containerd[1540]: time="2024-12-13T13:34:38.427066322Z" level=info msg="TearDown network for sandbox \"060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc\" successfully" Dec 13 13:34:38.428091 containerd[1540]: time="2024-12-13T13:34:38.428076840Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.428147 containerd[1540]: time="2024-12-13T13:34:38.428095492Z" level=info msg="RemovePodSandbox \"060405da8e5828bef9dc11af75aa700c40fc7072fb9501ae49ebf1158ffabedc\" returns successfully" Dec 13 13:34:38.428699 containerd[1540]: time="2024-12-13T13:34:38.428532448Z" level=info msg="StopPodSandbox for \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\"" Dec 13 13:34:38.428699 containerd[1540]: time="2024-12-13T13:34:38.428590443Z" level=info msg="TearDown network for sandbox \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\" successfully" Dec 13 13:34:38.428699 containerd[1540]: time="2024-12-13T13:34:38.428597459Z" level=info msg="StopPodSandbox for \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\" returns successfully" Dec 13 13:34:38.429194 containerd[1540]: time="2024-12-13T13:34:38.428988810Z" level=info msg="RemovePodSandbox for \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\"" Dec 13 13:34:38.429194 containerd[1540]: time="2024-12-13T13:34:38.429005125Z" level=info msg="Forcibly stopping sandbox \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\"" Dec 13 13:34:38.429194 containerd[1540]: time="2024-12-13T13:34:38.429112628Z" level=info msg="TearDown network for sandbox \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\" successfully" Dec 13 13:34:38.430665 containerd[1540]: time="2024-12-13T13:34:38.430604265Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.430665 containerd[1540]: time="2024-12-13T13:34:38.430631267Z" level=info msg="RemovePodSandbox \"380b70df06034fce2ee4e2503b62611323260cf3362cc30533c33ca8924f6498\" returns successfully" Dec 13 13:34:38.431078 containerd[1540]: time="2024-12-13T13:34:38.430957103Z" level=info msg="StopPodSandbox for \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\"" Dec 13 13:34:38.431078 containerd[1540]: time="2024-12-13T13:34:38.431017338Z" level=info msg="TearDown network for sandbox \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\" successfully" Dec 13 13:34:38.431078 containerd[1540]: time="2024-12-13T13:34:38.431028356Z" level=info msg="StopPodSandbox for \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\" returns successfully" Dec 13 13:34:38.431789 containerd[1540]: time="2024-12-13T13:34:38.431198840Z" level=info msg="RemovePodSandbox for \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\"" Dec 13 13:34:38.431789 containerd[1540]: time="2024-12-13T13:34:38.431212279Z" level=info msg="Forcibly stopping sandbox \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\"" Dec 13 13:34:38.431789 containerd[1540]: time="2024-12-13T13:34:38.431279648Z" level=info msg="TearDown network for sandbox \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\" successfully" Dec 13 13:34:38.432440 containerd[1540]: time="2024-12-13T13:34:38.432425990Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.432464 containerd[1540]: time="2024-12-13T13:34:38.432446529Z" level=info msg="RemovePodSandbox \"0a82589ee1ce9211a28b404bf05d8a5d55870d00dbaa6f040e19fd5860a3aa2e\" returns successfully" Dec 13 13:34:38.433019 containerd[1540]: time="2024-12-13T13:34:38.432665506Z" level=info msg="StopPodSandbox for \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\"" Dec 13 13:34:38.433019 containerd[1540]: time="2024-12-13T13:34:38.432702769Z" level=info msg="TearDown network for sandbox \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\" successfully" Dec 13 13:34:38.433019 containerd[1540]: time="2024-12-13T13:34:38.432709018Z" level=info msg="StopPodSandbox for \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\" returns successfully" Dec 13 13:34:38.433019 containerd[1540]: time="2024-12-13T13:34:38.432833063Z" level=info msg="RemovePodSandbox for \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\"" Dec 13 13:34:38.433019 containerd[1540]: time="2024-12-13T13:34:38.432843581Z" level=info msg="Forcibly stopping sandbox \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\"" Dec 13 13:34:38.433019 containerd[1540]: time="2024-12-13T13:34:38.432879414Z" level=info msg="TearDown network for sandbox \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\" successfully" Dec 13 13:34:38.433937 containerd[1540]: time="2024-12-13T13:34:38.433921883Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.435056 containerd[1540]: time="2024-12-13T13:34:38.433941792Z" level=info msg="RemovePodSandbox \"e95ee3e3a3545f69ea30e365faf744111ed0f2cfd0089e3b48f8cf66caab7830\" returns successfully" Dec 13 13:34:38.435056 containerd[1540]: time="2024-12-13T13:34:38.434219705Z" level=info msg="StopPodSandbox for \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\"" Dec 13 13:34:38.435056 containerd[1540]: time="2024-12-13T13:34:38.434268869Z" level=info msg="TearDown network for sandbox \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\" successfully" Dec 13 13:34:38.435056 containerd[1540]: time="2024-12-13T13:34:38.434274677Z" level=info msg="StopPodSandbox for \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\" returns successfully" Dec 13 13:34:38.435056 containerd[1540]: time="2024-12-13T13:34:38.434464116Z" level=info msg="RemovePodSandbox for \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\"" Dec 13 13:34:38.435056 containerd[1540]: time="2024-12-13T13:34:38.434475075Z" level=info msg="Forcibly stopping sandbox \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\"" Dec 13 13:34:38.435056 containerd[1540]: time="2024-12-13T13:34:38.434532336Z" level=info msg="TearDown network for sandbox \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\" successfully" Dec 13 13:34:38.435770 containerd[1540]: time="2024-12-13T13:34:38.435755913Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.436698 containerd[1540]: time="2024-12-13T13:34:38.435775700Z" level=info msg="RemovePodSandbox \"d46b92f868fc17306eeb28b300837c3c680502e5586b50517041c78a1b14f039\" returns successfully" Dec 13 13:34:38.436698 containerd[1540]: time="2024-12-13T13:34:38.435886270Z" level=info msg="StopPodSandbox for \"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\"" Dec 13 13:34:38.436698 containerd[1540]: time="2024-12-13T13:34:38.435921841Z" level=info msg="TearDown network for sandbox \"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\" successfully" Dec 13 13:34:38.436698 containerd[1540]: time="2024-12-13T13:34:38.435927207Z" level=info msg="StopPodSandbox for \"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\" returns successfully" Dec 13 13:34:38.436698 containerd[1540]: time="2024-12-13T13:34:38.436043620Z" level=info msg="RemovePodSandbox for \"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\"" Dec 13 13:34:38.436698 containerd[1540]: time="2024-12-13T13:34:38.436054023Z" level=info msg="Forcibly stopping sandbox \"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\"" Dec 13 13:34:38.436698 containerd[1540]: time="2024-12-13T13:34:38.436092417Z" level=info msg="TearDown network for sandbox \"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\" successfully" Dec 13 13:34:38.437392 containerd[1540]: time="2024-12-13T13:34:38.437342143Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.437392 containerd[1540]: time="2024-12-13T13:34:38.437360348Z" level=info msg="RemovePodSandbox \"d44e8ab5c34fff971412b45b7052f7b56af24b5248bf6ec7cce270d854f1032c\" returns successfully" Dec 13 13:34:38.437578 containerd[1540]: time="2024-12-13T13:34:38.437486193Z" level=info msg="StopPodSandbox for \"363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8\"" Dec 13 13:34:38.437578 containerd[1540]: time="2024-12-13T13:34:38.437530166Z" level=info msg="TearDown network for sandbox \"363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8\" successfully" Dec 13 13:34:38.437578 containerd[1540]: time="2024-12-13T13:34:38.437536555Z" level=info msg="StopPodSandbox for \"363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8\" returns successfully" Dec 13 13:34:38.437647 containerd[1540]: time="2024-12-13T13:34:38.437629914Z" level=info msg="RemovePodSandbox for \"363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8\"" Dec 13 13:34:38.437647 containerd[1540]: time="2024-12-13T13:34:38.437642047Z" level=info msg="Forcibly stopping sandbox \"363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8\"" Dec 13 13:34:38.437691 containerd[1540]: time="2024-12-13T13:34:38.437671521Z" level=info msg="TearDown network for sandbox \"363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8\" successfully" Dec 13 13:34:38.438680 containerd[1540]: time="2024-12-13T13:34:38.438665497Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.438759 containerd[1540]: time="2024-12-13T13:34:38.438685737Z" level=info msg="RemovePodSandbox \"363ffd66381413d5bd85c719e4a3d228427a9028d883ef008f1b6ec2d0ee71f8\" returns successfully" Dec 13 13:34:38.438943 containerd[1540]: time="2024-12-13T13:34:38.438845080Z" level=info msg="StopPodSandbox for \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\"" Dec 13 13:34:38.438943 containerd[1540]: time="2024-12-13T13:34:38.438915130Z" level=info msg="TearDown network for sandbox \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\" successfully" Dec 13 13:34:38.438943 containerd[1540]: time="2024-12-13T13:34:38.438920807Z" level=info msg="StopPodSandbox for \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\" returns successfully" Dec 13 13:34:38.439250 containerd[1540]: time="2024-12-13T13:34:38.439134715Z" level=info msg="RemovePodSandbox for \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\"" Dec 13 13:34:38.439250 containerd[1540]: time="2024-12-13T13:34:38.439146686Z" level=info msg="Forcibly stopping sandbox \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\"" Dec 13 13:34:38.439250 containerd[1540]: time="2024-12-13T13:34:38.439193989Z" level=info msg="TearDown network for sandbox \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\" successfully" Dec 13 13:34:38.440666 containerd[1540]: time="2024-12-13T13:34:38.440596485Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.440666 containerd[1540]: time="2024-12-13T13:34:38.440631744Z" level=info msg="RemovePodSandbox \"9c73e609d1e184c6a8d67a7f217dec6cfe2db47a60b8873672c6aea8607a4de1\" returns successfully" Dec 13 13:34:38.440962 containerd[1540]: time="2024-12-13T13:34:38.440819173Z" level=info msg="StopPodSandbox for \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\"" Dec 13 13:34:38.440962 containerd[1540]: time="2024-12-13T13:34:38.440916519Z" level=info msg="TearDown network for sandbox \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\" successfully" Dec 13 13:34:38.440962 containerd[1540]: time="2024-12-13T13:34:38.440924032Z" level=info msg="StopPodSandbox for \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\" returns successfully" Dec 13 13:34:38.441177 containerd[1540]: time="2024-12-13T13:34:38.441150214Z" level=info msg="RemovePodSandbox for \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\"" Dec 13 13:34:38.441177 containerd[1540]: time="2024-12-13T13:34:38.441164693Z" level=info msg="Forcibly stopping sandbox \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\"" Dec 13 13:34:38.441778 containerd[1540]: time="2024-12-13T13:34:38.441199282Z" level=info msg="TearDown network for sandbox \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\" successfully" Dec 13 13:34:38.442345 containerd[1540]: time="2024-12-13T13:34:38.442327822Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.442378 containerd[1540]: time="2024-12-13T13:34:38.442350570Z" level=info msg="RemovePodSandbox \"1b826aa3b1f3ae55bbafa8172c171a0658192d78431f4568b1172509d1bb6230\" returns successfully" Dec 13 13:34:38.442510 containerd[1540]: time="2024-12-13T13:34:38.442497460Z" level=info msg="StopPodSandbox for \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\"" Dec 13 13:34:38.442547 containerd[1540]: time="2024-12-13T13:34:38.442537027Z" level=info msg="TearDown network for sandbox \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\" successfully" Dec 13 13:34:38.442547 containerd[1540]: time="2024-12-13T13:34:38.442545717Z" level=info msg="StopPodSandbox for \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\" returns successfully" Dec 13 13:34:38.443149 containerd[1540]: time="2024-12-13T13:34:38.442788562Z" level=info msg="RemovePodSandbox for \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\"" Dec 13 13:34:38.443149 containerd[1540]: time="2024-12-13T13:34:38.442800794Z" level=info msg="Forcibly stopping sandbox \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\"" Dec 13 13:34:38.443149 containerd[1540]: time="2024-12-13T13:34:38.442834696Z" level=info msg="TearDown network for sandbox \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\" successfully" Dec 13 13:34:38.443911 containerd[1540]: time="2024-12-13T13:34:38.443896461Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.443937 containerd[1540]: time="2024-12-13T13:34:38.443917178Z" level=info msg="RemovePodSandbox \"9440f629d3fadd2bec6a0fdf543fbb5d75d841326e7be4bf0c7907196eb63e30\" returns successfully" Dec 13 13:34:38.444090 containerd[1540]: time="2024-12-13T13:34:38.444068202Z" level=info msg="StopPodSandbox for \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\"" Dec 13 13:34:38.445256 containerd[1540]: time="2024-12-13T13:34:38.444345503Z" level=info msg="TearDown network for sandbox \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\" successfully" Dec 13 13:34:38.445256 containerd[1540]: time="2024-12-13T13:34:38.444380167Z" level=info msg="StopPodSandbox for \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\" returns successfully" Dec 13 13:34:38.446458 containerd[1540]: time="2024-12-13T13:34:38.445864206Z" level=info msg="RemovePodSandbox for \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\"" Dec 13 13:34:38.446458 containerd[1540]: time="2024-12-13T13:34:38.445880198Z" level=info msg="Forcibly stopping sandbox \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\"" Dec 13 13:34:38.446458 containerd[1540]: time="2024-12-13T13:34:38.445924555Z" level=info msg="TearDown network for sandbox \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\" successfully" Dec 13 13:34:38.447860 containerd[1540]: time="2024-12-13T13:34:38.447840455Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.447929 containerd[1540]: time="2024-12-13T13:34:38.447919645Z" level=info msg="RemovePodSandbox \"d7b4d845515e5010111f72d28fb6298ea56240c3f9e45380884271fce541ca12\" returns successfully" Dec 13 13:34:38.448131 containerd[1540]: time="2024-12-13T13:34:38.448117698Z" level=info msg="StopPodSandbox for \"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\"" Dec 13 13:34:38.448219 containerd[1540]: time="2024-12-13T13:34:38.448211173Z" level=info msg="TearDown network for sandbox \"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\" successfully" Dec 13 13:34:38.449153 containerd[1540]: time="2024-12-13T13:34:38.448424726Z" level=info msg="StopPodSandbox for \"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\" returns successfully" Dec 13 13:34:38.449153 containerd[1540]: time="2024-12-13T13:34:38.448558272Z" level=info msg="RemovePodSandbox for \"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\"" Dec 13 13:34:38.449153 containerd[1540]: time="2024-12-13T13:34:38.448571580Z" level=info msg="Forcibly stopping sandbox \"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\"" Dec 13 13:34:38.449153 containerd[1540]: time="2024-12-13T13:34:38.448613953Z" level=info msg="TearDown network for sandbox \"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\" successfully" Dec 13 13:34:38.450176 containerd[1540]: time="2024-12-13T13:34:38.450068800Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.450176 containerd[1540]: time="2024-12-13T13:34:38.450091420Z" level=info msg="RemovePodSandbox \"a174934634c9372a6809b18365887d5a7720a83e221795127c2e46157e91c642\" returns successfully" Dec 13 13:34:38.450332 containerd[1540]: time="2024-12-13T13:34:38.450288298Z" level=info msg="StopPodSandbox for \"87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412\"" Dec 13 13:34:38.450466 containerd[1540]: time="2024-12-13T13:34:38.450403870Z" level=info msg="TearDown network for sandbox \"87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412\" successfully" Dec 13 13:34:38.450466 containerd[1540]: time="2024-12-13T13:34:38.450432748Z" level=info msg="StopPodSandbox for \"87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412\" returns successfully" Dec 13 13:34:38.451809 containerd[1540]: time="2024-12-13T13:34:38.451791627Z" level=info msg="RemovePodSandbox for \"87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412\"" Dec 13 13:34:38.451809 containerd[1540]: time="2024-12-13T13:34:38.451807718Z" level=info msg="Forcibly stopping sandbox \"87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412\"" Dec 13 13:34:38.451865 containerd[1540]: time="2024-12-13T13:34:38.451840012Z" level=info msg="TearDown network for sandbox \"87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412\" successfully" Dec 13 13:34:38.453031 containerd[1540]: time="2024-12-13T13:34:38.453015731Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.453067 containerd[1540]: time="2024-12-13T13:34:38.453038470Z" level=info msg="RemovePodSandbox \"87af3552c621dcbb760efc629c7ee824a46040fe91c0ca0574f446a598d03412\" returns successfully" Dec 13 13:34:38.453205 containerd[1540]: time="2024-12-13T13:34:38.453189128Z" level=info msg="StopPodSandbox for \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\"" Dec 13 13:34:38.453333 containerd[1540]: time="2024-12-13T13:34:38.453229479Z" level=info msg="TearDown network for sandbox \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\" successfully" Dec 13 13:34:38.453333 containerd[1540]: time="2024-12-13T13:34:38.453246373Z" level=info msg="StopPodSandbox for \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\" returns successfully" Dec 13 13:34:38.453377 containerd[1540]: time="2024-12-13T13:34:38.453366534Z" level=info msg="RemovePodSandbox for \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\"" Dec 13 13:34:38.453852 containerd[1540]: time="2024-12-13T13:34:38.453376532Z" level=info msg="Forcibly stopping sandbox \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\"" Dec 13 13:34:38.453852 containerd[1540]: time="2024-12-13T13:34:38.453408106Z" level=info msg="TearDown network for sandbox \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\" successfully" Dec 13 13:34:38.454745 containerd[1540]: time="2024-12-13T13:34:38.454729711Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.454786 containerd[1540]: time="2024-12-13T13:34:38.454750819Z" level=info msg="RemovePodSandbox \"de2a3220a48c9e9cc9b729ae9f0eb51a9e87590aadfcb088a4ac339b6d2a4bcb\" returns successfully" Dec 13 13:34:38.455020 containerd[1540]: time="2024-12-13T13:34:38.454904532Z" level=info msg="StopPodSandbox for \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\"" Dec 13 13:34:38.455020 containerd[1540]: time="2024-12-13T13:34:38.454955628Z" level=info msg="TearDown network for sandbox \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\" successfully" Dec 13 13:34:38.455020 containerd[1540]: time="2024-12-13T13:34:38.454962170Z" level=info msg="StopPodSandbox for \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\" returns successfully" Dec 13 13:34:38.455279 containerd[1540]: time="2024-12-13T13:34:38.455146408Z" level=info msg="RemovePodSandbox for \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\"" Dec 13 13:34:38.455279 containerd[1540]: time="2024-12-13T13:34:38.455157465Z" level=info msg="Forcibly stopping sandbox \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\"" Dec 13 13:34:38.455279 containerd[1540]: time="2024-12-13T13:34:38.455240088Z" level=info msg="TearDown network for sandbox \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\" successfully" Dec 13 13:34:38.456403 containerd[1540]: time="2024-12-13T13:34:38.456339818Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.456403 containerd[1540]: time="2024-12-13T13:34:38.456358136Z" level=info msg="RemovePodSandbox \"31b9e46dd01ac8c3e96533dd860cad6d5ba5c4f5c8e5c992dc17ab64757b4ac1\" returns successfully" Dec 13 13:34:38.456594 containerd[1540]: time="2024-12-13T13:34:38.456560210Z" level=info msg="StopPodSandbox for \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\"" Dec 13 13:34:38.456659 containerd[1540]: time="2024-12-13T13:34:38.456622690Z" level=info msg="TearDown network for sandbox \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\" successfully" Dec 13 13:34:38.456747 containerd[1540]: time="2024-12-13T13:34:38.456657413Z" level=info msg="StopPodSandbox for \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\" returns successfully" Dec 13 13:34:38.456776 containerd[1540]: time="2024-12-13T13:34:38.456762706Z" level=info msg="RemovePodSandbox for \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\"" Dec 13 13:34:38.456796 containerd[1540]: time="2024-12-13T13:34:38.456777934Z" level=info msg="Forcibly stopping sandbox \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\"" Dec 13 13:34:38.456868 containerd[1540]: time="2024-12-13T13:34:38.456805811Z" level=info msg="TearDown network for sandbox \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\" successfully" Dec 13 13:34:38.457843 containerd[1540]: time="2024-12-13T13:34:38.457828824Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.457879 containerd[1540]: time="2024-12-13T13:34:38.457848682Z" level=info msg="RemovePodSandbox \"38cd3764dd11dd796f443558251e2d1af831465a7cccb775e4565ed9f9116d58\" returns successfully" Dec 13 13:34:38.458127 containerd[1540]: time="2024-12-13T13:34:38.457984806Z" level=info msg="StopPodSandbox for \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\"" Dec 13 13:34:38.458127 containerd[1540]: time="2024-12-13T13:34:38.458027292Z" level=info msg="TearDown network for sandbox \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\" successfully" Dec 13 13:34:38.458127 containerd[1540]: time="2024-12-13T13:34:38.458033922Z" level=info msg="StopPodSandbox for \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\" returns successfully" Dec 13 13:34:38.458260 containerd[1540]: time="2024-12-13T13:34:38.458161806Z" level=info msg="RemovePodSandbox for \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\"" Dec 13 13:34:38.458260 containerd[1540]: time="2024-12-13T13:34:38.458171531Z" level=info msg="Forcibly stopping sandbox \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\"" Dec 13 13:34:38.458260 containerd[1540]: time="2024-12-13T13:34:38.458201236Z" level=info msg="TearDown network for sandbox \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\" successfully" Dec 13 13:34:38.459289 containerd[1540]: time="2024-12-13T13:34:38.459274419Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.459316 containerd[1540]: time="2024-12-13T13:34:38.459294832Z" level=info msg="RemovePodSandbox \"2c10e14c46ce3ec5b997f10afea52ab894699d44aedc127a1a0a5d8708fe4938\" returns successfully" Dec 13 13:34:38.459449 containerd[1540]: time="2024-12-13T13:34:38.459435984Z" level=info msg="StopPodSandbox for \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\"" Dec 13 13:34:38.459486 containerd[1540]: time="2024-12-13T13:34:38.459475633Z" level=info msg="TearDown network for sandbox \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\" successfully" Dec 13 13:34:38.459486 containerd[1540]: time="2024-12-13T13:34:38.459483099Z" level=info msg="StopPodSandbox for \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\" returns successfully" Dec 13 13:34:38.459709 containerd[1540]: time="2024-12-13T13:34:38.459679451Z" level=info msg="RemovePodSandbox for \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\"" Dec 13 13:34:38.459709 containerd[1540]: time="2024-12-13T13:34:38.459690427Z" level=info msg="Forcibly stopping sandbox \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\"" Dec 13 13:34:38.459758 containerd[1540]: time="2024-12-13T13:34:38.459719116Z" level=info msg="TearDown network for sandbox \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\" successfully" Dec 13 13:34:38.460783 containerd[1540]: time="2024-12-13T13:34:38.460767785Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.460807 containerd[1540]: time="2024-12-13T13:34:38.460791045Z" level=info msg="RemovePodSandbox \"08f37eb07a7cae4dcef392eb4a16301e2226023737c359d66ef523c02f90517b\" returns successfully" Dec 13 13:34:38.461010 containerd[1540]: time="2024-12-13T13:34:38.460918614Z" level=info msg="StopPodSandbox for \"6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b\"" Dec 13 13:34:38.461010 containerd[1540]: time="2024-12-13T13:34:38.460958347Z" level=info msg="TearDown network for sandbox \"6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b\" successfully" Dec 13 13:34:38.461010 containerd[1540]: time="2024-12-13T13:34:38.460964468Z" level=info msg="StopPodSandbox for \"6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b\" returns successfully" Dec 13 13:34:38.461226 containerd[1540]: time="2024-12-13T13:34:38.461208353Z" level=info msg="RemovePodSandbox for \"6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b\"" Dec 13 13:34:38.461277 containerd[1540]: time="2024-12-13T13:34:38.461227384Z" level=info msg="Forcibly stopping sandbox \"6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b\"" Dec 13 13:34:38.461277 containerd[1540]: time="2024-12-13T13:34:38.461266033Z" level=info msg="TearDown network for sandbox \"6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b\" successfully" Dec 13 13:34:38.462552 containerd[1540]: time="2024-12-13T13:34:38.462538269Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.462591 containerd[1540]: time="2024-12-13T13:34:38.462559558Z" level=info msg="RemovePodSandbox \"6646c23a004b554d48a3e5ff96163dfcc04eb006aca168bfb75f4b706a895e8b\" returns successfully" Dec 13 13:34:38.462721 containerd[1540]: time="2024-12-13T13:34:38.462708252Z" level=info msg="StopPodSandbox for \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\"" Dec 13 13:34:38.462762 containerd[1540]: time="2024-12-13T13:34:38.462746636Z" level=info msg="TearDown network for sandbox \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\" successfully" Dec 13 13:34:38.462762 containerd[1540]: time="2024-12-13T13:34:38.462754168Z" level=info msg="StopPodSandbox for \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\" returns successfully" Dec 13 13:34:38.463018 containerd[1540]: time="2024-12-13T13:34:38.462973075Z" level=info msg="RemovePodSandbox for \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\"" Dec 13 13:34:38.463804 containerd[1540]: time="2024-12-13T13:34:38.463052090Z" level=info msg="Forcibly stopping sandbox \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\"" Dec 13 13:34:38.463804 containerd[1540]: time="2024-12-13T13:34:38.463090278Z" level=info msg="TearDown network for sandbox \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\" successfully" Dec 13 13:34:38.464362 containerd[1540]: time="2024-12-13T13:34:38.464349763Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.464409 containerd[1540]: time="2024-12-13T13:34:38.464401710Z" level=info msg="RemovePodSandbox \"cdd50c9de02b78e81c8129b164d9902caeccd7a5f6cf84cfbc17155ba9403a75\" returns successfully" Dec 13 13:34:38.464585 containerd[1540]: time="2024-12-13T13:34:38.464573480Z" level=info msg="StopPodSandbox for \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\"" Dec 13 13:34:38.464617 containerd[1540]: time="2024-12-13T13:34:38.464610877Z" level=info msg="TearDown network for sandbox \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\" successfully" Dec 13 13:34:38.464635 containerd[1540]: time="2024-12-13T13:34:38.464616706Z" level=info msg="StopPodSandbox for \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\" returns successfully" Dec 13 13:34:38.465263 containerd[1540]: time="2024-12-13T13:34:38.464736502Z" level=info msg="RemovePodSandbox for \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\"" Dec 13 13:34:38.465263 containerd[1540]: time="2024-12-13T13:34:38.464752147Z" level=info msg="Forcibly stopping sandbox \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\"" Dec 13 13:34:38.465263 containerd[1540]: time="2024-12-13T13:34:38.464799251Z" level=info msg="TearDown network for sandbox \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\" successfully" Dec 13 13:34:38.465926 containerd[1540]: time="2024-12-13T13:34:38.465915602Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.465983 containerd[1540]: time="2024-12-13T13:34:38.465975122Z" level=info msg="RemovePodSandbox \"0dabb5f4b1ea298c91cbde131d95492354ec96009d66dab860652a5ca3101e2a\" returns successfully" Dec 13 13:34:38.466140 containerd[1540]: time="2024-12-13T13:34:38.466126498Z" level=info msg="StopPodSandbox for \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\"" Dec 13 13:34:38.466178 containerd[1540]: time="2024-12-13T13:34:38.466167253Z" level=info msg="TearDown network for sandbox \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\" successfully" Dec 13 13:34:38.466178 containerd[1540]: time="2024-12-13T13:34:38.466175588Z" level=info msg="StopPodSandbox for \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\" returns successfully" Dec 13 13:34:38.466319 containerd[1540]: time="2024-12-13T13:34:38.466304251Z" level=info msg="RemovePodSandbox for \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\"" Dec 13 13:34:38.466377 containerd[1540]: time="2024-12-13T13:34:38.466320157Z" level=info msg="Forcibly stopping sandbox \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\"" Dec 13 13:34:38.466417 containerd[1540]: time="2024-12-13T13:34:38.466395664Z" level=info msg="TearDown network for sandbox \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\" successfully" Dec 13 13:34:38.467458 containerd[1540]: time="2024-12-13T13:34:38.467443713Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.467488 containerd[1540]: time="2024-12-13T13:34:38.467464266Z" level=info msg="RemovePodSandbox \"f2395e64add4f1db0b85782980bdcdaf9699429f7df719adb1fd3c24cddc5aab\" returns successfully" Dec 13 13:34:38.467678 containerd[1540]: time="2024-12-13T13:34:38.467621774Z" level=info msg="StopPodSandbox for \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\"" Dec 13 13:34:38.467678 containerd[1540]: time="2024-12-13T13:34:38.467661409Z" level=info msg="TearDown network for sandbox \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\" successfully" Dec 13 13:34:38.467678 containerd[1540]: time="2024-12-13T13:34:38.467667824Z" level=info msg="StopPodSandbox for \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\" returns successfully" Dec 13 13:34:38.467937 containerd[1540]: time="2024-12-13T13:34:38.467905919Z" level=info msg="RemovePodSandbox for \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\"" Dec 13 13:34:38.467937 containerd[1540]: time="2024-12-13T13:34:38.467917658Z" level=info msg="Forcibly stopping sandbox \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\"" Dec 13 13:34:38.468010 containerd[1540]: time="2024-12-13T13:34:38.467966941Z" level=info msg="TearDown network for sandbox \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\" successfully" Dec 13 13:34:38.469046 containerd[1540]: time="2024-12-13T13:34:38.469031624Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.469070 containerd[1540]: time="2024-12-13T13:34:38.469053199Z" level=info msg="RemovePodSandbox \"d999310267a45ca66e6bc057e30db6969005bfdb7c001c4e75972ba37074eb74\" returns successfully" Dec 13 13:34:38.469327 containerd[1540]: time="2024-12-13T13:34:38.469231336Z" level=info msg="StopPodSandbox for \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\"" Dec 13 13:34:38.469327 containerd[1540]: time="2024-12-13T13:34:38.469280954Z" level=info msg="TearDown network for sandbox \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\" successfully" Dec 13 13:34:38.469327 containerd[1540]: time="2024-12-13T13:34:38.469287310Z" level=info msg="StopPodSandbox for \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\" returns successfully" Dec 13 13:34:38.469622 containerd[1540]: time="2024-12-13T13:34:38.469486113Z" level=info msg="RemovePodSandbox for \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\"" Dec 13 13:34:38.469622 containerd[1540]: time="2024-12-13T13:34:38.469496880Z" level=info msg="Forcibly stopping sandbox \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\"" Dec 13 13:34:38.469622 containerd[1540]: time="2024-12-13T13:34:38.469565755Z" level=info msg="TearDown network for sandbox \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\" successfully" Dec 13 13:34:38.470633 containerd[1540]: time="2024-12-13T13:34:38.470618576Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:38.470710 containerd[1540]: time="2024-12-13T13:34:38.470685400Z" level=info msg="RemovePodSandbox \"16d6a944a79031ebe24aa26205b00e27648ec9751873f338c25830e1dd94743b\" returns successfully"
Dec 13 13:34:38.471013 containerd[1540]: time="2024-12-13T13:34:38.470949141Z" level=info msg="StopPodSandbox for \"3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd\""
Dec 13 13:34:38.471013 containerd[1540]: time="2024-12-13T13:34:38.470986274Z" level=info msg="TearDown network for sandbox \"3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd\" successfully"
Dec 13 13:34:38.471013 containerd[1540]: time="2024-12-13T13:34:38.470992191Z" level=info msg="StopPodSandbox for \"3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd\" returns successfully"
Dec 13 13:34:38.471279 containerd[1540]: time="2024-12-13T13:34:38.471170637Z" level=info msg="RemovePodSandbox for \"3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd\""
Dec 13 13:34:38.471279 containerd[1540]: time="2024-12-13T13:34:38.471181270Z" level=info msg="Forcibly stopping sandbox \"3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd\""
Dec 13 13:34:38.471279 containerd[1540]: time="2024-12-13T13:34:38.471228136Z" level=info msg="TearDown network for sandbox \"3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd\" successfully"
Dec 13 13:34:38.472379 containerd[1540]: time="2024-12-13T13:34:38.472330833Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:38.472379 containerd[1540]: time="2024-12-13T13:34:38.472350347Z" level=info msg="RemovePodSandbox \"3c1544c76631d5bce0905b816ace4eddc61c4ba221c37d7d5aa352c6931e51bd\" returns successfully"
Dec 13 13:34:42.840714 kubelet[2835]: I1213 13:34:42.840598 2835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 13:34:52.431452 systemd[1]: Started sshd@7-139.178.70.100:22-139.178.89.65:53558.service - OpenSSH per-connection server daemon (139.178.89.65:53558).
Dec 13 13:34:52.527656 sshd[5900]: Accepted publickey for core from 139.178.89.65 port 53558 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:34:52.528873 sshd-session[5900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:34:52.531865 systemd-logind[1520]: New session 10 of user core.
Dec 13 13:34:52.541529 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 13:34:53.005434 sshd[5902]: Connection closed by 139.178.89.65 port 53558
Dec 13 13:34:53.006170 sshd-session[5900]: pam_unix(sshd:session): session closed for user core
Dec 13 13:34:53.007807 systemd-logind[1520]: Session 10 logged out. Waiting for processes to exit.
Dec 13 13:34:53.008685 systemd[1]: sshd@7-139.178.70.100:22-139.178.89.65:53558.service: Deactivated successfully.
Dec 13 13:34:53.009755 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 13:34:53.010480 systemd-logind[1520]: Removed session 10.
Dec 13 13:34:58.019634 systemd[1]: Started sshd@8-139.178.70.100:22-139.178.89.65:49424.service - OpenSSH per-connection server daemon (139.178.89.65:49424).
Dec 13 13:34:58.072321 sshd[5936]: Accepted publickey for core from 139.178.89.65 port 49424 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:34:58.073117 sshd-session[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:34:58.075758 systemd-logind[1520]: New session 11 of user core.
Dec 13 13:34:58.080314 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 13:34:58.177742 sshd[5938]: Connection closed by 139.178.89.65 port 49424
Dec 13 13:34:58.178066 sshd-session[5936]: pam_unix(sshd:session): session closed for user core
Dec 13 13:34:58.180183 systemd-logind[1520]: Session 11 logged out. Waiting for processes to exit.
Dec 13 13:34:58.180396 systemd[1]: sshd@8-139.178.70.100:22-139.178.89.65:49424.service: Deactivated successfully.
Dec 13 13:34:58.181490 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 13:34:58.182042 systemd-logind[1520]: Removed session 11.
Dec 13 13:35:03.189781 systemd[1]: Started sshd@9-139.178.70.100:22-139.178.89.65:49426.service - OpenSSH per-connection server daemon (139.178.89.65:49426).
Dec 13 13:35:03.234065 sshd[5959]: Accepted publickey for core from 139.178.89.65 port 49426 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:35:03.234820 sshd-session[5959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:35:03.237816 systemd-logind[1520]: New session 12 of user core.
Dec 13 13:35:03.246329 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 13:35:03.352984 sshd[5961]: Connection closed by 139.178.89.65 port 49426
Dec 13 13:35:03.354333 sshd-session[5959]: pam_unix(sshd:session): session closed for user core
Dec 13 13:35:03.358839 systemd[1]: sshd@9-139.178.70.100:22-139.178.89.65:49426.service: Deactivated successfully.
Dec 13 13:35:03.360941 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 13:35:03.361569 systemd-logind[1520]: Session 12 logged out. Waiting for processes to exit.
Dec 13 13:35:03.367412 systemd[1]: Started sshd@10-139.178.70.100:22-139.178.89.65:49434.service - OpenSSH per-connection server daemon (139.178.89.65:49434).
Dec 13 13:35:03.368534 systemd-logind[1520]: Removed session 12.
Dec 13 13:35:03.396158 sshd[5973]: Accepted publickey for core from 139.178.89.65 port 49434 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:35:03.397249 sshd-session[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:35:03.399694 systemd-logind[1520]: New session 13 of user core.
Dec 13 13:35:03.404330 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 13:35:03.519633 sshd[5975]: Connection closed by 139.178.89.65 port 49434
Dec 13 13:35:03.520475 sshd-session[5973]: pam_unix(sshd:session): session closed for user core
Dec 13 13:35:03.528131 systemd[1]: sshd@10-139.178.70.100:22-139.178.89.65:49434.service: Deactivated successfully.
Dec 13 13:35:03.530280 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 13:35:03.534294 systemd-logind[1520]: Session 13 logged out. Waiting for processes to exit.
Dec 13 13:35:03.542768 systemd[1]: Started sshd@11-139.178.70.100:22-139.178.89.65:49450.service - OpenSSH per-connection server daemon (139.178.89.65:49450).
Dec 13 13:35:03.544810 systemd-logind[1520]: Removed session 13.
Dec 13 13:35:03.598564 sshd[5986]: Accepted publickey for core from 139.178.89.65 port 49450 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:35:03.599517 sshd-session[5986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:35:03.603717 systemd-logind[1520]: New session 14 of user core.
Dec 13 13:35:03.607346 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 13:35:03.703913 sshd[5988]: Connection closed by 139.178.89.65 port 49450
Dec 13 13:35:03.704281 sshd-session[5986]: pam_unix(sshd:session): session closed for user core
Dec 13 13:35:03.705986 systemd-logind[1520]: Session 14 logged out. Waiting for processes to exit.
Dec 13 13:35:03.706545 systemd[1]: sshd@11-139.178.70.100:22-139.178.89.65:49450.service: Deactivated successfully.
Dec 13 13:35:03.708655 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 13:35:03.710760 systemd-logind[1520]: Removed session 14.
Dec 13 13:35:08.714215 systemd[1]: Started sshd@12-139.178.70.100:22-139.178.89.65:51342.service - OpenSSH per-connection server daemon (139.178.89.65:51342).
Dec 13 13:35:08.761252 sshd[6017]: Accepted publickey for core from 139.178.89.65 port 51342 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:35:08.762053 sshd-session[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:35:08.765540 systemd-logind[1520]: New session 15 of user core.
Dec 13 13:35:08.769407 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 13:35:08.860677 sshd[6022]: Connection closed by 139.178.89.65 port 51342
Dec 13 13:35:08.861050 sshd-session[6017]: pam_unix(sshd:session): session closed for user core
Dec 13 13:35:08.863603 systemd[1]: sshd@12-139.178.70.100:22-139.178.89.65:51342.service: Deactivated successfully.
Dec 13 13:35:08.864859 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 13:35:08.865412 systemd-logind[1520]: Session 15 logged out. Waiting for processes to exit.
Dec 13 13:35:08.865946 systemd-logind[1520]: Removed session 15.
Dec 13 13:35:13.868470 systemd[1]: Started sshd@13-139.178.70.100:22-139.178.89.65:51344.service - OpenSSH per-connection server daemon (139.178.89.65:51344).
Dec 13 13:35:13.934469 sshd[6057]: Accepted publickey for core from 139.178.89.65 port 51344 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:35:13.935276 sshd-session[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:35:13.937710 systemd-logind[1520]: New session 16 of user core.
Dec 13 13:35:13.940321 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 13:35:14.047032 sshd[6059]: Connection closed by 139.178.89.65 port 51344
Dec 13 13:35:14.049762 sshd-session[6057]: pam_unix(sshd:session): session closed for user core
Dec 13 13:35:14.051923 systemd-logind[1520]: Session 16 logged out. Waiting for processes to exit.
Dec 13 13:35:14.051986 systemd[1]: sshd@13-139.178.70.100:22-139.178.89.65:51344.service: Deactivated successfully.
Dec 13 13:35:14.053010 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 13:35:14.053652 systemd-logind[1520]: Removed session 16.
Dec 13 13:35:19.057344 systemd[1]: Started sshd@14-139.178.70.100:22-139.178.89.65:48670.service - OpenSSH per-connection server daemon (139.178.89.65:48670).
Dec 13 13:35:19.128427 sshd[6070]: Accepted publickey for core from 139.178.89.65 port 48670 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:35:19.129144 sshd-session[6070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:35:19.132483 systemd-logind[1520]: New session 17 of user core.
Dec 13 13:35:19.137443 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 13:35:19.243859 sshd[6072]: Connection closed by 139.178.89.65 port 48670
Dec 13 13:35:19.246574 sshd-session[6070]: pam_unix(sshd:session): session closed for user core
Dec 13 13:35:19.250856 systemd[1]: sshd@14-139.178.70.100:22-139.178.89.65:48670.service: Deactivated successfully.
Dec 13 13:35:19.252031 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 13:35:19.252446 systemd-logind[1520]: Session 17 logged out. Waiting for processes to exit.
Dec 13 13:35:19.258416 systemd[1]: Started sshd@15-139.178.70.100:22-139.178.89.65:48672.service - OpenSSH per-connection server daemon (139.178.89.65:48672).
Dec 13 13:35:19.259453 systemd-logind[1520]: Removed session 17.
Dec 13 13:35:19.301486 sshd[6083]: Accepted publickey for core from 139.178.89.65 port 48672 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:35:19.302133 sshd-session[6083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:35:19.304520 systemd-logind[1520]: New session 18 of user core.
Dec 13 13:35:19.311351 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 13:35:19.669407 sshd[6085]: Connection closed by 139.178.89.65 port 48672
Dec 13 13:35:19.670590 sshd-session[6083]: pam_unix(sshd:session): session closed for user core
Dec 13 13:35:19.675733 systemd[1]: sshd@15-139.178.70.100:22-139.178.89.65:48672.service: Deactivated successfully.
Dec 13 13:35:19.676787 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 13:35:19.678245 systemd-logind[1520]: Session 18 logged out. Waiting for processes to exit.
Dec 13 13:35:19.680452 systemd[1]: Started sshd@16-139.178.70.100:22-139.178.89.65:48686.service - OpenSSH per-connection server daemon (139.178.89.65:48686).
Dec 13 13:35:19.681612 systemd-logind[1520]: Removed session 18.
Dec 13 13:35:19.975299 sshd[6098]: Accepted publickey for core from 139.178.89.65 port 48686 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:35:19.977066 sshd-session[6098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:35:19.982312 systemd-logind[1520]: New session 19 of user core.
Dec 13 13:35:19.991402 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 13:35:21.237589 sshd[6100]: Connection closed by 139.178.89.65 port 48686
Dec 13 13:35:21.247138 systemd[1]: Started sshd@17-139.178.70.100:22-139.178.89.65:48700.service - OpenSSH per-connection server daemon (139.178.89.65:48700).
Dec 13 13:35:21.255715 sshd-session[6098]: pam_unix(sshd:session): session closed for user core
Dec 13 13:35:21.267390 systemd[1]: sshd@16-139.178.70.100:22-139.178.89.65:48686.service: Deactivated successfully.
Dec 13 13:35:21.270925 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 13:35:21.274599 systemd-logind[1520]: Session 19 logged out. Waiting for processes to exit.
Dec 13 13:35:21.280205 systemd-logind[1520]: Removed session 19.
Dec 13 13:35:21.332819 sshd[6112]: Accepted publickey for core from 139.178.89.65 port 48700 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:35:21.334584 sshd-session[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:35:21.340153 systemd-logind[1520]: New session 20 of user core.
Dec 13 13:35:21.345311 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 13:35:21.743793 sshd[6118]: Connection closed by 139.178.89.65 port 48700
Dec 13 13:35:21.744631 sshd-session[6112]: pam_unix(sshd:session): session closed for user core
Dec 13 13:35:21.750939 systemd[1]: sshd@17-139.178.70.100:22-139.178.89.65:48700.service: Deactivated successfully.
Dec 13 13:35:21.752555 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 13:35:21.754395 systemd-logind[1520]: Session 20 logged out. Waiting for processes to exit.
Dec 13 13:35:21.759529 systemd[1]: Started sshd@18-139.178.70.100:22-139.178.89.65:48708.service - OpenSSH per-connection server daemon (139.178.89.65:48708).
Dec 13 13:35:21.760870 systemd-logind[1520]: Removed session 20.
Dec 13 13:35:21.798049 sshd[6127]: Accepted publickey for core from 139.178.89.65 port 48708 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:35:21.799020 sshd-session[6127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:35:21.802211 systemd-logind[1520]: New session 21 of user core.
Dec 13 13:35:21.812410 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 13:35:21.947374 sshd[6129]: Connection closed by 139.178.89.65 port 48708
Dec 13 13:35:21.948186 sshd-session[6127]: pam_unix(sshd:session): session closed for user core
Dec 13 13:35:21.950813 systemd[1]: sshd@18-139.178.70.100:22-139.178.89.65:48708.service: Deactivated successfully.
Dec 13 13:35:21.951909 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 13:35:21.952336 systemd-logind[1520]: Session 21 logged out. Waiting for processes to exit.
Dec 13 13:35:21.953047 systemd-logind[1520]: Removed session 21.
Dec 13 13:35:26.958109 systemd[1]: Started sshd@19-139.178.70.100:22-139.178.89.65:48718.service - OpenSSH per-connection server daemon (139.178.89.65:48718).
Dec 13 13:35:27.002635 sshd[6146]: Accepted publickey for core from 139.178.89.65 port 48718 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:35:27.003362 sshd-session[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:35:27.006347 systemd-logind[1520]: New session 22 of user core.
Dec 13 13:35:27.011318 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 13:35:27.118812 sshd[6148]: Connection closed by 139.178.89.65 port 48718
Dec 13 13:35:27.119589 sshd-session[6146]: pam_unix(sshd:session): session closed for user core
Dec 13 13:35:27.121753 systemd[1]: sshd@19-139.178.70.100:22-139.178.89.65:48718.service: Deactivated successfully.
Dec 13 13:35:27.122864 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 13:35:27.123321 systemd-logind[1520]: Session 22 logged out. Waiting for processes to exit.
Dec 13 13:35:27.123836 systemd-logind[1520]: Removed session 22.
Dec 13 13:35:32.130344 systemd[1]: Started sshd@20-139.178.70.100:22-139.178.89.65:45334.service - OpenSSH per-connection server daemon (139.178.89.65:45334).
Dec 13 13:35:32.199712 sshd[6159]: Accepted publickey for core from 139.178.89.65 port 45334 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:35:32.200398 sshd-session[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:35:32.203210 systemd-logind[1520]: New session 23 of user core.
Dec 13 13:35:32.208644 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 13:35:32.307146 sshd[6161]: Connection closed by 139.178.89.65 port 45334
Dec 13 13:35:32.307552 sshd-session[6159]: pam_unix(sshd:session): session closed for user core
Dec 13 13:35:32.309280 systemd[1]: sshd@20-139.178.70.100:22-139.178.89.65:45334.service: Deactivated successfully.
Dec 13 13:35:32.310688 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 13:35:32.311921 systemd-logind[1520]: Session 23 logged out. Waiting for processes to exit.
Dec 13 13:35:32.312740 systemd-logind[1520]: Removed session 23.
Dec 13 13:35:37.322510 systemd[1]: Started sshd@21-139.178.70.100:22-139.178.89.65:45342.service - OpenSSH per-connection server daemon (139.178.89.65:45342).
Dec 13 13:35:37.453783 sshd[6174]: Accepted publickey for core from 139.178.89.65 port 45342 ssh2: RSA SHA256:EtsYlJgVzDm1FrvTYRxrQroWNl1HlwlEKuzBOqrtR6c
Dec 13 13:35:37.457587 sshd-session[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:35:37.465437 systemd-logind[1520]: New session 24 of user core.
Dec 13 13:35:37.468923 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 13:35:37.624094 sshd[6192]: Connection closed by 139.178.89.65 port 45342
Dec 13 13:35:37.624647 sshd-session[6174]: pam_unix(sshd:session): session closed for user core
Dec 13 13:35:37.626371 systemd[1]: sshd@21-139.178.70.100:22-139.178.89.65:45342.service: Deactivated successfully.
Dec 13 13:35:37.627462 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 13:35:37.628197 systemd-logind[1520]: Session 24 logged out. Waiting for processes to exit.
Dec 13 13:35:37.629083 systemd-logind[1520]: Removed session 24.