Dec 13 01:40:56.723114 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:40:56.723130 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:40:56.723136 kernel: Disabled fast string operations
Dec 13 01:40:56.723140 kernel: BIOS-provided physical RAM map:
Dec 13 01:40:56.723144 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Dec 13 01:40:56.723148 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Dec 13 01:40:56.723154 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Dec 13 01:40:56.723158 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Dec 13 01:40:56.723162 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Dec 13 01:40:56.723166 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Dec 13 01:40:56.723171 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Dec 13 01:40:56.723175 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Dec 13 01:40:56.723179 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Dec 13 01:40:56.723183 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Dec 13 01:40:56.723189 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Dec 13 01:40:56.723194 kernel: NX (Execute Disable) protection: active
Dec 13 01:40:56.723199 kernel: APIC: Static calls initialized
Dec 13 01:40:56.723203 kernel: SMBIOS 2.7 present.
Dec 13 01:40:56.723208 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Dec 13 01:40:56.723213 kernel: vmware: hypercall mode: 0x00
Dec 13 01:40:56.723218 kernel: Hypervisor detected: VMware
Dec 13 01:40:56.723222 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Dec 13 01:40:56.723228 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Dec 13 01:40:56.723233 kernel: vmware: using clock offset of 2471440218 ns
Dec 13 01:40:56.723238 kernel: tsc: Detected 3408.000 MHz processor
Dec 13 01:40:56.723243 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:40:56.723248 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:40:56.723253 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Dec 13 01:40:56.723258 kernel: total RAM covered: 3072M
Dec 13 01:40:56.723263 kernel: Found optimal setting for mtrr clean up
Dec 13 01:40:56.723268 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Dec 13 01:40:56.723274 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Dec 13 01:40:56.723279 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:40:56.723284 kernel: Using GB pages for direct mapping
Dec 13 01:40:56.723288 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:40:56.723293 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Dec 13 01:40:56.723297 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Dec 13 01:40:56.723302 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Dec 13 01:40:56.723307 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Dec 13 01:40:56.723312 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Dec 13 01:40:56.723319 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Dec 13 01:40:56.723324 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Dec 13 01:40:56.723329 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Dec 13 01:40:56.723334 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Dec 13 01:40:56.723339 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Dec 13 01:40:56.723345 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Dec 13 01:40:56.723350 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Dec 13 01:40:56.723355 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Dec 13 01:40:56.723360 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Dec 13 01:40:56.723365 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Dec 13 01:40:56.723370 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Dec 13 01:40:56.723375 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Dec 13 01:40:56.723380 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Dec 13 01:40:56.723385 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Dec 13 01:40:56.723390 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Dec 13 01:40:56.723396 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Dec 13 01:40:56.723401 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Dec 13 01:40:56.723406 kernel: system APIC only can use physical flat
Dec 13 01:40:56.723411 kernel: APIC: Switched APIC routing to: physical flat
Dec 13 01:40:56.723416 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 01:40:56.723421 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 01:40:56.723425 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 01:40:56.723430 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 01:40:56.723435 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 01:40:56.723441 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 01:40:56.723446 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 01:40:56.723451 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 01:40:56.723456 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Dec 13 01:40:56.723460 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Dec 13 01:40:56.723465 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Dec 13 01:40:56.723470 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Dec 13 01:40:56.723475 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Dec 13 01:40:56.723480 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Dec 13 01:40:56.723485 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Dec 13 01:40:56.723491 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Dec 13 01:40:56.723496 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Dec 13 01:40:56.723500 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Dec 13 01:40:56.723505 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Dec 13 01:40:56.723510 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Dec 13 01:40:56.723515 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Dec 13 01:40:56.723520 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Dec 13 01:40:56.723525 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Dec 13 01:40:56.723529 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Dec 13 01:40:56.723534 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Dec 13 01:40:56.723539 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Dec 13 01:40:56.723545 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Dec 13 01:40:56.723550 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Dec 13 01:40:56.723555 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Dec 13 01:40:56.723560 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Dec 13 01:40:56.723564 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Dec 13 01:40:56.723569 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Dec 13 01:40:56.723574 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Dec 13 01:40:56.723579 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Dec 13 01:40:56.723584 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Dec 13 01:40:56.723589 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Dec 13 01:40:56.723604 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Dec 13 01:40:56.723610 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Dec 13 01:40:56.723615 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Dec 13 01:40:56.723620 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Dec 13 01:40:56.723625 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Dec 13 01:40:56.723630 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Dec 13 01:40:56.723635 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Dec 13 01:40:56.723640 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Dec 13 01:40:56.723645 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Dec 13 01:40:56.723649 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Dec 13 01:40:56.723656 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Dec 13 01:40:56.723661 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Dec 13 01:40:56.723666 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Dec 13 01:40:56.723671 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Dec 13 01:40:56.723676 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Dec 13 01:40:56.723681 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Dec 13 01:40:56.723686 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Dec 13 01:40:56.723690 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Dec 13 01:40:56.723695 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Dec 13 01:40:56.723700 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Dec 13 01:40:56.723706 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Dec 13 01:40:56.723711 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Dec 13 01:40:56.723716 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Dec 13 01:40:56.723725 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Dec 13 01:40:56.723731 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Dec 13 01:40:56.723737 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Dec 13 01:40:56.723742 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Dec 13 01:40:56.723747 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Dec 13 01:40:56.723753 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Dec 13 01:40:56.723758 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Dec 13 01:40:56.723764 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Dec 13 01:40:56.723769 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Dec 13 01:40:56.723774 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Dec 13 01:40:56.723779 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Dec 13 01:40:56.723785 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Dec 13 01:40:56.723790 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Dec 13 01:40:56.723795 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Dec 13 01:40:56.723800 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Dec 13 01:40:56.723805 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Dec 13 01:40:56.723812 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Dec 13 01:40:56.723817 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Dec 13 01:40:56.723822 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Dec 13 01:40:56.723828 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Dec 13 01:40:56.723833 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Dec 13 01:40:56.723838 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Dec 13 01:40:56.723843 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Dec 13 01:40:56.723848 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Dec 13 01:40:56.723854 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Dec 13 01:40:56.723859 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Dec 13 01:40:56.723865 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Dec 13 01:40:56.723870 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Dec 13 01:40:56.723876 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Dec 13 01:40:56.723881 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Dec 13 01:40:56.723886 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Dec 13 01:40:56.723891 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Dec 13 01:40:56.723896 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Dec 13 01:40:56.723901 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Dec 13 01:40:56.723907 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Dec 13 01:40:56.723912 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Dec 13 01:40:56.723918 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Dec 13 01:40:56.723924 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Dec 13 01:40:56.723929 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Dec 13 01:40:56.723934 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Dec 13 01:40:56.723939 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Dec 13 01:40:56.723944 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Dec 13 01:40:56.723950 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Dec 13 01:40:56.723955 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Dec 13 01:40:56.723960 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Dec 13 01:40:56.723965 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Dec 13 01:40:56.723971 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Dec 13 01:40:56.723976 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Dec 13 01:40:56.723982 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Dec 13 01:40:56.723987 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Dec 13 01:40:56.723992 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Dec 13 01:40:56.723997 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Dec 13 01:40:56.724002 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Dec 13 01:40:56.724007 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Dec 13 01:40:56.724013 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Dec 13 01:40:56.724018 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Dec 13 01:40:56.724024 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Dec 13 01:40:56.724029 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Dec 13 01:40:56.724035 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Dec 13 01:40:56.724040 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Dec 13 01:40:56.724045 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Dec 13 01:40:56.724050 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Dec 13 01:40:56.724055 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Dec 13 01:40:56.724060 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Dec 13 01:40:56.724065 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Dec 13 01:40:56.724071 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Dec 13 01:40:56.724077 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Dec 13 01:40:56.724082 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Dec 13 01:40:56.724087 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Dec 13 01:40:56.724093 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 01:40:56.724098 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 01:40:56.724103 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Dec 13 01:40:56.724109 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Dec 13 01:40:56.724114 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Dec 13 01:40:56.724120 kernel: Zone ranges:
Dec 13 01:40:56.724125 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:40:56.724132 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Dec 13 01:40:56.724137 kernel: Normal empty
Dec 13 01:40:56.724142 kernel: Movable zone start for each node
Dec 13 01:40:56.724148 kernel: Early memory node ranges
Dec 13 01:40:56.724153 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Dec 13 01:40:56.724158 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Dec 13 01:40:56.724164 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Dec 13 01:40:56.724169 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Dec 13 01:40:56.724174 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:40:56.724180 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Dec 13 01:40:56.724186 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Dec 13 01:40:56.724191 kernel: ACPI: PM-Timer IO Port: 0x1008
Dec 13 01:40:56.724197 kernel: system APIC only can use physical flat
Dec 13 01:40:56.724202 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Dec 13 01:40:56.724207 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Dec 13 01:40:56.724213 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Dec 13 01:40:56.724218 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Dec 13 01:40:56.724223 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Dec 13 01:40:56.724228 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Dec 13 01:40:56.724235 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Dec 13 01:40:56.724240 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Dec 13 01:40:56.724245 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Dec 13 01:40:56.724250 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Dec 13 01:40:56.724255 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Dec 13 01:40:56.724261 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Dec 13 01:40:56.724266 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Dec 13 01:40:56.724271 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Dec 13 01:40:56.724276 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Dec 13 01:40:56.724283 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Dec 13 01:40:56.724288 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Dec 13 01:40:56.724293 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Dec 13 01:40:56.724299 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Dec 13 01:40:56.724304 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Dec 13 01:40:56.724309 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Dec 13 01:40:56.724314 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Dec 13 01:40:56.724319 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Dec 13 01:40:56.724325 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Dec 13 01:40:56.724330 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Dec 13 01:40:56.724336 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Dec 13 01:40:56.724341 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Dec 13 01:40:56.724347 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Dec 13 01:40:56.724352 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Dec 13 01:40:56.724357 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Dec 13 01:40:56.724362 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Dec 13 01:40:56.724368 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Dec 13 01:40:56.724373 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Dec 13 01:40:56.724378 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Dec 13 01:40:56.724383 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Dec 13 01:40:56.724390 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Dec 13 01:40:56.724395 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Dec 13 01:40:56.724400 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Dec 13 01:40:56.724405 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Dec 13 01:40:56.724411 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Dec 13 01:40:56.724416 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Dec 13 01:40:56.724421 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Dec 13 01:40:56.724426 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Dec 13 01:40:56.724431 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Dec 13 01:40:56.724438 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Dec 13 01:40:56.724443 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Dec 13 01:40:56.724448 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Dec 13 01:40:56.724453 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Dec 13 01:40:56.724459 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Dec 13 01:40:56.724464 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Dec 13 01:40:56.724469 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Dec 13 01:40:56.724474 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Dec 13 01:40:56.724483 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Dec 13 01:40:56.724489 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Dec 13 01:40:56.724495 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Dec 13 01:40:56.724500 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Dec 13 01:40:56.724506 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Dec 13 01:40:56.724511 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Dec 13 01:40:56.724516 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Dec 13 01:40:56.724521 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Dec 13 01:40:56.724527 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Dec 13 01:40:56.724537 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Dec 13 01:40:56.724543 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Dec 13 01:40:56.724550 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Dec 13 01:40:56.724556 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Dec 13 01:40:56.724561 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Dec 13 01:40:56.724566 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Dec 13 01:40:56.724572 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Dec 13 01:40:56.724577 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Dec 13 01:40:56.724582 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Dec 13 01:40:56.724587 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Dec 13 01:40:56.724593 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Dec 13 01:40:56.724609 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Dec 13 01:40:56.724616 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Dec 13 01:40:56.724621 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Dec 13 01:40:56.724627 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Dec 13 01:40:56.724632 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Dec 13 01:40:56.724637 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Dec 13 01:40:56.724643 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Dec 13 01:40:56.724648 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Dec 13 01:40:56.724653 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Dec 13 01:40:56.724658 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Dec 13 01:40:56.724665 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Dec 13 01:40:56.724670 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Dec 13 01:40:56.724676 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Dec 13 01:40:56.724681 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Dec 13 01:40:56.724686 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Dec 13 01:40:56.724691 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Dec 13 01:40:56.724696 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Dec 13 01:40:56.724702 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Dec 13 01:40:56.724707 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Dec 13 01:40:56.724712 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Dec 13 01:40:56.724718 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Dec 13 01:40:56.724746 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Dec 13 01:40:56.724751 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Dec 13 01:40:56.724757 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Dec 13 01:40:56.724762 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Dec 13 01:40:56.724768 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Dec 13 01:40:56.724773 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Dec 13 01:40:56.724778 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Dec 13 01:40:56.724784 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Dec 13 01:40:56.724789 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Dec 13 01:40:56.724795 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Dec 13 01:40:56.724801 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Dec 13 01:40:56.724806 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Dec 13 01:40:56.724811 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Dec 13 01:40:56.724832 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Dec 13 01:40:56.724837 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Dec 13 01:40:56.724843 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Dec 13 01:40:56.724848 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Dec 13 01:40:56.724853 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Dec 13 01:40:56.724860 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Dec 13 01:40:56.724865 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Dec 13 01:40:56.724870 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Dec 13 01:40:56.724875 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Dec 13 01:40:56.724881 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Dec 13 01:40:56.724886 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Dec 13 01:40:56.724891 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Dec 13 01:40:56.724897 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Dec 13 01:40:56.724902 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Dec 13 01:40:56.724907 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Dec 13 01:40:56.724914 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Dec 13 01:40:56.724919 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Dec 13 01:40:56.724925 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Dec 13 01:40:56.724930 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Dec 13 01:40:56.724935 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Dec 13 01:40:56.724940 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Dec 13 01:40:56.724946 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Dec 13 01:40:56.724951 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:40:56.724956 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Dec 13 01:40:56.724963 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:40:56.724968 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Dec 13 01:40:56.724974 kernel: TSC deadline timer available
Dec 13 01:40:56.724979 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Dec 13 01:40:56.724984 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Dec 13 01:40:56.724990 kernel: Booting paravirtualized kernel on VMware hypervisor
Dec 13 01:40:56.724995 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:40:56.725000 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Dec 13 01:40:56.725006 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Dec 13 01:40:56.725011 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Dec 13 01:40:56.725017 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Dec 13 01:40:56.725023 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Dec 13 01:40:56.725028 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Dec 13 01:40:56.725033 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Dec 13 01:40:56.725039 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Dec 13 01:40:56.725051 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Dec 13 01:40:56.725057 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Dec 13 01:40:56.725063 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Dec 13 01:40:56.725069 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Dec 13 01:40:56.725075 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Dec 13 01:40:56.725080 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Dec 13 01:40:56.725086 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Dec 13 01:40:56.725091 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Dec 13 01:40:56.725097 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Dec 13 01:40:56.725103 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Dec 13 01:40:56.725108 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Dec 13 01:40:56.725115 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:40:56.725122 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:40:56.725127 kernel: random: crng init done
Dec 13 01:40:56.725133 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Dec 13 01:40:56.725138 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Dec 13 01:40:56.725144 kernel: printk: log_buf_len min size: 262144 bytes
Dec 13 01:40:56.725150 kernel: printk: log_buf_len: 1048576 bytes
Dec 13 01:40:56.725155 kernel: printk: early log buf free: 239648(91%)
Dec 13 01:40:56.725161 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:40:56.725168 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:40:56.725174 kernel: Fallback order for Node 0: 0
Dec 13 01:40:56.725179 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Dec 13 01:40:56.725185 kernel: Policy zone: DMA32
Dec 13 01:40:56.725191 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:40:56.725197 kernel: Memory: 1936352K/2096628K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 160016K reserved, 0K cma-reserved)
Dec 13 01:40:56.725204 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Dec 13 01:40:56.725210 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:40:56.725215 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:40:56.725221 kernel: Dynamic Preempt: voluntary
Dec 13 01:40:56.725227 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:40:56.725233 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:40:56.725240 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Dec 13 01:40:56.725245 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:40:56.725251 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:40:56.725258 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:40:56.725264 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:40:56.725269 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Dec 13 01:40:56.725275 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Dec 13 01:40:56.725281 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Dec 13 01:40:56.725286 kernel: Console: colour VGA+ 80x25
Dec 13 01:40:56.725292 kernel: printk: console [tty0] enabled
Dec 13 01:40:56.725298 kernel: printk: console [ttyS0] enabled
Dec 13 01:40:56.725304 kernel: ACPI: Core revision 20230628
Dec 13 01:40:56.725311 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Dec 13 01:40:56.725316 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:40:56.725322 kernel: x2apic enabled
Dec 13 01:40:56.725328 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:40:56.725333 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:40:56.725339 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Dec 13 01:40:56.725345 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Dec 13 01:40:56.725351 kernel: Disabled fast string operations
Dec 13 01:40:56.725357 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 01:40:56.725362 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 01:40:56.725369 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:40:56.725375 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 01:40:56.725380 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 01:40:56.725386 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Dec 13 01:40:56.725392 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:40:56.725398 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Dec 13 01:40:56.725403 kernel: RETBleed: Mitigation: Enhanced IBRS
Dec 13 01:40:56.725409 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:40:56.725416 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:40:56.725421 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:40:56.725427 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 01:40:56.725433 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 01:40:56.725439 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:40:56.725444 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:40:56.725450 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:40:56.725456 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:40:56.725462 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:40:56.725468 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:40:56.725474 kernel: pid_max: default: 131072 minimum: 1024
Dec 13 01:40:56.725480 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:40:56.725485 kernel: landlock: Up and running.
Dec 13 01:40:56.725491 kernel: SELinux: Initializing.
Dec 13 01:40:56.725497 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:40:56.725503 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:40:56.725509 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Dec 13 01:40:56.725514 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Dec 13 01:40:56.725521 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Dec 13 01:40:56.725527 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Dec 13 01:40:56.725533 kernel: Performance Events: Skylake events, core PMU driver.
Dec 13 01:40:56.725538 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Dec 13 01:40:56.725544 kernel: core: CPUID marked event: 'instructions' unavailable
Dec 13 01:40:56.725549 kernel: core: CPUID marked event: 'bus cycles' unavailable
Dec 13 01:40:56.725555 kernel: core: CPUID marked event: 'cache references' unavailable
Dec 13 01:40:56.725560 kernel: core: CPUID marked event: 'cache misses' unavailable
Dec 13 01:40:56.725567 kernel: core: CPUID marked event: 'branch instructions' unavailable
Dec 13 01:40:56.725572 kernel: core: CPUID marked event: 'branch misses' unavailable
Dec 13 01:40:56.725578 kernel: ... version: 1
Dec 13 01:40:56.725584 kernel: ... bit width: 48
Dec 13 01:40:56.725589 kernel: ... generic registers: 4
Dec 13 01:40:56.725602 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:40:56.725609 kernel: ... max period: 000000007fffffff
Dec 13 01:40:56.725615 kernel: ... fixed-purpose events: 0
Dec 13 01:40:56.725620 kernel: ... event mask: 000000000000000f
Dec 13 01:40:56.725626 kernel: signal: max sigframe size: 1776
Dec 13 01:40:56.725633 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:40:56.725640 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:40:56.725646 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 01:40:56.725651 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:40:56.725657 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:40:56.725663 kernel: .... node #0, CPUs: #1
Dec 13 01:40:56.725669 kernel: Disabled fast string operations
Dec 13 01:40:56.725674 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1
Dec 13 01:40:56.725680 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 01:40:56.725686 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:40:56.725692 kernel: smpboot: Max logical packages: 128
Dec 13 01:40:56.725698 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Dec 13 01:40:56.725703 kernel: devtmpfs: initialized
Dec 13 01:40:56.725709 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:40:56.725715 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Dec 13 01:40:56.725721 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:40:56.725727 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Dec 13 01:40:56.725733 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:40:56.725739 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:40:56.725745 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:40:56.725751 kernel: audit: type=2000 audit(1734054054.066:1): state=initialized audit_enabled=0 res=1
Dec 13 01:40:56.725756 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:40:56.725762 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:40:56.725771 kernel: cpuidle: using governor menu
Dec 13 01:40:56.725778 kernel: Simple Boot Flag at 0x36 set to 0x80
Dec 13 01:40:56.725795 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:40:56.725813 kernel: dca service started, version 1.12.1
Dec 13 01:40:56.725825 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
Dec 13 01:40:56.725831 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:40:56.725842 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:40:56.725848 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:40:56.725854 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:40:56.725860 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:40:56.725865 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:40:56.725871 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:40:56.725877 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:40:56.725884 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:40:56.725890 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:40:56.725895 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:40:56.725901 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Dec 13 01:40:56.725907 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:40:56.725912 kernel: ACPI: Interpreter enabled
Dec 13 01:40:56.725918 kernel: ACPI: PM: (supports S0 S1 S5)
Dec 13 01:40:56.725924 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:40:56.725930 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:40:56.725936 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:40:56.725942 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Dec 13 01:40:56.725948 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Dec 13 01:40:56.726024 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:40:56.726078 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Dec 13 01:40:56.726126 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Dec 13 01:40:56.726135 kernel: PCI host bridge to bus 0000:00
Dec 13 01:40:56.726185 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:40:56.726229 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Dec 13 01:40:56.726272 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:40:56.726315 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:40:56.726357 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Dec 13 01:40:56.726399 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Dec 13 01:40:56.726455 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000
Dec 13 01:40:56.726511 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400
Dec 13 01:40:56.726566 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100
Dec 13 01:40:56.726689 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a
Dec 13 01:40:56.726745 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f]
Dec 13 01:40:56.726803 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 13 01:40:56.726853 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 13 01:40:56.726914 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 13 01:40:56.726964 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 13 01:40:56.727017 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000
Dec 13 01:40:56.727066 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Dec 13 01:40:56.727114 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Dec 13 01:40:56.727166 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000
Dec 13 01:40:56.727216 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf]
Dec 13 01:40:56.727267 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit]
Dec 13 01:40:56.727318 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000
Dec 13 01:40:56.727367 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f]
Dec 13 01:40:56.727414 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref]
Dec 13 01:40:56.727484 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff]
Dec 13 01:40:56.727745 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
Dec 13 01:40:56.727798 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:40:56.727851 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
Dec 13 01:40:56.727904 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.727953 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.728006 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.728056 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.728110 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.728162 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.728239 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.728297 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.728350 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.728398 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.728451 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.728531 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.728617 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.728669 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.728721 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.728770 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.728822 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.728875 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.728929 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.728978 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.729030 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.729079 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.729135 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.729184 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.729237 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.729286 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.729338 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.729387 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.729440 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.729492 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.729562 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.729638 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.729692 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.729741 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.729794 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.729846 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.729913 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.729962 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.730017 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.730066 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.730118 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.730169 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.730222 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.730270 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.730324 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.730373 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.730425 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.730477 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.730561 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.730624 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.730696 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.730761 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.730813 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.730865 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.730945 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.731011 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.731063 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.731112 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.731163 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.731212 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.731269 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.731319 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.731371 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
Dec 13 01:40:56.731420 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.731471 kernel: pci_bus 0000:01: extended config space not accessible
Dec 13 01:40:56.731531 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 01:40:56.731587 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 01:40:56.731634 kernel: acpiphp: Slot [32] registered
Dec 13 01:40:56.731641 kernel: acpiphp: Slot [33] registered
Dec 13 01:40:56.731647 kernel: acpiphp: Slot [34] registered
Dec 13 01:40:56.731652 kernel: acpiphp: Slot [35] registered
Dec 13 01:40:56.731658 kernel: acpiphp: Slot [36] registered
Dec 13 01:40:56.731664 kernel: acpiphp: Slot [37] registered
Dec 13 01:40:56.731670 kernel: acpiphp: Slot [38] registered
Dec 13 01:40:56.731675 kernel: acpiphp: Slot [39] registered
Dec 13 01:40:56.731683 kernel: acpiphp: Slot [40] registered
Dec 13 01:40:56.731689 kernel: acpiphp: Slot [41] registered
Dec 13 01:40:56.731695 kernel: acpiphp: Slot [42] registered
Dec 13 01:40:56.731701 kernel: acpiphp: Slot [43] registered
Dec 13 01:40:56.731706 kernel: acpiphp: Slot [44] registered
Dec 13 01:40:56.731712 kernel: acpiphp: Slot [45] registered
Dec 13 01:40:56.731717 kernel: acpiphp: Slot [46] registered
Dec 13 01:40:56.731723 kernel: acpiphp: Slot [47] registered
Dec 13 01:40:56.731729 kernel: acpiphp: Slot [48] registered
Dec 13 01:40:56.731735 kernel: acpiphp: Slot [49] registered
Dec 13 01:40:56.731741 kernel: acpiphp: Slot [50] registered
Dec 13 01:40:56.731747 kernel: acpiphp: Slot [51] registered
Dec 13 01:40:56.731752 kernel: acpiphp: Slot [52] registered
Dec 13 01:40:56.731758 kernel: acpiphp: Slot [53] registered
Dec 13 01:40:56.731764 kernel: acpiphp: Slot [54] registered
Dec 13 01:40:56.731769 kernel: acpiphp: Slot [55] registered
Dec 13 01:40:56.731775 kernel: acpiphp: Slot [56] registered
Dec 13 01:40:56.731781 kernel: acpiphp: Slot [57] registered
Dec 13 01:40:56.731786 kernel: acpiphp: Slot [58] registered
Dec 13 01:40:56.731793 kernel: acpiphp: Slot [59] registered
Dec 13 01:40:56.731799 kernel: acpiphp: Slot [60] registered
Dec 13 01:40:56.731804 kernel: acpiphp: Slot [61] registered
Dec 13 01:40:56.731810 kernel: acpiphp: Slot [62] registered
Dec 13 01:40:56.731816 kernel: acpiphp: Slot [63] registered
Dec 13 01:40:56.731867 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Dec 13 01:40:56.731916 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Dec 13 01:40:56.731963 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Dec 13 01:40:56.732013 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Dec 13 01:40:56.732059 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
Dec 13 01:40:56.732107 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode)
Dec 13 01:40:56.732154 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
Dec 13 01:40:56.732202 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
Dec 13 01:40:56.732249 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode)
Dec 13 01:40:56.732302 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700
Dec 13 01:40:56.732354 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007]
Dec 13 01:40:56.732403 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit]
Dec 13 01:40:56.732452 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Dec 13 01:40:56.732500 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Dec 13 01:40:56.732549 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Dec 13 01:40:56.732617 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Dec 13 01:40:56.735056 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Dec 13 01:40:56.735111 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Dec 13 01:40:56.735166 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Dec 13 01:40:56.735216 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Dec 13 01:40:56.735264 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Dec 13 01:40:56.735313 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Dec 13 01:40:56.735362 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Dec 13 01:40:56.735411 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Dec 13 01:40:56.735459 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Dec 13 01:40:56.735507 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Dec 13 01:40:56.735560 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Dec 13 01:40:56.735916 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Dec 13 01:40:56.735971 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Dec 13 01:40:56.736023 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Dec 13 01:40:56.736072 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Dec 13 01:40:56.736120 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Dec 13 01:40:56.736173 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Dec 13 01:40:56.736221 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Dec 13 01:40:56.736270 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Dec 13 01:40:56.736320 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Dec 13 01:40:56.736368 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Dec 13 01:40:56.736416 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Dec 13 01:40:56.736468 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Dec 13 01:40:56.736521 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Dec 13 01:40:56.736570 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Dec 13 01:40:56.736650 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000
Dec 13 01:40:56.736703 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
Dec 13 01:40:56.736771 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
Dec 13 01:40:56.736823 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
Dec 13 01:40:56.736892 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f]
Dec 13 01:40:56.736942 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Dec 13 01:40:56.736992 kernel: pci 0000:0b:00.0: supports D1 D2
Dec 13 01:40:56.737041 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:40:56.737091 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Dec 13 01:40:56.737141 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Dec 13 01:40:56.737189 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Dec 13 01:40:56.737238 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Dec 13 01:40:56.737308 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Dec 13 01:40:56.737358 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Dec 13 01:40:56.737407 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Dec 13 01:40:56.737456 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Dec 13 01:40:56.737511 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Dec 13 01:40:56.737562 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Dec 13 01:40:56.737744 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Dec 13 01:40:56.737796 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Dec 13 01:40:56.737851 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Dec 13 01:40:56.737901 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Dec 13 01:40:56.737950 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Dec 13 01:40:56.738024 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Dec 13 01:40:56.738079 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Dec 13 01:40:56.738129 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Dec 13 01:40:56.738179 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Dec 13 01:40:56.738232 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Dec 13 01:40:56.738281 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Dec 13 01:40:56.738331 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Dec 13 01:40:56.738380 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Dec 13 01:40:56.738430 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Dec 13 01:40:56.738484 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Dec 13 01:40:56.738535 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Dec 13 01:40:56.738584 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Dec 13 01:40:56.739664 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Dec 13 01:40:56.739718 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Dec 13 01:40:56.739770 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Dec 13 01:40:56.739819 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Dec 13 01:40:56.739869 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Dec 13 01:40:56.739919 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Dec 13 01:40:56.739969 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Dec 13 01:40:56.740019 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Dec 13 01:40:56.740074 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Dec 13 01:40:56.740124 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Dec 13 01:40:56.740174 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Dec 13 01:40:56.740224 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Dec 13 01:40:56.740274 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Dec 13 01:40:56.740325 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Dec 13 01:40:56.740374 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Dec 13 01:40:56.740424 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Dec 13 01:40:56.740476 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Dec 13 01:40:56.740526 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Dec 13 01:40:56.740576 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Dec 13 01:40:56.742692 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Dec 13 01:40:56.742751 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Dec 13 01:40:56.742806 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Dec 13 01:40:56.742857 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Dec 13 01:40:56.742906 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Dec 13 01:40:56.742961 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Dec 13 01:40:56.743011 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Dec 13 01:40:56.743060 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Dec 13 01:40:56.743110 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Dec 13 01:40:56.743159 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Dec 13 01:40:56.743208 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Dec 13 01:40:56.743256 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Dec 13 01:40:56.743307 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Dec 13 01:40:56.743359 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Dec 13 01:40:56.743409 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Dec 13 01:40:56.743458 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Dec 13 01:40:56.743508 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Dec 13 01:40:56.744902 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Dec 13 01:40:56.744970 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Dec 13 01:40:56.745041 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Dec 13 01:40:56.745095 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Dec 13 01:40:56.745145 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Dec 13 01:40:56.745196 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Dec 13 01:40:56.745261 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Dec 13 01:40:56.745324 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Dec 13 01:40:56.745377 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Dec 13 01:40:56.745427 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Dec 13 01:40:56.745476 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Dec 13 01:40:56.745531 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Dec 13 01:40:56.745580 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Dec 13 01:40:56.745674 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Dec 13 01:40:56.745727 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Dec 13 01:40:56.745775 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Dec 13 01:40:56.745823 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Dec 13 01:40:56.745832 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Dec 13 01:40:56.745838 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Dec 13 01:40:56.745844 kernel: ACPI: PCI: Interrupt link LNKB disabled
Dec 13 01:40:56.745852 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:40:56.745858 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Dec 13 01:40:56.745864 kernel: iommu: Default domain type: Translated
Dec 13 01:40:56.745870 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:40:56.745876 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:40:56.745882 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:40:56.745887 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Dec 13 01:40:56.745893 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Dec 13 01:40:56.745941 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Dec 13 01:40:56.745992 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Dec 13 01:40:56.746040 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:40:56.746049 kernel: vgaarb: loaded
Dec 13 01:40:56.746055 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Dec 13 01:40:56.746061 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Dec 13 01:40:56.746067 kernel: clocksource: Switched to clocksource tsc-early
Dec 13 01:40:56.746073 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:40:56.746079 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:40:56.746085 kernel: pnp: PnP ACPI init
Dec 13 01:40:56.746138 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Dec 13 01:40:56.746184 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Dec 13 01:40:56.746227 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Dec 13 01:40:56.746274 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Dec 13 01:40:56.746324 kernel: pnp 00:06: [dma 2]
Dec 13 01:40:56.746371 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Dec 13 01:40:56.746435 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Dec 13 01:40:56.747979 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Dec 13 01:40:56.747990 kernel: pnp: PnP ACPI: found 8 devices
Dec 13 01:40:56.747997 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:40:56.748003 kernel: NET: Registered PF_INET protocol family
Dec 13 01:40:56.748009 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:40:56.748015 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 01:40:56.748021 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:40:56.748029 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:40:56.748035 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 01:40:56.748041 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 01:40:56.748047 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:40:56.748053 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:40:56.748058 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:40:56.748064 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:40:56.748805 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Dec 13 01:40:56.748882 kernel: pci
0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 13 01:40:56.748936 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 13 01:40:56.749006 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 13 01:40:56.749059 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 13 01:40:56.749110 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Dec 13 01:40:56.749162 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Dec 13 01:40:56.749217 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Dec 13 01:40:56.749268 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Dec 13 01:40:56.749328 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Dec 13 01:40:56.749380 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Dec 13 01:40:56.749432 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Dec 13 01:40:56.749482 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Dec 13 01:40:56.749537 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Dec 13 01:40:56.749587 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Dec 13 01:40:56.750729 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Dec 13 01:40:56.750782 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Dec 13 01:40:56.750832 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Dec 13 01:40:56.750882 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Dec 13 01:40:56.750935 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Dec 13 01:40:56.750986 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Dec 13 01:40:56.751036 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Dec 13 01:40:56.751086 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Dec 13 01:40:56.751136 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Dec 13 01:40:56.751185 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Dec 13 01:40:56.751237 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.751286 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.751335 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.751383 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.751433 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.751482 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.751532 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.751580 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.751997 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.752051 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.752102 kernel: pci 0000:00:16.3: BAR 13: no space for [io 
size 0x1000] Dec 13 01:40:56.752152 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.752201 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.752249 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.752298 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.752347 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.752399 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.752448 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.752497 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.752545 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.752635 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.752723 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.752771 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.752819 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.752872 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.752921 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.752970 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753018 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.753068 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753116 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.753165 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753213 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.753264 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753313 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.753361 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753410 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.753459 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753512 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.753561 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753689 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.753745 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753795 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.753843 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753891 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.753939 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753987 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754036 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.754084 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754132 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.754183 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754231 kernel: pci 0000:00:18.3: BAR 13: no space 
for [io size 0x1000] Dec 13 01:40:56.754279 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754328 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.754376 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754425 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.754473 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754521 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.754570 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754627 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.754679 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754727 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.754776 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754824 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.754873 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754921 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.754969 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755018 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.755067 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755118 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.755166 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755215 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.755264 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755312 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.755361 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755409 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.755458 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755506 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.755554 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755619 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.755670 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755719 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.755767 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755816 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.755865 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755914 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 01:40:56.755964 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Dec 13 01:40:56.756013 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Dec 13 01:40:56.756336 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Dec 13 01:40:56.756391 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 13 01:40:56.756445 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Dec 13 01:40:56.756499 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Dec 
13 01:40:56.756550 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Dec 13 01:40:56.756811 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Dec 13 01:40:56.757848 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Dec 13 01:40:56.757904 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Dec 13 01:40:56.757959 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Dec 13 01:40:56.758009 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Dec 13 01:40:56.758059 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Dec 13 01:40:56.758109 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Dec 13 01:40:56.758158 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Dec 13 01:40:56.758207 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Dec 13 01:40:56.758256 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Dec 13 01:40:56.758305 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Dec 13 01:40:56.758353 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Dec 13 01:40:56.758402 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Dec 13 01:40:56.758453 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Dec 13 01:40:56.758534 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Dec 13 01:40:56.758586 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Dec 13 01:40:56.758685 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Dec 13 01:40:56.758735 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Dec 13 01:40:56.758788 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Dec 13 01:40:56.758838 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Dec 13 01:40:56.758888 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Dec 13 01:40:56.758938 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Dec 13 01:40:56.758987 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Dec 13 01:40:56.759037 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Dec 13 01:40:56.759087 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Dec 13 01:40:56.759139 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Dec 13 01:40:56.759191 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Dec 13 01:40:56.759241 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Dec 13 01:40:56.759294 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Dec 13 01:40:56.759344 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Dec 13 01:40:56.759396 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Dec 13 01:40:56.759446 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Dec 13 01:40:56.759495 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Dec 13 01:40:56.759545 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Dec 13 01:40:56.759602 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Dec 13 01:40:56.759655 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Dec 13 01:40:56.759707 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Dec 13 01:40:56.759759 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Dec 13 01:40:56.759809 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Dec 13 01:40:56.759858 kernel: pci 0000:00:16.3: 
bridge window [mem 0xfc800000-0xfc8fffff] Dec 13 01:40:56.759908 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Dec 13 01:40:56.759957 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Dec 13 01:40:56.760007 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Dec 13 01:40:56.760057 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Dec 13 01:40:56.760108 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Dec 13 01:40:56.760157 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Dec 13 01:40:56.760208 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Dec 13 01:40:56.760260 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Dec 13 01:40:56.760310 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Dec 13 01:40:56.760360 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Dec 13 01:40:56.760410 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Dec 13 01:40:56.760460 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Dec 13 01:40:56.760510 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 13 01:40:56.760561 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Dec 13 01:40:56.761812 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Dec 13 01:40:56.761869 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Dec 13 01:40:56.761940 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 13 01:40:56.761991 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Dec 13 01:40:56.762041 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Dec 13 01:40:56.762090 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Dec 13 01:40:56.762139 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Dec 13 01:40:56.762189 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Dec 13 01:40:56.762239 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Dec 13 01:40:56.762288 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Dec 13 01:40:56.762338 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Dec 13 01:40:56.762390 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Dec 13 01:40:56.762444 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Dec 13 01:40:56.762499 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 13 01:40:56.762549 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Dec 13 01:40:56.762731 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Dec 13 01:40:56.762785 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 13 01:40:56.762835 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Dec 13 01:40:56.762884 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Dec 13 01:40:56.762934 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Dec 13 01:40:56.763017 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Dec 13 01:40:56.763070 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Dec 13 01:40:56.763118 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Dec 13 01:40:56.763168 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Dec 13 01:40:56.763217 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Dec 13 01:40:56.763266 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Dec 13 01:40:56.763316 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Dec 13 01:40:56.763366 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Dec 13 01:40:56.763415 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Dec 13 01:40:56.763464 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Dec 13 01:40:56.763517 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Dec 13 01:40:56.763567 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Dec 13 01:40:56.765206 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Dec 13 01:40:56.765264 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Dec 13 01:40:56.765316 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Dec 13 01:40:56.765366 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Dec 13 01:40:56.765417 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Dec 13 01:40:56.765468 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Dec 13 01:40:56.765519 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Dec 13 01:40:56.765569 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Dec 13 01:40:56.765644 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Dec 13 01:40:56.765697 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Dec 13 01:40:56.765747 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Dec 13 01:40:56.765798 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Dec 13 01:40:56.765847 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Dec 13 01:40:56.765897 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Dec 13 01:40:56.765947 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Dec 13 01:40:56.765997 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Dec 13 01:40:56.766047 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Dec 13 01:40:56.766101 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Dec 13 01:40:56.766150 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Dec 13 01:40:56.766200 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Dec 13 01:40:56.766250 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Dec 13 01:40:56.766295 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Dec 13 01:40:56.766339 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Dec 13 01:40:56.766382 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Dec 13 01:40:56.766425 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Dec 13 01:40:56.766474 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Dec 13 01:40:56.766523 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Dec 13 01:40:56.766569 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 13 01:40:56.766629 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Dec 13 01:40:56.766675 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Dec 13 01:40:56.766721 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Dec 13 01:40:56.766765 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Dec 13 01:40:56.766810 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Dec 13 01:40:56.766864 kernel: pci_bus 0000:03: resource 0 [io 
0x4000-0x4fff] Dec 13 01:40:56.766911 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Dec 13 01:40:56.766956 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Dec 13 01:40:56.767005 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Dec 13 01:40:56.767051 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Dec 13 01:40:56.767097 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Dec 13 01:40:56.767146 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Dec 13 01:40:56.767195 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Dec 13 01:40:56.767240 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Dec 13 01:40:56.767289 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Dec 13 01:40:56.767335 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Dec 13 01:40:56.767385 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Dec 13 01:40:56.767431 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Dec 13 01:40:56.767486 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Dec 13 01:40:56.767533 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Dec 13 01:40:56.767583 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Dec 13 01:40:56.767658 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Dec 13 01:40:56.767712 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Dec 13 01:40:56.767766 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Dec 13 01:40:56.767823 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Dec 13 01:40:56.767871 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Dec 13 01:40:56.767916 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Dec 13 01:40:56.767965 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Dec 13 01:40:56.768011 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Dec 13 01:40:56.768056 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Dec 13 01:40:56.768108 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Dec 13 01:40:56.768159 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Dec 13 01:40:56.768208 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Dec 13 01:40:56.768257 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Dec 13 01:40:56.768304 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Dec 13 01:40:56.768353 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Dec 13 01:40:56.768400 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Dec 13 01:40:56.768452 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Dec 13 01:40:56.768504 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Dec 13 01:40:56.768569 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Dec 13 01:40:56.768629 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Dec 13 01:40:56.768680 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Dec 13 01:40:56.768725 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 13 01:40:56.768794 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Dec 13 01:40:56.768843 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Dec 13 01:40:56.768888 
kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 13 01:40:56.768938 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Dec 13 01:40:56.768983 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Dec 13 01:40:56.769028 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Dec 13 01:40:56.769078 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Dec 13 01:40:56.769128 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Dec 13 01:40:56.769173 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Dec 13 01:40:56.769222 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Dec 13 01:40:56.769268 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 13 01:40:56.769317 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Dec 13 01:40:56.769362 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 13 01:40:56.769413 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Dec 13 01:40:56.769459 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Dec 13 01:40:56.769509 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Dec 13 01:40:56.769555 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Dec 13 01:40:56.769660 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Dec 13 01:40:56.769708 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Dec 13 01:40:56.769760 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Dec 13 01:40:56.769806 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Dec 13 01:40:56.769850 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Dec 13 01:40:56.769899 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Dec 13 01:40:56.769944 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Dec 13 01:40:56.769988 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Dec 13 01:40:56.770042 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Dec 13 01:40:56.770090 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Dec 13 01:40:56.770139 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Dec 13 01:40:56.770185 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Dec 13 01:40:56.770234 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Dec 13 01:40:56.770280 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Dec 13 01:40:56.770329 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Dec 13 01:40:56.770377 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Dec 13 01:40:56.770428 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Dec 13 01:40:56.770474 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Dec 13 01:40:56.770522 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Dec 13 01:40:56.770568 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Dec 13 01:40:56.770757 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 01:40:56.770770 kernel: PCI: CLS 32 bytes, default 64 Dec 13 01:40:56.770777 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 01:40:56.770784 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Dec 13 
01:40:56.770790 kernel: clocksource: Switched to clocksource tsc Dec 13 01:40:56.770796 kernel: Initialise system trusted keyrings Dec 13 01:40:56.770803 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 01:40:56.770809 kernel: Key type asymmetric registered Dec 13 01:40:56.770815 kernel: Asymmetric key parser 'x509' registered Dec 13 01:40:56.770821 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:40:56.770829 kernel: io scheduler mq-deadline registered Dec 13 01:40:56.770835 kernel: io scheduler kyber registered Dec 13 01:40:56.770841 kernel: io scheduler bfq registered Dec 13 01:40:56.770893 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Dec 13 01:40:56.770944 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.770995 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Dec 13 01:40:56.771045 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.771095 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Dec 13 01:40:56.771147 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.771197 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Dec 13 01:40:56.771246 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.771296 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Dec 13 01:40:56.771345 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.771394 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Dec 13 01:40:56.771446 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.771499 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Dec 13 01:40:56.771549 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.771606 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Dec 13 01:40:56.771657 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.771710 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Dec 13 01:40:56.771760 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.771810 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Dec 13 01:40:56.771859 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.771909 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Dec 13 01:40:56.771959 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.772008 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Dec 13 01:40:56.772060 kernel: pcieport 
0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.772111 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Dec 13 01:40:56.772160 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.772209 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Dec 13 01:40:56.772259 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.772311 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Dec 13 01:40:56.772360 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.772411 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Dec 13 01:40:56.772461 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.772529 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Dec 13 01:40:56.772599 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.772662 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Dec 13 01:40:56.772712 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.772762 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Dec 13 01:40:56.772813 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.772863 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Dec 13 01:40:56.772914 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.772968 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Dec 13 01:40:56.773018 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.773069 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Dec 13 01:40:56.773119 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.773170 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Dec 13 01:40:56.773220 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.773272 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Dec 13 01:40:56.773322 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.773372 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Dec 13 01:40:56.773421 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.773470 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Dec 13 01:40:56.773519 kernel: pcieport 
0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.773571 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Dec 13 01:40:56.773683 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.773736 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Dec 13 01:40:56.773786 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.773836 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Dec 13 01:40:56.773890 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.773939 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Dec 13 01:40:56.773990 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.774040 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Dec 13 01:40:56.774092 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.774142 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Dec 13 01:40:56.774195 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:40:56.774204 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:40:56.774211 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:40:56.774218 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:40:56.774225 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Dec 13 01:40:56.774231 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:40:56.774237 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:40:56.774288 kernel: rtc_cmos 00:01: registered as rtc0 Dec 13 01:40:56.774336 kernel: rtc_cmos 00:01: setting system clock to 2024-12-13T01:40:56 UTC (1734054056) Dec 13 01:40:56.774381 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Dec 13 01:40:56.774390 kernel: intel_pstate: CPU model not supported Dec 13 01:40:56.774397 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:40:56.774403 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:40:56.774409 kernel: Segment Routing with IPv6 Dec 13 01:40:56.774415 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:40:56.774423 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:40:56.774430 kernel: Key type dns_resolver registered Dec 13 01:40:56.774436 kernel: IPI shorthand broadcast: enabled Dec 13 01:40:56.774442 kernel: sched_clock: Marking stable (916004282, 224888887)->(1202951873, -62058704) Dec 13 01:40:56.774449 kernel: registered taskstats version 1 Dec 13 01:40:56.774455 kernel: Loading compiled-in X.509 certificates Dec 13 01:40:56.774463 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:40:56.774469 kernel: Key type .fscrypt registered Dec 13 01:40:56.774475 kernel: Key type fscrypt-provisioning registered Dec 13 01:40:56.774482 
kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 01:40:56.774489 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:40:56.774495 kernel: ima: No architecture policies found Dec 13 01:40:56.774501 kernel: clk: Disabling unused clocks Dec 13 01:40:56.774508 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:40:56.774514 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:40:56.774520 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:40:56.774526 kernel: Run /init as init process Dec 13 01:40:56.774532 kernel: with arguments: Dec 13 01:40:56.774540 kernel: /init Dec 13 01:40:56.774546 kernel: with environment: Dec 13 01:40:56.774553 kernel: HOME=/ Dec 13 01:40:56.774559 kernel: TERM=linux Dec 13 01:40:56.774565 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:40:56.774573 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:40:56.774580 systemd[1]: Detected virtualization vmware. Dec 13 01:40:56.774587 systemd[1]: Detected architecture x86-64. Dec 13 01:40:56.774601 systemd[1]: Running in initrd. Dec 13 01:40:56.774617 systemd[1]: No hostname configured, using default hostname. Dec 13 01:40:56.774623 systemd[1]: Hostname set to <localhost>. Dec 13 01:40:56.774630 systemd[1]: Initializing machine ID from random generator. Dec 13 01:40:56.774636 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:40:56.774643 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:40:56.774649 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:40:56.774656 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:40:56.774665 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:40:56.774672 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:40:56.774678 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:40:56.774686 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:40:56.774693 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:40:56.774700 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:40:56.774706 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:40:56.774713 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:40:56.774720 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:40:56.774726 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:40:56.774733 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:40:56.774739 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:40:56.774745 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:40:56.774752 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:40:56.774758 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:40:56.774766 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:40:56.774772 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:40:56.774779 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:40:56.774785 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:40:56.774791 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:40:56.774798 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:40:56.774805 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:40:56.774811 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:40:56.774817 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:40:56.774825 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:40:56.774831 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:40:56.774852 systemd-journald[215]: Collecting audit messages is disabled. Dec 13 01:40:56.774868 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:40:56.774876 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:40:56.774882 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:40:56.774889 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:40:56.774895 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:40:56.774903 kernel: Bridge firewalling registered Dec 13 01:40:56.774911 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:40:56.774917 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:40:56.774924 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:40:56.774931 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:40:56.774937 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:40:56.774944 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:40:56.774950 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:40:56.774975 systemd-journald[215]: Journal started Dec 13 01:40:56.774990 systemd-journald[215]: Runtime Journal (/run/log/journal/593dd62661084148bb27634387cd8128) is 4.8M, max 38.6M, 33.8M free. Dec 13 01:40:56.726793 systemd-modules-load[216]: Inserted module 'overlay' Dec 13 01:40:56.749311 systemd-modules-load[216]: Inserted module 'br_netfilter' Dec 13 01:40:56.777613 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:40:56.784719 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:40:56.785227 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:40:56.786678 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:40:56.788639 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 01:40:56.791512 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:40:56.796971 dracut-cmdline[244]: dracut-dracut-053 Dec 13 01:40:56.798156 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:40:56.799374 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:40:56.814187 systemd-resolved[250]: Positive Trust Anchors: Dec 13 01:40:56.814194 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:40:56.814216 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:40:56.816240 systemd-resolved[250]: Defaulting to hostname 'linux'. Dec 13 01:40:56.818078 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:40:56.818232 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:40:56.847621 kernel: SCSI subsystem initialized Dec 13 01:40:56.853607 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:40:56.860613 kernel: iscsi: registered transport (tcp) Dec 13 01:40:56.873932 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:40:56.873966 kernel: QLogic iSCSI HBA Driver Dec 13 01:40:56.893767 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:40:56.897703 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:40:56.912763 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:40:56.912807 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:40:56.913919 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:40:56.945629 kernel: raid6: avx2x4 gen() 51684 MB/s Dec 13 01:40:56.961610 kernel: raid6: avx2x2 gen() 53371 MB/s Dec 13 01:40:56.978875 kernel: raid6: avx2x1 gen() 44304 MB/s Dec 13 01:40:56.978911 kernel: raid6: using algorithm avx2x2 gen() 53371 MB/s Dec 13 01:40:56.996821 kernel: raid6: .... xor() 31075 MB/s, rmw enabled Dec 13 01:40:56.996845 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:40:57.010605 kernel: xor: automatically using best checksumming function avx Dec 13 01:40:57.109617 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:40:57.115248 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:40:57.120674 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:40:57.127884 systemd-udevd[432]: Using default interface naming scheme 'v255'. 
Dec 13 01:40:57.130319 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:40:57.134661 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:40:57.142326 dracut-pre-trigger[433]: rd.md=0: removing MD RAID activation Dec 13 01:40:57.158301 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:40:57.161736 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:40:57.232550 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:40:57.236711 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:40:57.249653 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:40:57.250492 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:40:57.251637 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:40:57.252140 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:40:57.257767 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:40:57.265708 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:40:57.295611 kernel: VMware PVSCSI driver - version 1.0.7.0-k Dec 13 01:40:57.302072 kernel: vmw_pvscsi: using 64bit dma Dec 13 01:40:57.302108 kernel: vmw_pvscsi: max_id: 16 Dec 13 01:40:57.302116 kernel: vmw_pvscsi: setting ring_pages to 8 Dec 13 01:40:57.306995 kernel: vmw_pvscsi: enabling reqCallThreshold Dec 13 01:40:57.307027 kernel: vmw_pvscsi: driver-based request coalescing enabled Dec 13 01:40:57.307035 kernel: vmw_pvscsi: using MSI-X Dec 13 01:40:57.310867 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Dec 13 01:40:57.310899 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Dec 13 01:40:57.314343 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Dec 13 01:40:57.314377 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Dec 13 01:40:57.324992 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Dec 13 01:40:57.326090 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Dec 13 01:40:57.326743 kernel: libata version 3.00 loaded. Dec 13 01:40:57.329625 kernel: ata_piix 0000:00:07.1: version 2.13 Dec 13 01:40:57.337719 kernel: scsi host1: ata_piix Dec 13 01:40:57.337795 kernel: scsi host2: ata_piix Dec 13 01:40:57.337857 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:40:57.337866 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Dec 13 01:40:57.337873 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Dec 13 01:40:57.337880 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Dec 13 01:40:57.341804 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:40:57.341879 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:40:57.342056 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:40:57.342157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:40:57.342228 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:40:57.342397 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 13 01:40:57.346733 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:40:57.357337 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:40:57.364721 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:40:57.375110 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:40:57.503615 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Dec 13 01:40:57.510639 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Dec 13 01:40:57.519734 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:40:57.519800 kernel: AES CTR mode by8 optimization enabled Dec 13 01:40:57.531920 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Dec 13 01:40:57.539150 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 01:40:57.539230 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Dec 13 01:40:57.539292 kernel: sd 0:0:0:0: [sda] Cache data unavailable Dec 13 01:40:57.539352 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Dec 13 01:40:57.539411 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Dec 13 01:40:57.544815 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:40:57.544841 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:40:57.544855 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 01:40:57.544967 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:40:57.574133 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Dec 13 01:40:57.576607 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (490) Dec 13 01:40:57.580836 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Dec 13 01:40:57.581664 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (482) Dec 13 01:40:57.587184 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Dec 13 01:40:57.589760 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Dec 13 01:40:57.589984 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Dec 13 01:40:57.596703 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:40:57.622931 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:40:57.626633 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:40:58.636638 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:40:58.637642 disk-uuid[593]: The operation has completed successfully. Dec 13 01:40:58.673102 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:40:58.673163 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:40:58.683815 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:40:58.685636 sh[613]: Success Dec 13 01:40:58.693609 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 01:40:58.735119 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:40:58.745288 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:40:58.746818 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 01:40:58.763228 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:40:58.763257 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:40:58.763266 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:40:58.763274 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:40:58.763281 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:40:58.769604 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:40:58.770459 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:40:58.775678 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Dec 13 01:40:58.776718 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:40:58.802954 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:40:58.802993 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:40:58.803002 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:40:58.821619 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:40:58.826579 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:40:58.829290 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:40:58.835492 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:40:58.843608 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:40:58.848754 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Dec 13 01:40:58.849557 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:40:58.906948 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:40:58.909992 ignition[674]: Ignition 2.19.0 Dec 13 01:40:58.910080 ignition[674]: Stage: fetch-offline Dec 13 01:40:58.910767 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:40:58.910101 ignition[674]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:40:58.910106 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 01:40:58.910418 ignition[674]: parsed url from cmdline: "" Dec 13 01:40:58.910421 ignition[674]: no config URL provided Dec 13 01:40:58.910424 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:40:58.910429 ignition[674]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:40:58.911918 ignition[674]: config successfully fetched Dec 13 01:40:58.912231 ignition[674]: parsing config with SHA512: 73ab89af34b238c3c3ac35b8ae1140af7b79079db2b05e83c5b380b093a6f8be8659ad68929e0facbaa526368a26229a588fd86e51a1dcc0bd27c8c2f3fce9bb Dec 13 01:40:58.914852 unknown[674]: fetched base config from "system" Dec 13 01:40:58.915085 ignition[674]: fetch-offline: fetch-offline passed Dec 13 01:40:58.914857 unknown[674]: fetched user config from "vmware" Dec 13 01:40:58.915121 ignition[674]: Ignition finished successfully Dec 13 01:40:58.916125 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Dec 13 01:40:58.923689 systemd-networkd[808]: lo: Link UP Dec 13 01:40:58.923695 systemd-networkd[808]: lo: Gained carrier Dec 13 01:40:58.924356 systemd-networkd[808]: Enumeration completed Dec 13 01:40:58.924614 systemd-networkd[808]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Dec 13 01:40:58.924630 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:40:58.924760 systemd[1]: Reached target network.target - Network. Dec 13 01:40:58.924838 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 01:40:58.928242 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Dec 13 01:40:58.928351 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Dec 13 01:40:58.928108 systemd-networkd[808]: ens192: Link UP Dec 13 01:40:58.928111 systemd-networkd[808]: ens192: Gained carrier Dec 13 01:40:58.929685 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:40:58.937978 ignition[811]: Ignition 2.19.0 Dec 13 01:40:58.937984 ignition[811]: Stage: kargs Dec 13 01:40:58.938102 ignition[811]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:40:58.938109 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 01:40:58.938644 ignition[811]: kargs: kargs passed Dec 13 01:40:58.938673 ignition[811]: Ignition finished successfully Dec 13 01:40:58.939716 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:40:58.943832 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:40:58.950692 ignition[818]: Ignition 2.19.0 Dec 13 01:40:58.950699 ignition[818]: Stage: disks Dec 13 01:40:58.950807 ignition[818]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:40:58.950813 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 01:40:58.951331 ignition[818]: disks: disks passed Dec 13 01:40:58.951359 ignition[818]: Ignition finished successfully Dec 13 01:40:58.952016 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:40:58.952505 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:40:58.952748 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:40:58.952847 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:40:58.952931 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:40:58.953012 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:40:58.956696 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:40:58.966734 systemd-fsck[826]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 01:40:58.967666 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:40:58.971693 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:40:59.026790 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:40:59.027130 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:40:59.027499 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:40:59.036688 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:40:59.038111 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Dec 13 01:40:59.038397 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:40:59.038427 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:40:59.038442 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:40:59.042404 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:40:59.046199 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (834) Dec 13 01:40:59.046232 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:40:59.047850 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:40:59.047867 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:40:59.050855 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:40:59.050782 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:40:59.052380 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:40:59.079276 initrd-setup-root[858]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:40:59.082381 initrd-setup-root[865]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:40:59.085459 initrd-setup-root[872]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:40:59.088285 initrd-setup-root[879]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:40:59.159616 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:40:59.164737 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:40:59.167315 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:40:59.172707 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:40:59.185784 ignition[946]: INFO : Ignition 2.19.0 Dec 13 01:40:59.185784 ignition[946]: INFO : Stage: mount Dec 13 01:40:59.186617 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:40:59.186617 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 01:40:59.186617 ignition[946]: INFO : mount: mount passed Dec 13 01:40:59.186617 ignition[946]: INFO : Ignition finished successfully Dec 13 01:40:59.188240 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:40:59.193757 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:40:59.194050 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:40:59.759303 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:40:59.765861 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:40:59.774620 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (960) Dec 13 01:40:59.777468 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:40:59.777489 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:40:59.777499 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:40:59.781612 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:40:59.783100 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:40:59.802035 ignition[977]: INFO : Ignition 2.19.0 Dec 13 01:40:59.802615 ignition[977]: INFO : Stage: files Dec 13 01:40:59.803614 ignition[977]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:40:59.803614 ignition[977]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 01:40:59.803614 ignition[977]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:40:59.804311 ignition[977]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:40:59.804499 ignition[977]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:40:59.807042 ignition[977]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:40:59.807226 ignition[977]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:40:59.807412 ignition[977]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:40:59.807345 unknown[977]: wrote ssh authorized keys file for user: core Dec 13 01:40:59.809160 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:40:59.809383 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:40:59.846829 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:40:59.919085 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:40:59.919347 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:40:59.919347 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:40:59.919347 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:40:59.919347 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:40:59.919347 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:40:59.920096 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:40:59.920096 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:40:59.920096 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:40:59.920096 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:40:59.920096 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:40:59.920096 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:40:59.920096 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:40:59.920096 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:40:59.920096 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Dec 13 01:41:00.200765 systemd-networkd[808]: ens192: Gained IPv6LL Dec 13 01:41:00.254228 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:41:00.448093 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:41:00.448377 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Dec 13 01:41:00.448377 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Dec 13 01:41:00.448377 ignition[977]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 01:41:00.448876 ignition[977]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:41:00.448876 ignition[977]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:41:00.448876 ignition[977]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 01:41:00.448876 ignition[977]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 13 01:41:00.448876 ignition[977]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:41:00.448876 ignition[977]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:41:00.448876 ignition[977]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 13 01:41:00.448876 ignition[977]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:41:00.484223 ignition[977]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:41:00.486486 ignition[977]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:41:00.486994 ignition[977]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:41:00.486994 ignition[977]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:41:00.486994 ignition[977]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:41:00.486994 ignition[977]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:41:00.486994 ignition[977]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:41:00.486994 ignition[977]: INFO : files: files passed Dec 13 01:41:00.486994 ignition[977]: INFO : Ignition finished successfully Dec 13 
01:41:00.487732 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:41:00.490699 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:41:00.491897 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:41:00.492960 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:41:00.493154 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:41:00.498174 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:41:00.498174 initrd-setup-root-after-ignition[1007]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:41:00.499081 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:41:00.500068 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:41:00.500559 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:41:00.504692 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:41:00.516104 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:41:00.516158 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:41:00.516570 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:41:00.516695 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:41:00.516907 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:41:00.517320 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:41:00.526977 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:41:00.530708 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:41:00.535988 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:41:00.536154 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:41:00.536374 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:41:00.536564 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:41:00.536644 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:41:00.536906 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:41:00.537146 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:41:00.537331 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:41:00.537518 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:41:00.537726 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:41:00.538099 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:41:00.538302 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:41:00.538538 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:41:00.538751 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:41:00.538943 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:41:00.539110 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Dec 13 01:41:00.539171 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:41:00.539504 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:41:00.539688 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:41:00.539855 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:41:00.539897 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:41:00.540080 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:41:00.540140 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:41:00.540395 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:41:00.540456 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:41:00.540731 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:41:00.540874 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:41:00.545616 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:41:00.545793 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:41:00.545987 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:41:00.546183 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:41:00.546268 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:41:00.546464 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:41:00.546509 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:41:00.546794 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:41:00.546875 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:41:00.547135 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:41:00.547210 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:41:00.554809 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:41:00.556445 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:41:00.556590 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:41:00.556701 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:41:00.556948 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:41:00.557029 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:41:00.561050 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:41:00.561159 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:41:00.565055 ignition[1031]: INFO : Ignition 2.19.0 Dec 13 01:41:00.565866 ignition[1031]: INFO : Stage: umount Dec 13 01:41:00.566125 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:41:00.566275 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 01:41:00.567056 ignition[1031]: INFO : umount: umount passed Dec 13 01:41:00.567224 ignition[1031]: INFO : Ignition finished successfully Dec 13 01:41:00.567751 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:41:00.567809 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:41:00.568181 systemd[1]: Stopped target network.target - Network. 
Dec 13 01:41:00.568368 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:41:00.568399 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:41:00.568519 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:41:00.568543 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:41:00.568672 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:41:00.568694 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:41:00.568848 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:41:00.568876 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:41:00.569113 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:41:00.569274 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:41:00.573479 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:41:00.576783 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:41:00.576920 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:41:00.577186 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:41:00.577238 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:41:00.578049 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:41:00.578073 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:41:00.581693 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:41:00.581803 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:41:00.581833 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:41:00.581960 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Dec 13 01:41:00.581981 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Dec 13 01:41:00.582093 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:41:00.582114 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:41:00.582215 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:41:00.582235 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:41:00.582336 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:41:00.582356 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:41:00.582502 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:41:00.589874 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:41:00.589949 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:41:00.592006 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:41:00.592086 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:41:00.592329 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:41:00.592350 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:41:00.592467 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:41:00.592484 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:41:00.592611 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Dec 13 01:41:00.592636 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:41:00.592888 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:41:00.592911 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:41:00.593226 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:41:00.593249 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:41:00.597700 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:41:00.598656 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:41:00.598688 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:41:00.598806 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:41:00.598827 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:41:00.598932 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:41:00.598952 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:41:00.599053 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:41:00.599074 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:41:00.600861 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:41:00.600910 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:41:00.624029 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:41:00.624095 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:41:00.624524 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:41:00.624649 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:41:00.624679 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:41:00.628706 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:41:00.638104 systemd[1]: Switching root. 
Dec 13 01:41:00.681065 systemd-journald[215]: Journal stopped
6816.00 BogoMIPS (lpj=3408000) Dec 13 01:40:56.725351 kernel: Disabled fast string operations Dec 13 01:40:56.725357 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 01:40:56.725362 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 01:40:56.725369 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:40:56.725375 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Dec 13 01:40:56.725380 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Dec 13 01:40:56.725386 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Dec 13 01:40:56.725392 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:40:56.725398 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Dec 13 01:40:56.725403 kernel: RETBleed: Mitigation: Enhanced IBRS Dec 13 01:40:56.725409 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:40:56.725416 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 01:40:56.725421 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:40:56.725427 kernel: SRBDS: Unknown: Dependent on hypervisor status Dec 13 01:40:56.725433 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 01:40:56.725439 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:40:56.725444 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:40:56.725450 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:40:56.725456 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:40:56.725462 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Dec 13 01:40:56.725468 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:40:56.725474 kernel: pid_max: default: 131072 minimum: 1024 Dec 13 01:40:56.725480 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:40:56.725485 kernel: landlock: Up and running. Dec 13 01:40:56.725491 kernel: SELinux: Initializing. Dec 13 01:40:56.725497 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 01:40:56.725503 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 01:40:56.725509 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Dec 13 01:40:56.725514 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Dec 13 01:40:56.725521 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Dec 13 01:40:56.725527 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Dec 13 01:40:56.725533 kernel: Performance Events: Skylake events, core PMU driver. 
Dec 13 01:40:56.725538 kernel: core: CPUID marked event: 'cpu cycles' unavailable Dec 13 01:40:56.725544 kernel: core: CPUID marked event: 'instructions' unavailable Dec 13 01:40:56.725549 kernel: core: CPUID marked event: 'bus cycles' unavailable Dec 13 01:40:56.725555 kernel: core: CPUID marked event: 'cache references' unavailable Dec 13 01:40:56.725560 kernel: core: CPUID marked event: 'cache misses' unavailable Dec 13 01:40:56.725567 kernel: core: CPUID marked event: 'branch instructions' unavailable Dec 13 01:40:56.725572 kernel: core: CPUID marked event: 'branch misses' unavailable Dec 13 01:40:56.725578 kernel: ... version: 1 Dec 13 01:40:56.725584 kernel: ... bit width: 48 Dec 13 01:40:56.725589 kernel: ... generic registers: 4 Dec 13 01:40:56.725602 kernel: ... value mask: 0000ffffffffffff Dec 13 01:40:56.725609 kernel: ... max period: 000000007fffffff Dec 13 01:40:56.725615 kernel: ... fixed-purpose events: 0 Dec 13 01:40:56.725620 kernel: ... event mask: 000000000000000f Dec 13 01:40:56.725626 kernel: signal: max sigframe size: 1776 Dec 13 01:40:56.725633 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:40:56.725640 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:40:56.725646 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 01:40:56.725651 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:40:56.725657 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:40:56.725663 kernel: .... node #0, CPUs: #1 Dec 13 01:40:56.725669 kernel: Disabled fast string operations Dec 13 01:40:56.725674 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Dec 13 01:40:56.725680 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Dec 13 01:40:56.725686 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:40:56.725692 kernel: smpboot: Max logical packages: 128 Dec 13 01:40:56.725698 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Dec 13 01:40:56.725703 kernel: devtmpfs: initialized Dec 13 01:40:56.725709 kernel: x86/mm: Memory block size: 128MB Dec 13 01:40:56.725715 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Dec 13 01:40:56.725721 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:40:56.725727 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Dec 13 01:40:56.725733 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:40:56.725739 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:40:56.725745 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:40:56.725751 kernel: audit: type=2000 audit(1734054054.066:1): state=initialized audit_enabled=0 res=1 Dec 13 01:40:56.725756 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:40:56.725762 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:40:56.725771 kernel: cpuidle: using governor menu Dec 13 01:40:56.725778 kernel: Simple Boot Flag at 0x36 set to 0x80 Dec 13 01:40:56.725795 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:40:56.725813 kernel: dca service started, version 1.12.1 Dec 13 01:40:56.725825 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Dec 13 01:40:56.725831 kernel: PCI: Using configuration type 1 for base access Dec 13 01:40:56.725842 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:40:56.725848 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:40:56.725854 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:40:56.725860 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:40:56.725865 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:40:56.725871 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:40:56.725877 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:40:56.725884 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:40:56.725890 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:40:56.725895 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:40:56.725901 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Dec 13 01:40:56.725907 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:40:56.725912 kernel: ACPI: Interpreter enabled Dec 13 01:40:56.725918 kernel: ACPI: PM: (supports S0 S1 S5) Dec 13 01:40:56.725924 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:40:56.725930 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:40:56.725936 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 01:40:56.725942 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Dec 13 01:40:56.725948 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Dec 13 01:40:56.726024 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:40:56.726078 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Dec 13 01:40:56.726126 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Dec 13 01:40:56.726135 kernel: PCI host bridge to bus 0000:00 Dec 13 01:40:56.726185 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:40:56.726229 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Dec 13 01:40:56.726272 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 01:40:56.726315 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:40:56.726357 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Dec 13 01:40:56.726399 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Dec 13 01:40:56.726455 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Dec 13 01:40:56.726511 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Dec 13 01:40:56.726566 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Dec 13 01:40:56.726689 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Dec 13 01:40:56.726745 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Dec 13 01:40:56.726803 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Dec 13 01:40:56.726853 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Dec 13 01:40:56.726914 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Dec 13 01:40:56.726964 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Dec 13 01:40:56.727017 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Dec 13 01:40:56.727066 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Dec 13 01:40:56.727114 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Dec 13 01:40:56.727166 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Dec 13 01:40:56.727216 
kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Dec 13 01:40:56.727267 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Dec 13 01:40:56.727318 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Dec 13 01:40:56.727367 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Dec 13 01:40:56.727414 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Dec 13 01:40:56.727484 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Dec 13 01:40:56.727745 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Dec 13 01:40:56.727798 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:40:56.727851 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Dec 13 01:40:56.727904 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.727953 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.728006 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.728056 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.728110 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.728162 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.728239 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.728297 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.728350 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.728398 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.728451 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.728531 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.728617 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.728669 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.728721 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.728770 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.728822 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.728875 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.728929 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.728978 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.729030 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.729079 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.729135 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.729184 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.729237 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.729286 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.729338 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.729387 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.729440 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.729492 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.729562 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.729638 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.729692 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 
0x060400 Dec 13 01:40:56.729741 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.729794 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.729846 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.729913 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.729962 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.730017 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.730066 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.730118 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.730169 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.730222 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.730270 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.730324 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.730373 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.730425 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.730477 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.730561 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.730624 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.730696 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.730761 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.730813 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.730865 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.730945 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.731011 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.731063 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.731112 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.731163 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.731212 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.731269 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.731319 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.731371 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Dec 13 01:40:56.731420 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.731471 kernel: pci_bus 0000:01: extended config space not accessible Dec 13 01:40:56.731531 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 01:40:56.731587 kernel: pci_bus 0000:02: extended config space not accessible Dec 13 01:40:56.731634 kernel: acpiphp: Slot [32] registered Dec 13 01:40:56.731641 kernel: acpiphp: Slot [33] registered Dec 13 01:40:56.731647 kernel: acpiphp: Slot [34] registered Dec 13 01:40:56.731652 kernel: acpiphp: Slot [35] registered Dec 13 01:40:56.731658 kernel: acpiphp: Slot [36] registered Dec 13 01:40:56.731664 kernel: acpiphp: Slot [37] registered Dec 13 01:40:56.731670 kernel: acpiphp: Slot [38] registered Dec 13 01:40:56.731675 kernel: acpiphp: Slot [39] registered Dec 13 01:40:56.731683 kernel: acpiphp: Slot [40] registered Dec 13 01:40:56.731689 kernel: acpiphp: Slot [41] registered Dec 13 01:40:56.731695 kernel: acpiphp: Slot [42] registered Dec 13 
01:40:56.731701 kernel: acpiphp: Slot [43] registered Dec 13 01:40:56.731706 kernel: acpiphp: Slot [44] registered Dec 13 01:40:56.731712 kernel: acpiphp: Slot [45] registered Dec 13 01:40:56.731717 kernel: acpiphp: Slot [46] registered Dec 13 01:40:56.731723 kernel: acpiphp: Slot [47] registered Dec 13 01:40:56.731729 kernel: acpiphp: Slot [48] registered Dec 13 01:40:56.731735 kernel: acpiphp: Slot [49] registered Dec 13 01:40:56.731741 kernel: acpiphp: Slot [50] registered Dec 13 01:40:56.731747 kernel: acpiphp: Slot [51] registered Dec 13 01:40:56.731752 kernel: acpiphp: Slot [52] registered Dec 13 01:40:56.731758 kernel: acpiphp: Slot [53] registered Dec 13 01:40:56.731764 kernel: acpiphp: Slot [54] registered Dec 13 01:40:56.731769 kernel: acpiphp: Slot [55] registered Dec 13 01:40:56.731775 kernel: acpiphp: Slot [56] registered Dec 13 01:40:56.731781 kernel: acpiphp: Slot [57] registered Dec 13 01:40:56.731786 kernel: acpiphp: Slot [58] registered Dec 13 01:40:56.731793 kernel: acpiphp: Slot [59] registered Dec 13 01:40:56.731799 kernel: acpiphp: Slot [60] registered Dec 13 01:40:56.731804 kernel: acpiphp: Slot [61] registered Dec 13 01:40:56.731810 kernel: acpiphp: Slot [62] registered Dec 13 01:40:56.731816 kernel: acpiphp: Slot [63] registered Dec 13 01:40:56.731867 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Dec 13 01:40:56.731916 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Dec 13 01:40:56.731963 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Dec 13 01:40:56.732013 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 13 01:40:56.732059 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Dec 13 01:40:56.732107 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Dec 13 01:40:56.732154 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Dec 13 01:40:56.732202 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Dec 13 01:40:56.732249 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Dec 13 01:40:56.732302 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Dec 13 01:40:56.732354 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Dec 13 01:40:56.732403 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Dec 13 01:40:56.732452 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Dec 13 01:40:56.732500 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Dec 13 01:40:56.732549 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Dec 13 01:40:56.732617 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Dec 13 01:40:56.735056 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Dec 13 01:40:56.735111 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Dec 13 01:40:56.735166 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Dec 13 01:40:56.735216 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Dec 13 01:40:56.735264 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Dec 13 01:40:56.735313 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Dec 13 01:40:56.735362 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Dec 13 01:40:56.735411 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Dec 13 01:40:56.735459 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Dec 13 01:40:56.735507 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Dec 13 01:40:56.735560 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Dec 13 01:40:56.735916 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Dec 13 01:40:56.735971 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Dec 13 01:40:56.736023 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Dec 13 01:40:56.736072 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Dec 13 01:40:56.736120 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Dec 13 01:40:56.736173 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Dec 13 01:40:56.736221 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Dec 13 01:40:56.736270 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Dec 13 01:40:56.736320 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Dec 13 01:40:56.736368 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Dec 13 01:40:56.736416 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Dec 13 01:40:56.736468 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Dec 13 01:40:56.736521 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Dec 13 01:40:56.736570 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Dec 13 01:40:56.736650 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Dec 13 01:40:56.736703 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Dec 13 01:40:56.736771 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Dec 13 01:40:56.736823 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Dec 13 01:40:56.736892 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Dec 13 01:40:56.736942 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Dec 13 01:40:56.736992 kernel: pci 0000:0b:00.0: supports D1 D2 Dec 13 01:40:56.737041 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 01:40:56.737091 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Dec 13 01:40:56.737141 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Dec 13 01:40:56.737189 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Dec 13 01:40:56.737238 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Dec 13 01:40:56.737308 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Dec 13 01:40:56.737358 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Dec 13 01:40:56.737407 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Dec 13 01:40:56.737456 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Dec 13 01:40:56.737511 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Dec 13 01:40:56.737562 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Dec 13 01:40:56.737744 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Dec 13 01:40:56.737796 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Dec 13 01:40:56.737851 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Dec 13 01:40:56.737901 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Dec 13 01:40:56.737950 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Dec 13 01:40:56.738024 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Dec 13 01:40:56.738079 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Dec 13 01:40:56.738129 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Dec 13 01:40:56.738179 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Dec 13 01:40:56.738232 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Dec 13 01:40:56.738281 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Dec 13 01:40:56.738331 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Dec 13 01:40:56.738380 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Dec 13 01:40:56.738430 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Dec 13 01:40:56.738484 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Dec 13 01:40:56.738535 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Dec 13 01:40:56.738584 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 13 01:40:56.739664 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Dec 13 01:40:56.739718 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Dec 13 01:40:56.739770 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Dec 13 01:40:56.739819 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 13 01:40:56.739869 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Dec 13 01:40:56.739919 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Dec 13 01:40:56.739969 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Dec 13 01:40:56.740019 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Dec 13 01:40:56.740074 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Dec 13 01:40:56.740124 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Dec 13 01:40:56.740174 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Dec 13 01:40:56.740224 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Dec 13 01:40:56.740274 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Dec 13 01:40:56.740325 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Dec 13 01:40:56.740374 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 13 01:40:56.740424 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Dec 13 01:40:56.740476 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Dec 13 01:40:56.740526 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 13 01:40:56.740576 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Dec 13 01:40:56.742692 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Dec 13 01:40:56.742751 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Dec 13 01:40:56.742806 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Dec 13 01:40:56.742857 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Dec 13 01:40:56.742906 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Dec 13 01:40:56.742961 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Dec 13 01:40:56.743011 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Dec 13 01:40:56.743060 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Dec 13 01:40:56.743110 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Dec 13 01:40:56.743159 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Dec 13 01:40:56.743208 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Dec 13 01:40:56.743256 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Dec 13 01:40:56.743307 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Dec 13 01:40:56.743359 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Dec 13 01:40:56.743409 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Dec 13 01:40:56.743458 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Dec 13 01:40:56.743508 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Dec 13 01:40:56.744902 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Dec 13 01:40:56.744970 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Dec 13 01:40:56.745041 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Dec 13 01:40:56.745095 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Dec 13 01:40:56.745145 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Dec 13 01:40:56.745196 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Dec 13 01:40:56.745261 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Dec 13 01:40:56.745324 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Dec 13 01:40:56.745377 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Dec 13 01:40:56.745427 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Dec 13 01:40:56.745476 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Dec 13 01:40:56.745531 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Dec 13 01:40:56.745580 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Dec 13 01:40:56.745674 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Dec 13 01:40:56.745727 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Dec 13 01:40:56.745775 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Dec 13 01:40:56.745823 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Dec 13 01:40:56.745832 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Dec 13 01:40:56.745838 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 Dec 13 01:40:56.745844 kernel: ACPI: PCI: Interrupt link LNKB disabled Dec 13 01:40:56.745852 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:40:56.745858 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Dec 13 01:40:56.745864 kernel: iommu: Default domain type: Translated Dec 13 01:40:56.745870 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:40:56.745876 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:40:56.745882 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:40:56.745887 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Dec 13 01:40:56.745893 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Dec 13 01:40:56.745941 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Dec 13 01:40:56.745992 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Dec 13 01:40:56.746040 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 01:40:56.746049 kernel: vgaarb: loaded Dec 13 01:40:56.746055 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Dec 13 01:40:56.746061 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Dec 13 01:40:56.746067 kernel: clocksource: Switched to clocksource tsc-early Dec 13 01:40:56.746073 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:40:56.746079 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:40:56.746085 kernel: pnp: PnP ACPI init Dec 13 01:40:56.746138 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Dec 13 01:40:56.746184 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Dec 13 01:40:56.746227 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Dec 13 01:40:56.746274 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Dec 13 01:40:56.746324 kernel: pnp 00:06: [dma 2] Dec 13 01:40:56.746371 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Dec 13 01:40:56.746435 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Dec 13 01:40:56.747979 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Dec 13 01:40:56.747990 kernel: pnp: PnP ACPI: found 8 devices Dec 13 01:40:56.747997 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:40:56.748003 kernel: NET: Registered PF_INET protocol family Dec 13 01:40:56.748009 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:40:56.748015 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 01:40:56.748021 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:40:56.748029 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:40:56.748035 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 01:40:56.748041 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 01:40:56.748047 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 01:40:56.748053 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 01:40:56.748058 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:40:56.748064 kernel: NET: Registered PF_XDP protocol family Dec 13 01:40:56.748805 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 01:40:56.748882 kernel: pci 
0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 13 01:40:56.748936 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 13 01:40:56.749006 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 13 01:40:56.749059 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 13 01:40:56.749110 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Dec 13 01:40:56.749162 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Dec 13 01:40:56.749217 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Dec 13 01:40:56.749268 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Dec 13 01:40:56.749328 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Dec 13 01:40:56.749380 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Dec 13 01:40:56.749432 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Dec 13 01:40:56.749482 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Dec 13 01:40:56.749537 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Dec 13 01:40:56.749587 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Dec 13 01:40:56.750729 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Dec 13 01:40:56.750782 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Dec 13 01:40:56.750832 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Dec 13 01:40:56.750882 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Dec 13 01:40:56.750935 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Dec 13 01:40:56.750986 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Dec 13 01:40:56.751036 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Dec 13 01:40:56.751086 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Dec 13 01:40:56.751136 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Dec 13 01:40:56.751185 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Dec 13 01:40:56.751237 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.751286 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.751335 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.751383 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.751433 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.751482 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.751532 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.751580 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.751997 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.752051 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.752102 kernel: pci 0000:00:16.3: BAR 13: no space for [io 
size 0x1000] Dec 13 01:40:56.752152 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.752201 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.752249 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.752298 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.752347 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.752399 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.752448 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.752497 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.752545 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.752635 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.752723 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.752771 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.752819 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.752872 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.752921 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.752970 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753018 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.753068 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753116 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.753165 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753213 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.753264 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753313 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.753361 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753410 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.753459 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753512 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.753561 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753689 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.753745 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753795 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.753843 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753891 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.753939 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.753987 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754036 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.754084 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754132 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.754183 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754231 kernel: pci 0000:00:18.3: BAR 13: no space 
for [io size 0x1000] Dec 13 01:40:56.754279 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754328 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.754376 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754425 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.754473 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754521 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.754570 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754627 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.754679 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754727 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.754776 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754824 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.754873 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.754921 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.754969 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755018 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.755067 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755118 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.755166 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755215 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.755264 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755312 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.755361 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755409 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.755458 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755506 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.755554 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755619 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.755670 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755719 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.755767 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755816 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Dec 13 01:40:56.755865 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:40:56.755914 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 01:40:56.755964 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Dec 13 01:40:56.756013 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Dec 13 01:40:56.756336 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Dec 13 01:40:56.756391 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 13 01:40:56.756445 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Dec 13 01:40:56.756499 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Dec 
13 01:40:56.756550 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Dec 13 01:40:56.756811 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Dec 13 01:40:56.757848 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
Dec 13 01:40:56.757904 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Dec 13 01:40:56.757959 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Dec 13 01:40:56.758009 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Dec 13 01:40:56.758059 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Dec 13 01:40:56.758109 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Dec 13 01:40:56.758158 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Dec 13 01:40:56.758207 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Dec 13 01:40:56.758256 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Dec 13 01:40:56.758305 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Dec 13 01:40:56.758353 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Dec 13 01:40:56.758402 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Dec 13 01:40:56.758453 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Dec 13 01:40:56.758534 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Dec 13 01:40:56.758586 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Dec 13 01:40:56.758685 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Dec 13 01:40:56.758735 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Dec 13 01:40:56.758788 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Dec 13 01:40:56.758838 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Dec 13 01:40:56.758888 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Dec 13 01:40:56.758938 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Dec 13 01:40:56.758987 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Dec 13 01:40:56.759037 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Dec 13 01:40:56.759087 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Dec 13 01:40:56.759139 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref]
Dec 13 01:40:56.759191 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Dec 13 01:40:56.759241 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Dec 13 01:40:56.759294 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Dec 13 01:40:56.759344 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
Dec 13 01:40:56.759396 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Dec 13 01:40:56.759446 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Dec 13 01:40:56.759495 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Dec 13 01:40:56.759545 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Dec 13 01:40:56.759602 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Dec 13 01:40:56.759655 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Dec 13 01:40:56.759707 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Dec 13 01:40:56.759759 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Dec 13 01:40:56.759809 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Dec 13 01:40:56.759858 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Dec 13 01:40:56.759908 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Dec 13 01:40:56.759957 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Dec 13 01:40:56.760007 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Dec 13 01:40:56.760057 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Dec 13 01:40:56.760108 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Dec 13 01:40:56.760157 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Dec 13 01:40:56.760208 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Dec 13 01:40:56.760260 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Dec 13 01:40:56.760310 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Dec 13 01:40:56.760360 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Dec 13 01:40:56.760410 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Dec 13 01:40:56.760460 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Dec 13 01:40:56.760510 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Dec 13 01:40:56.760561 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Dec 13 01:40:56.761812 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Dec 13 01:40:56.761869 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Dec 13 01:40:56.761940 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Dec 13 01:40:56.761991 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Dec 13 01:40:56.762041 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Dec 13 01:40:56.762090 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Dec 13 01:40:56.762139 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Dec 13 01:40:56.762189 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Dec 13 01:40:56.762239 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Dec 13 01:40:56.762288 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Dec 13 01:40:56.762338 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Dec 13 01:40:56.762390 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Dec 13 01:40:56.762444 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Dec 13 01:40:56.762499 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Dec 13 01:40:56.762549 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Dec 13 01:40:56.762731 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Dec 13 01:40:56.762785 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Dec 13 01:40:56.762835 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Dec 13 01:40:56.762884 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Dec 13 01:40:56.762934 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Dec 13 01:40:56.763017 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Dec 13 01:40:56.763070 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Dec 13 01:40:56.763118 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Dec 13 01:40:56.763168 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Dec 13 01:40:56.763217 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Dec 13 01:40:56.763266 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Dec 13 01:40:56.763316 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Dec 13 01:40:56.763366 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Dec 13 01:40:56.763415 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Dec 13 01:40:56.763464 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Dec 13 01:40:56.763517 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Dec 13 01:40:56.763567 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Dec 13 01:40:56.765206 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Dec 13 01:40:56.765264 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Dec 13 01:40:56.765316 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Dec 13 01:40:56.765366 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Dec 13 01:40:56.765417 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Dec 13 01:40:56.765468 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Dec 13 01:40:56.765519 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Dec 13 01:40:56.765569 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Dec 13 01:40:56.765644 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Dec 13 01:40:56.765697 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Dec 13 01:40:56.765747 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Dec 13 01:40:56.765798 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Dec 13 01:40:56.765847 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Dec 13 01:40:56.765897 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Dec 13 01:40:56.765947 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Dec 13 01:40:56.765997 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Dec 13 01:40:56.766047 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Dec 13 01:40:56.766101 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Dec 13 01:40:56.766150 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Dec 13 01:40:56.766200 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Dec 13 01:40:56.766250 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window]
Dec 13 01:40:56.766295 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window]
Dec 13 01:40:56.766339 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:40:56.766382 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window]
Dec 13 01:40:56.766425 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window]
Dec 13 01:40:56.766474 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff]
Dec 13 01:40:56.766523 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff]
Dec 13 01:40:56.766569 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref]
Dec 13 01:40:56.766629 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window]
Dec 13 01:40:56.766675 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window]
Dec 13 01:40:56.766721 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:40:56.766765 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window]
Dec 13 01:40:56.766810 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window]
Dec 13 01:40:56.766864 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff]
Dec 13 01:40:56.766911 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff]
Dec 13 01:40:56.766956 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref]
Dec 13 01:40:56.767005 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff]
Dec 13 01:40:56.767051 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff]
Dec 13 01:40:56.767097 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref]
Dec 13 01:40:56.767146 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff]
Dec 13 01:40:56.767195 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff]
Dec 13 01:40:56.767240 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref]
Dec 13 01:40:56.767289 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff]
Dec 13 01:40:56.767335 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref]
Dec 13 01:40:56.767385 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff]
Dec 13 01:40:56.767431 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref]
Dec 13 01:40:56.767486 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff]
Dec 13 01:40:56.767533 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref]
Dec 13 01:40:56.767583 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff]
Dec 13 01:40:56.767658 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref]
Dec 13 01:40:56.767712 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff]
Dec 13 01:40:56.767766 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref]
Dec 13 01:40:56.767823 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff]
Dec 13 01:40:56.767871 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff]
Dec 13 01:40:56.767916 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref]
Dec 13 01:40:56.767965 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff]
Dec 13 01:40:56.768011 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff]
Dec 13 01:40:56.768056 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref]
Dec 13 01:40:56.768108 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff]
Dec 13 01:40:56.768159 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff]
Dec 13 01:40:56.768208 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref]
Dec 13 01:40:56.768257 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff]
Dec 13 01:40:56.768304 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref]
Dec 13 01:40:56.768353 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff]
Dec 13 01:40:56.768400 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref]
Dec 13 01:40:56.768452 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff]
Dec 13 01:40:56.768504 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref]
Dec 13 01:40:56.768569 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff]
Dec 13 01:40:56.768629 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref]
Dec 13 01:40:56.768680 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff]
Dec 13 01:40:56.768725 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref]
Dec 13 01:40:56.768794 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff]
Dec 13 01:40:56.768843 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff]
Dec 13 01:40:56.768888 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref]
Dec 13 01:40:56.768938 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff]
Dec 13 01:40:56.768983 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff]
Dec 13 01:40:56.769028 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref]
Dec 13 01:40:56.769078 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff]
Dec 13 01:40:56.769128 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff]
Dec 13 01:40:56.769173 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref]
Dec 13 01:40:56.769222 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff]
Dec 13 01:40:56.769268 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref]
Dec 13 01:40:56.769317 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff]
Dec 13 01:40:56.769362 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref]
Dec 13 01:40:56.769413 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff]
Dec 13 01:40:56.769459 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref]
Dec 13 01:40:56.769509 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff]
Dec 13 01:40:56.769555 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref]
Dec 13 01:40:56.769660 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff]
Dec 13 01:40:56.769708 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref]
Dec 13 01:40:56.769760 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff]
Dec 13 01:40:56.769806 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff]
Dec 13 01:40:56.769850 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref]
Dec 13 01:40:56.769899 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff]
Dec 13 01:40:56.769944 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff]
Dec 13 01:40:56.769988 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref]
Dec 13 01:40:56.770042 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff]
Dec 13 01:40:56.770090 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref]
Dec 13 01:40:56.770139 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff]
Dec 13 01:40:56.770185 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref]
Dec 13 01:40:56.770234 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff]
Dec 13 01:40:56.770280 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref]
Dec 13 01:40:56.770329 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff]
Dec 13 01:40:56.770377 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref]
Dec 13 01:40:56.770428 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff]
Dec 13 01:40:56.770474 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref]
Dec 13 01:40:56.770522 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff]
Dec 13 01:40:56.770568 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref]
Dec 13 01:40:56.770757 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 01:40:56.770770 kernel: PCI: CLS 32 bytes, default 64
Dec 13 01:40:56.770777 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 01:40:56.770784 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
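[Editor's note: the bridge-window and pci_bus resource assignments above can be read back from sysfs on a running system. A minimal sketch, not part of the boot log, assuming the standard /sys/bus/pci layout; on current kernels the forwarded bridge windows appear in the per-device resource file after the six BAR slots and the expansion ROM, though treat that exact index layout as an assumption.]

    # Print the address ranges Linux recorded for one PCI device.
    # Each line of /sys/bus/pci/devices/<bdf>/resource is "start end flags"
    # in hex; all-zero rows are unused resource slots.
    BDF = "0000:00:15.0"  # one of the PCIe root ports in the log above

    with open(f"/sys/bus/pci/devices/{BDF}/resource") as f:
        for idx, line in enumerate(f):
            start, end, flags = (int(v, 16) for v in line.split())
            if start == 0 and end == 0:
                continue  # unused slot
            print(f"resource {idx}: 0x{start:x}-0x{end:x} flags=0x{flags:x}")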
Dec 13 01:40:56.770790 kernel: clocksource: Switched to clocksource tsc
Dec 13 01:40:56.770796 kernel: Initialise system trusted keyrings
Dec 13 01:40:56.770803 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 01:40:56.770809 kernel: Key type asymmetric registered
Dec 13 01:40:56.770815 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:40:56.770821 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:40:56.770829 kernel: io scheduler mq-deadline registered
Dec 13 01:40:56.770835 kernel: io scheduler kyber registered
Dec 13 01:40:56.770841 kernel: io scheduler bfq registered
Dec 13 01:40:56.770893 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24
Dec 13 01:40:56.770944 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.770995 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
Dec 13 01:40:56.771045 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.771095 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
Dec 13 01:40:56.771147 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.771197 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
Dec 13 01:40:56.771246 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.771296 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
Dec 13 01:40:56.771345 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.771394 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
Dec 13 01:40:56.771446 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.771499 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
Dec 13 01:40:56.771549 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.771606 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
Dec 13 01:40:56.771657 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.771710 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
Dec 13 01:40:56.771760 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.771810 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
Dec 13 01:40:56.771859 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.771909 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
Dec 13 01:40:56.771959 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.772008 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
Dec 13 01:40:56.772060 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.772111 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
Dec 13 01:40:56.772160 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.772209 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
Dec 13 01:40:56.772259 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.772311 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
Dec 13 01:40:56.772360 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.772411 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
Dec 13 01:40:56.772461 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.772529 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
Dec 13 01:40:56.772599 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.772662 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
Dec 13 01:40:56.772712 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.772762 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Dec 13 01:40:56.772813 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.772863 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Dec 13 01:40:56.772914 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.772968 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Dec 13 01:40:56.773018 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.773069 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Dec 13 01:40:56.773119 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.773170 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Dec 13 01:40:56.773220 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.773272 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Dec 13 01:40:56.773322 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.773372 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Dec 13 01:40:56.773421 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.773470 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Dec 13 01:40:56.773519 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.773571 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Dec 13 01:40:56.773683 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.773736 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Dec 13 01:40:56.773786 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.773836 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Dec 13 01:40:56.773890 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.773939 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
Dec 13 01:40:56.773990 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.774040 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
Dec 13 01:40:56.774092 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.774142 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
Dec 13 01:40:56.774195 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:40:56.774204 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:40:56.774211 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:40:56.774218 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:40:56.774225 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
Dec 13 01:40:56.774231 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:40:56.774237 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:40:56.774288 kernel: rtc_cmos 00:01: registered as rtc0
Dec 13 01:40:56.774336 kernel: rtc_cmos 00:01: setting system clock to 2024-12-13T01:40:56 UTC (1734054056)
Dec 13 01:40:56.774381 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
Dec 13 01:40:56.774390 kernel: intel_pstate: CPU model not supported
Dec 13 01:40:56.774397 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:40:56.774403 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:40:56.774409 kernel: Segment Routing with IPv6
Dec 13 01:40:56.774415 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:40:56.774423 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:40:56.774430 kernel: Key type dns_resolver registered
Dec 13 01:40:56.774436 kernel: IPI shorthand broadcast: enabled
Dec 13 01:40:56.774442 kernel: sched_clock: Marking stable (916004282, 224888887)->(1202951873, -62058704)
Dec 13 01:40:56.774449 kernel: registered taskstats version 1
Dec 13 01:40:56.774455 kernel: Loading compiled-in X.509 certificates
Dec 13 01:40:56.774463 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:40:56.774469 kernel: Key type .fscrypt registered
Dec 13 01:40:56.774475 kernel: Key type fscrypt-provisioning registered
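[Editor's note: each pciehp line above registers a hot-plug slot (#160 through #263). On a running system these surface under /sys/bus/pci/slots; a minimal sketch, assuming the common pciehp sysfs attribute "address" is present, which can vary with kernel configuration.]

    import os

    # List the PCIe hot-plug slots pciehp registered (e.g. 160..263 in the
    # log above) together with the bus address behind each slot.
    SLOTS = "/sys/bus/pci/slots"

    for slot in sorted(os.listdir(SLOTS)):
        try:
            with open(os.path.join(SLOTS, slot, "address")) as f:
                address = f.read().strip()
        except OSError:
            address = "?"  # attribute missing or unreadable
        print(f"slot {slot}: {address}")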
Dec 13 01:40:56.774482 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:40:56.774489 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:40:56.774495 kernel: ima: No architecture policies found
Dec 13 01:40:56.774501 kernel: clk: Disabling unused clocks
Dec 13 01:40:56.774508 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:40:56.774514 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:40:56.774520 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:40:56.774526 kernel: Run /init as init process
Dec 13 01:40:56.774532 kernel: with arguments:
Dec 13 01:40:56.774540 kernel: /init
Dec 13 01:40:56.774546 kernel: with environment:
Dec 13 01:40:56.774553 kernel: HOME=/
Dec 13 01:40:56.774559 kernel: TERM=linux
Dec 13 01:40:56.774565 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:40:56.774573 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:40:56.774580 systemd[1]: Detected virtualization vmware.
Dec 13 01:40:56.774587 systemd[1]: Detected architecture x86-64.
Dec 13 01:40:56.774601 systemd[1]: Running in initrd.
Dec 13 01:40:56.774617 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:40:56.774623 systemd[1]: Hostname set to .
Dec 13 01:40:56.774630 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:40:56.774636 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:40:56.774643 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:40:56.774649 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:40:56.774656 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:40:56.774665 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:40:56.774672 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:40:56.774678 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:40:56.774686 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:40:56.774693 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:40:56.774700 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:40:56.774706 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:40:56.774713 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:40:56.774720 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:40:56.774726 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:40:56.774733 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:40:56.774739 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:40:56.774745 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:40:56.774752 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
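[Editor's note: the rtc_cmos entries a little further up pair the wall-clock time 2024-12-13T01:40:56 UTC with the epoch value 1734054056. The two are consistent, as a quick check shows; this snippet is illustration, not part of the log.]

    from datetime import datetime, timezone

    # 1734054056 is the epoch value rtc_cmos reported when it set the
    # system clock; it decodes to the timestamp printed alongside it.
    print(datetime.fromtimestamp(1734054056, tz=timezone.utc).isoformat())
    # -> 2024-12-13T01:40:56+00:00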
Dec 13 01:40:56.774758 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:40:56.774766 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:40:56.774772 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:40:56.774779 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:40:56.774785 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:40:56.774791 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:40:56.774798 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:40:56.774805 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:40:56.774811 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:40:56.774817 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:40:56.774825 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:40:56.774831 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:40:56.774852 systemd-journald[215]: Collecting audit messages is disabled.
Dec 13 01:40:56.774868 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:40:56.774876 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:40:56.774882 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:40:56.774889 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:40:56.774895 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:40:56.774903 kernel: Bridge firewalling registered
Dec 13 01:40:56.774911 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:40:56.774917 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:40:56.774924 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:40:56.774931 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:40:56.774937 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:40:56.774944 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:40:56.774950 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:40:56.774975 systemd-journald[215]: Journal started
Dec 13 01:40:56.774990 systemd-journald[215]: Runtime Journal (/run/log/journal/593dd62661084148bb27634387cd8128) is 4.8M, max 38.6M, 33.8M free.
Dec 13 01:40:56.726793 systemd-modules-load[216]: Inserted module 'overlay'
Dec 13 01:40:56.749311 systemd-modules-load[216]: Inserted module 'br_netfilter'
Dec 13 01:40:56.777613 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:40:56.784719 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:40:56.785227 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:40:56.786678 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:40:56.788639 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:40:56.791512 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:40:56.796971 dracut-cmdline[244]: dracut-dracut-053
Dec 13 01:40:56.798156 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:40:56.799374 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:40:56.814187 systemd-resolved[250]: Positive Trust Anchors:
Dec 13 01:40:56.814194 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:40:56.814216 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:40:56.816240 systemd-resolved[250]: Defaulting to hostname 'linux'.
Dec 13 01:40:56.818078 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:40:56.818232 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:40:56.847621 kernel: SCSI subsystem initialized
Dec 13 01:40:56.853607 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:40:56.860613 kernel: iscsi: registered transport (tcp)
Dec 13 01:40:56.873932 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:40:56.873966 kernel: QLogic iSCSI HBA Driver
Dec 13 01:40:56.893767 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:40:56.897703 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:40:56.912763 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:40:56.912807 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:40:56.913919 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:40:56.945629 kernel: raid6: avx2x4 gen() 51684 MB/s
Dec 13 01:40:56.961610 kernel: raid6: avx2x2 gen() 53371 MB/s
Dec 13 01:40:56.978875 kernel: raid6: avx2x1 gen() 44304 MB/s
Dec 13 01:40:56.978911 kernel: raid6: using algorithm avx2x2 gen() 53371 MB/s
Dec 13 01:40:56.996821 kernel: raid6: .... xor() 31075 MB/s, rmw enabled
Dec 13 01:40:56.996845 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:40:57.010605 kernel: xor: automatically using best checksumming function avx
Dec 13 01:40:57.109617 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:40:57.115248 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:40:57.120674 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:40:57.127884 systemd-udevd[432]: Using default interface naming scheme 'v255'.
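[Editor's note: dracut-cmdline above echoes the kernel command line it acts on (root=LABEL=ROOT, verity.usrhash=..., and so on). A minimal sketch of splitting such parameters into a dict when reading /proc/cmdline on a booted system; the parse_cmdline helper is hypothetical, not part of dracut, and when a key repeats (rootflags appears twice above) the last occurrence wins here.]

    def parse_cmdline(text: str) -> dict:
        """Split a kernel command line into {key: value} pairs;
        bare flags (e.g. flatcar.autologin) map to ''."""
        params = {}
        for token in text.split():
            key, _, value = token.partition("=")
            params[key] = value  # later duplicates overwrite earlier ones
        return params

    with open("/proc/cmdline") as f:
        params = parse_cmdline(f.read())
    print(params.get("verity.usrhash"))  # root hash dm-verity must match
    print(params.get("root"))            # -> LABEL=ROOT in the log above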
Dec 13 01:40:57.130319 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:40:57.134661 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:40:57.142326 dracut-pre-trigger[433]: rd.md=0: removing MD RAID activation
Dec 13 01:40:57.158301 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:40:57.161736 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:40:57.232550 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:40:57.236711 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:40:57.249653 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:40:57.250492 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:40:57.251637 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:40:57.252140 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:40:57.257767 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:40:57.265708 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:40:57.295611 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Dec 13 01:40:57.302072 kernel: vmw_pvscsi: using 64bit dma
Dec 13 01:40:57.302108 kernel: vmw_pvscsi: max_id: 16
Dec 13 01:40:57.302116 kernel: vmw_pvscsi: setting ring_pages to 8
Dec 13 01:40:57.306995 kernel: vmw_pvscsi: enabling reqCallThreshold
Dec 13 01:40:57.307027 kernel: vmw_pvscsi: driver-based request coalescing enabled
Dec 13 01:40:57.307035 kernel: vmw_pvscsi: using MSI-X
Dec 13 01:40:57.310867 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI
Dec 13 01:40:57.310899 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Dec 13 01:40:57.314343 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Dec 13 01:40:57.314377 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Dec 13 01:40:57.324992 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Dec 13 01:40:57.326090 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Dec 13 01:40:57.326743 kernel: libata version 3.00 loaded.
Dec 13 01:40:57.329625 kernel: ata_piix 0000:00:07.1: version 2.13
Dec 13 01:40:57.337719 kernel: scsi host1: ata_piix
Dec 13 01:40:57.337795 kernel: scsi host2: ata_piix
Dec 13 01:40:57.337857 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:40:57.337866 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Dec 13 01:40:57.337873 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Dec 13 01:40:57.337880 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Dec 13 01:40:57.341804 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:40:57.341879 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:40:57.342056 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:40:57.342157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:40:57.342228 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:40:57.342397 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
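[Editor's note: the probe messages above create three SCSI hosts, host0 for the PVSCSI adapter and host1/host2 for ata_piix. A minimal sketch listing them back from sysfs, assuming the common proc_name attribute is exposed.]

    import glob, os

    # Map each SCSI host the kernel created to its driver name, e.g.
    # host0 -> vmw_pvscsi, host1/host2 -> ata_piix on the VM in this log.
    for host in sorted(glob.glob("/sys/class/scsi_host/host*")):
        try:
            with open(os.path.join(host, "proc_name")) as f:
                driver = f.read().strip()
        except OSError:
            driver = "?"
        print(os.path.basename(host), "->", driver)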
Dec 13 01:40:57.346733 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:40:57.357337 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:40:57.364721 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:40:57.375110 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:40:57.503615 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Dec 13 01:40:57.510639 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Dec 13 01:40:57.519734 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:40:57.519800 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:40:57.531920 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Dec 13 01:40:57.539150 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 01:40:57.539230 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Dec 13 01:40:57.539292 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Dec 13 01:40:57.539352 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Dec 13 01:40:57.539411 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Dec 13 01:40:57.544815 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:40:57.544841 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:40:57.544855 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 01:40:57.544967 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 01:40:57.574133 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Dec 13 01:40:57.576607 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (490)
Dec 13 01:40:57.580836 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Dec 13 01:40:57.581664 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (482)
Dec 13 01:40:57.587184 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Dec 13 01:40:57.589760 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Dec 13 01:40:57.589984 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Dec 13 01:40:57.596703 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:40:57.622931 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:40:57.626633 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:40:58.636638 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:40:58.637642 disk-uuid[593]: The operation has completed successfully.
Dec 13 01:40:58.673102 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:40:58.673163 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:40:58.683815 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:40:58.685636 sh[613]: Success
Dec 13 01:40:58.693609 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 01:40:58.735119 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:40:58.745288 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:40:58.746818 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
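[Editor's note: systemd waits above on /dev/disk/by-label/EFI-SYSTEM, OEM and ROOT before proceeding; those are udev-maintained symlinks. A short sketch that resolves them the same way, assuming the standard /dev/disk layout.]

    import os

    # Resolve the udev by-label symlinks the boot waits for
    # (EFI-SYSTEM, OEM, ROOT in the log above) to real block devices.
    BY_LABEL = "/dev/disk/by-label"

    for label in sorted(os.listdir(BY_LABEL)):
        target = os.path.realpath(os.path.join(BY_LABEL, label))
        print(f"{label} -> {target}")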
Dec 13 01:40:58.763228 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:40:58.763257 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:40:58.763266 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:40:58.763274 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:40:58.763281 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:40:58.769604 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:40:58.770459 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:40:58.775678 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Dec 13 01:40:58.776718 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:40:58.802954 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:40:58.802993 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:40:58.803002 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:40:58.821619 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:40:58.826579 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:40:58.829290 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:40:58.835492 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:40:58.843608 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:40:58.848754 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Dec 13 01:40:58.849557 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:40:58.906948 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:40:58.909992 ignition[674]: Ignition 2.19.0
Dec 13 01:40:58.910080 ignition[674]: Stage: fetch-offline
Dec 13 01:40:58.910767 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:40:58.910101 ignition[674]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:40:58.910106 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 01:40:58.910418 ignition[674]: parsed url from cmdline: ""
Dec 13 01:40:58.910421 ignition[674]: no config URL provided
Dec 13 01:40:58.910424 ignition[674]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:40:58.910429 ignition[674]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:40:58.911918 ignition[674]: config successfully fetched
Dec 13 01:40:58.912231 ignition[674]: parsing config with SHA512: 73ab89af34b238c3c3ac35b8ae1140af7b79079db2b05e83c5b380b093a6f8be8659ad68929e0facbaa526368a26229a588fd86e51a1dcc0bd27c8c2f3fce9bb
Dec 13 01:40:58.914852 unknown[674]: fetched base config from "system"
Dec 13 01:40:58.915085 ignition[674]: fetch-offline: fetch-offline passed
Dec 13 01:40:58.914857 unknown[674]: fetched user config from "vmware"
Dec 13 01:40:58.915121 ignition[674]: Ignition finished successfully
Dec 13 01:40:58.916125 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
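[Editor's note: Ignition above logs the SHA512 of the config it is about to parse ("parsing config with SHA512: 73ab89af..."). A digest like that can be recomputed with hashlib; note that on this VM the user config came from the VMware guestinfo channel rather than a file, so the path below is purely illustrative, and whether Ignition hashes the raw bytes or a normalized form is an assumption.]

    import hashlib

    # Recompute a SHA512 digest like the one Ignition printed, given a
    # local copy of the config (illustrative path, see note above).
    with open("config.ign", "rb") as f:
        print(hashlib.sha512(f.read()).hexdigest())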
Dec 13 01:40:58.923689 systemd-networkd[808]: lo: Link UP
Dec 13 01:40:58.923695 systemd-networkd[808]: lo: Gained carrier
Dec 13 01:40:58.924356 systemd-networkd[808]: Enumeration completed
Dec 13 01:40:58.924614 systemd-networkd[808]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Dec 13 01:40:58.924630 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:40:58.924760 systemd[1]: Reached target network.target - Network.
Dec 13 01:40:58.924838 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:40:58.928242 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Dec 13 01:40:58.928351 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Dec 13 01:40:58.928108 systemd-networkd[808]: ens192: Link UP
Dec 13 01:40:58.928111 systemd-networkd[808]: ens192: Gained carrier
Dec 13 01:40:58.929685 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:40:58.937978 ignition[811]: Ignition 2.19.0
Dec 13 01:40:58.937984 ignition[811]: Stage: kargs
Dec 13 01:40:58.938102 ignition[811]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:40:58.938109 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 01:40:58.938644 ignition[811]: kargs: kargs passed
Dec 13 01:40:58.938673 ignition[811]: Ignition finished successfully
Dec 13 01:40:58.939716 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:40:58.943832 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:40:58.950692 ignition[818]: Ignition 2.19.0
Dec 13 01:40:58.950699 ignition[818]: Stage: disks
Dec 13 01:40:58.950807 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:40:58.950813 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 01:40:58.951331 ignition[818]: disks: disks passed
Dec 13 01:40:58.951359 ignition[818]: Ignition finished successfully
Dec 13 01:40:58.952016 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:40:58.952505 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:40:58.952748 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:40:58.952847 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:40:58.952931 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:40:58.953012 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:40:58.956696 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:40:58.966734 systemd-fsck[826]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 01:40:58.967666 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:40:58.971693 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:40:59.026790 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:40:59.027130 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:40:59.027499 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:40:59.036688 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:40:59.038111 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
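[Editor's note: the systemd-fsck summary above, "ROOT: clean, 14/1628000 files, 120691/1617920 blocks", encodes inode and block utilization of the ROOT filesystem. A worked parse of that exact message; the format is e2fsck's and could change between versions.]

    import re

    line = "ROOT: clean, 14/1628000 files, 120691/1617920 blocks"
    m = re.search(r"(\d+)/(\d+) files, (\d+)/(\d+) blocks", line)
    used_inodes, total_inodes, used_blocks, total_blocks = map(int, m.groups())
    print(f"inodes: {100 * used_inodes / total_inodes:.2f}% used")
    print(f"blocks: {100 * used_blocks / total_blocks:.2f}% used")
    # -> roughly 0.00% of inodes and 7.46% of blocks on this image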
Dec 13 01:40:59.038397 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:40:59.038427 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:40:59.038442 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:40:59.042404 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:40:59.046199 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (834)
Dec 13 01:40:59.046232 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:40:59.047850 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:40:59.047867 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:40:59.050855 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:40:59.050782 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:40:59.052380 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:40:59.079276 initrd-setup-root[858]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:40:59.082381 initrd-setup-root[865]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:40:59.085459 initrd-setup-root[872]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:40:59.088285 initrd-setup-root[879]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:40:59.159616 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:40:59.164737 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:40:59.167315 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:40:59.172707 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:40:59.185784 ignition[946]: INFO : Ignition 2.19.0
Dec 13 01:40:59.185784 ignition[946]: INFO : Stage: mount
Dec 13 01:40:59.186617 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:40:59.186617 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 01:40:59.186617 ignition[946]: INFO : mount: mount passed
Dec 13 01:40:59.186617 ignition[946]: INFO : Ignition finished successfully
Dec 13 01:40:59.188240 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:40:59.193757 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:40:59.194050 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:40:59.759303 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:40:59.765861 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:40:59.774620 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (960)
Dec 13 01:40:59.777468 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:40:59.777489 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:40:59.777499 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:40:59.781612 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:40:59.783100 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
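[Editor's note: initrd-setup-root above shells out to cut against /sysroot/etc/passwd and friends; the "No such file or directory" lines are expected on a first boot where those files have not been created yet. For reference, the field extraction that cut performs on a passwd-style file, here `cut -d: -f1`, is equivalent to this sketch (not the actual initrd-setup-root script).]

    # Equivalent of `cut -d: -f1 /etc/passwd`: take the first
    # colon-separated field (the user name) from each line.
    with open("/etc/passwd") as f:
        for line in f:
            print(line.split(":")[0])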
Dec 13 01:40:59.802035 ignition[977]: INFO : Ignition 2.19.0
Dec 13 01:40:59.802615 ignition[977]: INFO : Stage: files
Dec 13 01:40:59.803614 ignition[977]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:40:59.803614 ignition[977]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 01:40:59.803614 ignition[977]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:40:59.804311 ignition[977]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:40:59.804499 ignition[977]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:40:59.807042 ignition[977]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:40:59.807226 ignition[977]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:40:59.807412 ignition[977]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:40:59.807345 unknown[977]: wrote ssh authorized keys file for user: core
Dec 13 01:40:59.809160 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:40:59.809383 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:40:59.846829 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:40:59.919085 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:40:59.919347 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:40:59.919347 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:40:59.919347 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:40:59.919347 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:40:59.919347 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:40:59.920096 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:40:59.920096 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:40:59.920096 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:40:59.920096 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:40:59.920096 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:40:59.920096 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:40:59.920096 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:40:59.920096 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:40:59.920096 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 01:41:00.200765 systemd-networkd[808]: ens192: Gained IPv6LL
Dec 13 01:41:00.254228 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 01:41:00.448093 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:41:00.448377 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Dec 13 01:41:00.448377 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Dec 13 01:41:00.448377 ignition[977]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 01:41:00.448876 ignition[977]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:41:00.448876 ignition[977]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:41:00.448876 ignition[977]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 01:41:00.448876 ignition[977]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 13 01:41:00.448876 ignition[977]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:41:00.448876 ignition[977]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:41:00.448876 ignition[977]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 13 01:41:00.448876 ignition[977]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:41:00.484223 ignition[977]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:41:00.486486 ignition[977]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:41:00.486994 ignition[977]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:41:00.486994 ignition[977]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:41:00.486994 ignition[977]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:41:00.486994 ignition[977]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:41:00.486994 ignition[977]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:41:00.486994 ignition[977]: INFO : files: files passed
Dec 13 01:41:00.486994 ignition[977]: INFO : Ignition finished successfully
Dec 13 01:41:00.487732 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:41:00.490699 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:41:00.491897 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:41:00.492960 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:41:00.493154 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:41:00.498174 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:41:00.498174 initrd-setup-root-after-ignition[1007]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:41:00.499081 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:41:00.500068 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:41:00.500559 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:41:00.504692 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:41:00.516104 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:41:00.516158 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:41:00.516570 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:41:00.516695 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:41:00.516907 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:41:00.517320 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:41:00.526977 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:41:00.530708 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:41:00.535988 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:41:00.536154 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:41:00.536374 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:41:00.536564 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:41:00.536644 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:41:00.536906 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:41:00.537146 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:41:00.537331 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:41:00.537518 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:41:00.537726 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:41:00.538099 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:41:00.538302 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:41:00.538538 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:41:00.538751 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:41:00.538943 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:41:00.539110 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:41:00.539171 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:41:00.539504 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:41:00.539688 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:41:00.539855 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:41:00.539897 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:41:00.540080 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:41:00.540140 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:41:00.540395 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:41:00.540456 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:41:00.540731 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:41:00.540874 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:41:00.545616 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:41:00.545793 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:41:00.545987 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:41:00.546183 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:41:00.546268 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:41:00.546464 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:41:00.546509 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:41:00.546794 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:41:00.546875 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:41:00.547135 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:41:00.547210 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:41:00.554809 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:41:00.556445 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:41:00.556590 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:41:00.556701 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:41:00.556948 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:41:00.557029 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:41:00.561050 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:41:00.561159 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:41:00.565055 ignition[1031]: INFO : Ignition 2.19.0 Dec 13 01:41:00.565866 ignition[1031]: INFO : Stage: umount Dec 13 01:41:00.566125 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:41:00.566275 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 01:41:00.567056 ignition[1031]: INFO : umount: umount passed Dec 13 01:41:00.567224 ignition[1031]: INFO : Ignition finished successfully Dec 13 01:41:00.567751 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:41:00.567809 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:41:00.568181 systemd[1]: Stopped target network.target - Network. 
Dec 13 01:41:00.568368 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:41:00.568399 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:41:00.568519 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:41:00.568543 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:41:00.568672 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:41:00.568694 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:41:00.568848 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:41:00.568876 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:41:00.569113 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:41:00.569274 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:41:00.573479 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:41:00.576783 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:41:00.576920 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:41:00.577186 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:41:00.577238 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:41:00.578049 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:41:00.578073 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:41:00.581693 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:41:00.581803 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:41:00.581833 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:41:00.581960 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Dec 13 01:41:00.581981 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Dec 13 01:41:00.582093 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:41:00.582114 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:41:00.582215 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:41:00.582235 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:41:00.582336 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:41:00.582356 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:41:00.582502 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:41:00.589874 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:41:00.589949 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:41:00.592006 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:41:00.592086 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:41:00.592329 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:41:00.592350 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:41:00.592467 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:41:00.592484 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:41:00.592611 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Dec 13 01:41:00.592636 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:41:00.592888 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:41:00.592911 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:41:00.593226 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:41:00.593249 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:41:00.597700 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:41:00.598656 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:41:00.598688 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:41:00.598806 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:41:00.598827 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:41:00.598932 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:41:00.598952 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:41:00.599053 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:41:00.599074 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:41:00.600861 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:41:00.600910 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:41:00.624029 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:41:00.624095 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:41:00.624524 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:41:00.624649 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:41:00.624679 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:41:00.628706 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:41:00.638104 systemd[1]: Switching root. Dec 13 01:41:00.681065 systemd-journald[215]: Journal stopped Dec 13 01:41:01.787276 systemd-journald[215]: Received SIGTERM from PID 1 (systemd). Dec 13 01:41:01.787297 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:41:01.787305 kernel: SELinux: policy capability open_perms=1 Dec 13 01:41:01.787311 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:41:01.787316 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:41:01.787321 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:41:01.787328 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:41:01.787334 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:41:01.787339 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:41:01.787345 systemd[1]: Successfully loaded SELinux policy in 61.006ms. Dec 13 01:41:01.787352 kernel: audit: type=1403 audit(1734054061.300:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:41:01.787358 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.609ms. 
Dec 13 01:41:01.787365 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:41:01.787373 systemd[1]: Detected virtualization vmware. Dec 13 01:41:01.787379 systemd[1]: Detected architecture x86-64. Dec 13 01:41:01.787386 systemd[1]: Detected first boot. Dec 13 01:41:01.787392 systemd[1]: Initializing machine ID from random generator. Dec 13 01:41:01.787400 zram_generator::config[1073]: No configuration found. Dec 13 01:41:01.787407 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:41:01.787415 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Dec 13 01:41:01.787422 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Dec 13 01:41:01.787429 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:41:01.787435 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:41:01.787441 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:41:01.787449 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:41:01.787456 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:41:01.787462 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:41:01.787468 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:41:01.787475 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:41:01.787487 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:41:01.787496 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:41:01.787504 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:41:01.787510 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:41:01.787517 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:41:01.787523 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:41:01.787530 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:41:01.787537 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:41:01.787543 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:41:01.787550 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:41:01.787557 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:41:01.787565 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:41:01.787573 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:41:01.787580 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
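
The "Ignoring unknown escape sequences" warning above comes from systemd's unit-file lexer: inside a quoted ExecStart= argument a backslash starts a C-style escape, and \K and \d (grep -P regex syntax) are not escapes systemd recognizes, so it warns and passes them through literally. Reconstructed from the quoted text, line 11 of coreos-metadata.service is approximately the fragment below (shown wrapped; in the unit it is one logical line, and OUTPUT is evidently defined elsewhere in the unit). Doubling the backslashes would silence the parser while leaving the pattern grep sees unchanged:

    [Service]
    # Approximate reconstruction based on the text systemd quotes in the warning;
    # the real unit may escape the nested quotes differently.
    # \\K and \\d reach the shell as \K and \d, so grep -P sees the same pattern.
    ExecStart=/usr/bin/sh -c 'echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \\K[\\d.]+")
    COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \\K[\\d.]+")" > ${OUTPUT}'
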
Dec 13 01:41:01.787587 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:41:01.787600 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:41:01.787611 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:41:01.787618 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:41:01.787627 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:41:01.787634 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:41:01.787640 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:41:01.787647 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:41:01.787654 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:41:01.787662 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:41:01.787668 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:41:01.787675 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:41:01.787682 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:41:01.787688 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:41:01.787695 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:41:01.787702 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:41:01.787709 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:41:01.787717 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:41:01.787725 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:41:01.787732 systemd[1]: Reached target machines.target - Containers. Dec 13 01:41:01.787739 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:41:01.787746 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Dec 13 01:41:01.787753 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:41:01.787759 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:41:01.787766 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:41:01.787774 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:41:01.787781 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:41:01.787788 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:41:01.787794 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:41:01.787801 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:41:01.787808 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:41:01.787814 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:41:01.787821 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:41:01.787828 systemd[1]: Stopped systemd-fsck-usr.service. 
Dec 13 01:41:01.787835 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:41:01.787842 kernel: fuse: init (API version 7.39) Dec 13 01:41:01.787848 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:41:01.787855 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:41:01.787862 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:41:01.787869 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:41:01.787876 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:41:01.787883 systemd[1]: Stopped verity-setup.service. Dec 13 01:41:01.787891 kernel: ACPI: bus type drm_connector registered Dec 13 01:41:01.787897 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:41:01.787904 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:41:01.787910 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:41:01.787917 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:41:01.787934 systemd-journald[1160]: Collecting audit messages is disabled. Dec 13 01:41:01.787950 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:41:01.787957 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:41:01.787964 systemd-journald[1160]: Journal started Dec 13 01:41:01.787978 systemd-journald[1160]: Runtime Journal (/run/log/journal/1df0cacddb1c4157a36b656d05ed0ba8) is 4.8M, max 38.6M, 33.8M free. Dec 13 01:41:01.627988 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:41:01.645282 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 01:41:01.645642 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:41:01.788453 jq[1140]: true Dec 13 01:41:01.789772 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:41:01.789990 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:41:01.791808 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:41:01.792052 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:41:01.792136 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:41:01.792372 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:41:01.792444 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:41:01.793776 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:41:01.793857 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:41:01.794086 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:41:01.794159 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:41:01.794388 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:41:01.794459 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:41:01.794925 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:41:01.795167 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
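
The modprobe@configfs, @dm_mod, @drm, @efi_pstore, @fuse and @loop jobs above are all instances of a single template unit, with %i carrying the instance name into the command line. A simplified sketch of how such a template works (not a verbatim copy of the unit systemd ships):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # The '-' prefix keeps a missing or built-in module from failing the unit,
    # which is why each instance above simply "finishes" once its module is in.
    ExecStart=-/usr/sbin/modprobe -abq %i

An instance is started as, for example, "systemctl start modprobe@dm_mod.service".
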
Dec 13 01:41:01.797647 kernel: loop: module loaded Dec 13 01:41:01.798316 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:41:01.798413 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:41:01.809693 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:41:01.812816 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:41:01.819037 jq[1175]: true Dec 13 01:41:01.812946 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:41:01.812966 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:41:01.814364 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:41:01.816693 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:41:01.819418 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:41:01.819835 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:41:01.822034 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:41:01.830318 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:41:01.830468 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:41:01.833141 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:41:01.833325 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:41:01.839707 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:41:01.842008 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:41:01.845738 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:41:01.847959 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:41:01.848269 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:41:01.850772 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:41:01.850966 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:41:01.851208 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:41:01.856085 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:41:01.861742 systemd-journald[1160]: Time spent on flushing to /var/log/journal/1df0cacddb1c4157a36b656d05ed0ba8 is 128.195ms for 1835 entries. Dec 13 01:41:01.861742 systemd-journald[1160]: System Journal (/var/log/journal/1df0cacddb1c4157a36b656d05ed0ba8) is 8.0M, max 584.8M, 576.8M free. Dec 13 01:41:01.997509 systemd-journald[1160]: Received client request to flush runtime journal. Dec 13 01:41:01.997545 kernel: loop0: detected capacity change from 0 to 140768 Dec 13 01:41:01.873632 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:41:01.877657 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
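
The journal-flush step above moves the runtime journal out of /run/log/journal into persistent storage under /var/log/journal; the "max 38.6M" and "max 584.8M" figures are limits journald derives from the size of each backing filesystem. Explicit caps can be set in journald.conf instead; a minimal sketch with assumed values, not taken from this system:

    [Journal]
    Storage=persistent
    # Override the derived caps seen in the log (38.6M runtime, 584.8M persistent).
    RuntimeMaxUse=32M
    SystemMaxUse=512M
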
Dec 13 01:41:01.886089 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:41:01.941310 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Dec 13 01:41:01.941320 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Dec 13 01:41:01.945085 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:41:01.950868 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:41:01.951171 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:41:01.961381 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:41:01.963716 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:41:01.982858 udevadm[1226]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:41:01.987859 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:41:01.988614 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:41:02.001856 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:41:02.011703 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:41:02.017916 ignition[1209]: Ignition 2.19.0 Dec 13 01:41:02.018146 ignition[1209]: deleting config from guestinfo properties Dec 13 01:41:02.022111 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:41:02.022449 ignition[1209]: Successfully deleted config Dec 13 01:41:02.032789 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:41:02.033161 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Dec 13 01:41:02.037613 kernel: loop1: detected capacity change from 0 to 142488 Dec 13 01:41:02.045832 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Dec 13 01:41:02.045845 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Dec 13 01:41:02.049502 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:41:02.160614 kernel: loop2: detected capacity change from 0 to 2976 Dec 13 01:41:02.200610 kernel: loop3: detected capacity change from 0 to 205544 Dec 13 01:41:02.253644 kernel: loop4: detected capacity change from 0 to 140768 Dec 13 01:41:02.288793 kernel: loop5: detected capacity change from 0 to 142488 Dec 13 01:41:02.316683 kernel: loop6: detected capacity change from 0 to 2976 Dec 13 01:41:02.336643 kernel: loop7: detected capacity change from 0 to 205544 Dec 13 01:41:02.358151 (sd-merge)[1246]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Dec 13 01:41:02.359744 (sd-merge)[1246]: Merged extensions into '/usr'. Dec 13 01:41:02.362934 systemd[1]: Reloading requested from client PID 1207 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:41:02.363006 systemd[1]: Reloading... Dec 13 01:41:02.415612 zram_generator::config[1280]: No configuration found. Dec 13 01:41:02.499835 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." 
| grep -Po "inet \K[\d.]+") Dec 13 01:41:02.515610 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:41:02.547467 systemd[1]: Reloading finished in 184 ms. Dec 13 01:41:02.550049 ldconfig[1202]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:41:02.569136 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:41:02.569444 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:41:02.569696 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:41:02.575763 systemd[1]: Starting ensure-sysext.service... Dec 13 01:41:02.576797 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:41:02.578695 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:41:02.585588 systemd[1]: Reloading requested from client PID 1329 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:41:02.585620 systemd[1]: Reloading... Dec 13 01:41:02.598747 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:41:02.598953 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:41:02.599458 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:41:02.599722 systemd-tmpfiles[1330]: ACLs are not supported, ignoring. Dec 13 01:41:02.599762 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Dec 13 01:41:02.599764 systemd-tmpfiles[1330]: ACLs are not supported, ignoring. Dec 13 01:41:02.603711 systemd-tmpfiles[1330]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:41:02.603999 systemd-tmpfiles[1330]: Skipping /boot Dec 13 01:41:02.611268 systemd-tmpfiles[1330]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:41:02.611353 systemd-tmpfiles[1330]: Skipping /boot Dec 13 01:41:02.649641 zram_generator::config[1354]: No configuration found. Dec 13 01:41:02.689612 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1369) Dec 13 01:41:02.700613 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1369) Dec 13 01:41:02.718608 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 01:41:02.725618 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:41:02.755305 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Dec 13 01:41:02.770761 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1362) Dec 13 01:41:02.771493 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:41:02.807307 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:41:02.807423 systemd[1]: Reloading finished in 221 ms. 
Dec 13 01:41:02.816773 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:41:02.823233 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:41:02.828674 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Dec 13 01:41:02.831618 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Dec 13 01:41:02.832636 kernel: Guest personality initialized and is active Dec 13 01:41:02.835075 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Dec 13 01:41:02.835104 kernel: Initialized host personality Dec 13 01:41:02.839774 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Dec 13 01:41:02.840230 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:41:02.844755 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:41:02.847731 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:41:02.849282 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:41:02.849929 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:41:02.855839 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:41:02.856021 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:41:02.856829 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:41:02.859733 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:41:02.862702 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:41:02.864134 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:41:02.864957 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:41:02.865068 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:41:02.865815 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:41:02.867635 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:41:02.867976 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:41:02.868051 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:41:02.870063 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:41:02.876864 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:41:02.877849 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:41:02.880016 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:41:02.880179 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:41:02.880275 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:41:02.881759 systemd[1]: Finished ensure-sysext.service. 
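
ensure-sysext.service here and the earlier sd-merge step ("Merged extensions into '/usr'") belong to systemd-sysext: each .raw image under /etc/extensions or /var/lib/extensions carries an extension-release file that must match the host's os-release (or declare _any) before its /usr tree is overlaid. A minimal sketch of usr/lib/extension-release.d/extension-release.kubernetes inside the kubernetes image; the exact keys and values depend on how sysext-bakery built it:

    ID=flatcar
    SYSEXT_LEVEL=1.0
    ARCHITECTURE=x86-64
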
Dec 13 01:41:02.885355 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:41:02.889706 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:41:02.894823 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:41:02.904934 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:41:02.905634 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:41:02.908016 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:41:02.908121 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:41:02.908461 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:41:02.908729 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:41:02.908820 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:41:02.911374 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:41:02.911412 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:41:02.923085 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:41:02.928818 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:41:02.929156 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:41:02.929991 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:41:02.935645 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:41:02.935881 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:41:02.936621 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 01:41:02.953078 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:41:02.955644 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:41:02.957778 augenrules[1490]: No rules Dec 13 01:41:02.958683 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:41:02.967727 (udev-worker)[1367]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Dec 13 01:41:02.969386 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:41:02.975820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:41:02.989229 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:41:02.996781 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:41:03.011938 lvm[1505]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:41:03.036833 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:41:03.037086 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:41:03.044885 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
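
systemd-timesyncd, started above, takes its servers from timesyncd.conf or per-link NTP= settings; the pool it contacts further down in the log, 0.flatcar.pool.ntp.org, is the distribution default. An equivalent explicit configuration would be roughly:

    [Time]
    # Matches the server the log later shows timesyncd contacting.
    NTP=0.flatcar.pool.ntp.org
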
Dec 13 01:41:03.045946 systemd-networkd[1456]: lo: Link UP Dec 13 01:41:03.045953 systemd-networkd[1456]: lo: Gained carrier Dec 13 01:41:03.046843 systemd-networkd[1456]: Enumeration completed Dec 13 01:41:03.046907 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:41:03.047098 systemd-networkd[1456]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Dec 13 01:41:03.049126 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:41:03.051090 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Dec 13 01:41:03.051250 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Dec 13 01:41:03.053957 systemd-networkd[1456]: ens192: Link UP Dec 13 01:41:03.054144 systemd-networkd[1456]: ens192: Gained carrier Dec 13 01:41:03.054427 lvm[1510]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:41:03.061649 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:41:03.061858 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:41:03.062029 systemd-timesyncd[1468]: Network configuration changed, trying to establish connection. Dec 13 01:41:03.068633 systemd-resolved[1459]: Positive Trust Anchors: Dec 13 01:41:03.068825 systemd-resolved[1459]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:41:03.068877 systemd-resolved[1459]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:41:03.072524 systemd-resolved[1459]: Defaulting to hostname 'linux'. Dec 13 01:41:03.073711 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:41:03.079029 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:41:03.079340 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:41:03.080182 systemd[1]: Reached target network.target - Network. Dec 13 01:41:03.080285 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:41:03.080398 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:41:03.080557 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:41:03.080751 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:41:03.080972 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:41:03.081116 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:41:03.081227 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:41:03.081337 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:41:03.081357 systemd[1]: Reached target paths.target - Path Units. 
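
ens192 is configured from /etc/systemd/network/00-vmware.network, the profile Ignition wrote in op(b) earlier. Its contents are never echoed into the log; for a DHCP-managed vmxnet3 NIC the file is plausibly as small as this sketch:

    [Match]
    Name=ens192

    [Network]
    DHCP=yes
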
Dec 13 01:41:03.081440 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:41:03.082380 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:41:03.083600 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:41:03.091894 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:41:03.092400 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:41:03.092541 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:41:03.092669 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:41:03.092787 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:41:03.092809 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:41:03.093617 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:41:03.094786 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:41:03.097687 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:41:03.098757 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:41:03.098864 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:41:03.103732 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:41:03.104675 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:41:03.107878 jq[1520]: false Dec 13 01:41:03.108622 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:41:03.109789 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:41:03.114155 dbus-daemon[1519]: [system] SELinux support is enabled Dec 13 01:41:03.115470 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:41:03.115871 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:41:03.116313 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:41:03.117884 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:41:03.120734 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:41:03.124659 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Dec 13 01:41:03.125017 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:41:03.127794 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:41:03.128798 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:41:03.133961 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:41:03.134077 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:41:03.137438 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:41:03.137460 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Dec 13 01:41:03.137629 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:41:03.137640 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:41:03.148052 extend-filesystems[1521]: Found loop4 Dec 13 01:41:03.148348 extend-filesystems[1521]: Found loop5 Dec 13 01:41:03.148495 extend-filesystems[1521]: Found loop6 Dec 13 01:41:03.150623 extend-filesystems[1521]: Found loop7 Dec 13 01:41:03.150623 extend-filesystems[1521]: Found sda Dec 13 01:41:03.150623 extend-filesystems[1521]: Found sda1 Dec 13 01:41:03.150623 extend-filesystems[1521]: Found sda2 Dec 13 01:41:03.150623 extend-filesystems[1521]: Found sda3 Dec 13 01:41:03.150623 extend-filesystems[1521]: Found usr Dec 13 01:41:03.150623 extend-filesystems[1521]: Found sda4 Dec 13 01:41:03.150623 extend-filesystems[1521]: Found sda6 Dec 13 01:41:03.150623 extend-filesystems[1521]: Found sda7 Dec 13 01:41:03.150623 extend-filesystems[1521]: Found sda9 Dec 13 01:41:03.150623 extend-filesystems[1521]: Checking size of /dev/sda9 Dec 13 01:41:03.148910 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:41:03.149031 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:42:12.691374 systemd-timesyncd[1468]: Contacted time server 69.64.225.2:123 (0.flatcar.pool.ntp.org). Dec 13 01:42:12.691403 systemd-timesyncd[1468]: Initial clock synchronization to Fri 2024-12-13 01:42:12.691291 UTC. Dec 13 01:42:12.692857 systemd-resolved[1459]: Clock change detected. Flushing caches. Dec 13 01:42:12.695919 jq[1529]: true Dec 13 01:42:12.700486 update_engine[1528]: I20241213 01:42:12.699541 1528 main.cc:92] Flatcar Update Engine starting Dec 13 01:42:12.705245 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:42:12.705417 (ntainerd)[1549]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:42:12.707609 update_engine[1528]: I20241213 01:42:12.706184 1528 update_check_scheduler.cc:74] Next update check in 3m38s Dec 13 01:42:12.707671 extend-filesystems[1521]: Old size kept for /dev/sda9 Dec 13 01:42:12.707671 extend-filesystems[1521]: Found sr0 Dec 13 01:42:12.710178 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:42:12.710543 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:42:12.712067 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:42:12.719366 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Dec 13 01:42:12.721771 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Dec 13 01:42:12.724161 jq[1552]: true Dec 13 01:42:12.732368 tar[1540]: linux-amd64/helm Dec 13 01:42:12.764009 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1358) Dec 13 01:42:12.778438 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. 
Dec 13 01:42:12.785112 unknown[1559]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Dec 13 01:42:12.786344 systemd-logind[1526]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:42:12.786358 systemd-logind[1526]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:42:12.789497 unknown[1559]: Core dump limit set to -1 Dec 13 01:42:12.790385 systemd-logind[1526]: New seat seat0. Dec 13 01:42:12.796999 kernel: NET: Registered PF_VSOCK protocol family Dec 13 01:42:12.806748 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:42:12.877170 locksmithd[1557]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:42:12.890812 bash[1582]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:42:12.891521 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:42:12.894112 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:42:12.963059 containerd[1549]: time="2024-12-13T01:42:12.960625461Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:42:13.004970 containerd[1549]: time="2024-12-13T01:42:13.004933435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:42:13.007213 containerd[1549]: time="2024-12-13T01:42:13.006816036Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:42:13.007213 containerd[1549]: time="2024-12-13T01:42:13.006838139Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:42:13.007213 containerd[1549]: time="2024-12-13T01:42:13.006849607Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:42:13.007213 containerd[1549]: time="2024-12-13T01:42:13.006943880Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:42:13.007213 containerd[1549]: time="2024-12-13T01:42:13.006954371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:42:13.007552 containerd[1549]: time="2024-12-13T01:42:13.007534387Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:42:13.007552 containerd[1549]: time="2024-12-13T01:42:13.007549000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:42:13.007668 containerd[1549]: time="2024-12-13T01:42:13.007653933Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:42:13.007668 containerd[1549]: time="2024-12-13T01:42:13.007664968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:42:13.007699 containerd[1549]: time="2024-12-13T01:42:13.007672820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:42:13.007699 containerd[1549]: time="2024-12-13T01:42:13.007678326Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:42:13.007730 containerd[1549]: time="2024-12-13T01:42:13.007721348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:42:13.007856 containerd[1549]: time="2024-12-13T01:42:13.007842953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:42:13.007920 containerd[1549]: time="2024-12-13T01:42:13.007907088Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:42:13.007920 containerd[1549]: time="2024-12-13T01:42:13.007918443Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:42:13.007986 containerd[1549]: time="2024-12-13T01:42:13.007966737Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:42:13.008017 containerd[1549]: time="2024-12-13T01:42:13.008005728Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:42:13.018473 containerd[1549]: time="2024-12-13T01:42:13.018442697Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:42:13.018538 containerd[1549]: time="2024-12-13T01:42:13.018485913Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:42:13.018538 containerd[1549]: time="2024-12-13T01:42:13.018497011Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:42:13.018538 containerd[1549]: time="2024-12-13T01:42:13.018510260Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:42:13.018538 containerd[1549]: time="2024-12-13T01:42:13.018521138Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:42:13.018636 containerd[1549]: time="2024-12-13T01:42:13.018623866Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:42:13.018787 containerd[1549]: time="2024-12-13T01:42:13.018773691Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:42:13.018852 containerd[1549]: time="2024-12-13T01:42:13.018840662Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:42:13.018875 containerd[1549]: time="2024-12-13T01:42:13.018852424Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:42:13.018890 containerd[1549]: time="2024-12-13T01:42:13.018877776Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Dec 13 01:42:13.018890 containerd[1549]: time="2024-12-13T01:42:13.018887186Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:42:13.018915 containerd[1549]: time="2024-12-13T01:42:13.018894850Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:42:13.018915 containerd[1549]: time="2024-12-13T01:42:13.018901673Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:42:13.018915 containerd[1549]: time="2024-12-13T01:42:13.018910302Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:42:13.018956 containerd[1549]: time="2024-12-13T01:42:13.018918662Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:42:13.018956 containerd[1549]: time="2024-12-13T01:42:13.018926053Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:42:13.018956 containerd[1549]: time="2024-12-13T01:42:13.018932942Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:42:13.018956 containerd[1549]: time="2024-12-13T01:42:13.018940442Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:42:13.019027 containerd[1549]: time="2024-12-13T01:42:13.018957797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:42:13.019027 containerd[1549]: time="2024-12-13T01:42:13.018966356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:42:13.019027 containerd[1549]: time="2024-12-13T01:42:13.018985966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:42:13.019027 containerd[1549]: time="2024-12-13T01:42:13.018998878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:42:13.019027 containerd[1549]: time="2024-12-13T01:42:13.019006151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:42:13.019027 containerd[1549]: time="2024-12-13T01:42:13.019013170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:42:13.019027 containerd[1549]: time="2024-12-13T01:42:13.019019450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:42:13.019027 containerd[1549]: time="2024-12-13T01:42:13.019026499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:42:13.019135 containerd[1549]: time="2024-12-13T01:42:13.019033324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:42:13.019135 containerd[1549]: time="2024-12-13T01:42:13.019042291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:42:13.019135 containerd[1549]: time="2024-12-13T01:42:13.019048978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Dec 13 01:42:13.019135 containerd[1549]: time="2024-12-13T01:42:13.019055868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:42:13.019135 containerd[1549]: time="2024-12-13T01:42:13.019067268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:42:13.019135 containerd[1549]: time="2024-12-13T01:42:13.019076189Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:42:13.019135 containerd[1549]: time="2024-12-13T01:42:13.019093005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:42:13.019135 containerd[1549]: time="2024-12-13T01:42:13.019100232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:42:13.019135 containerd[1549]: time="2024-12-13T01:42:13.019105948Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:42:13.019135 containerd[1549]: time="2024-12-13T01:42:13.019130027Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:42:13.019260 containerd[1549]: time="2024-12-13T01:42:13.019141406Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:42:13.019260 containerd[1549]: time="2024-12-13T01:42:13.019147756Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:42:13.019260 containerd[1549]: time="2024-12-13T01:42:13.019154238Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:42:13.019260 containerd[1549]: time="2024-12-13T01:42:13.019159493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:42:13.019260 containerd[1549]: time="2024-12-13T01:42:13.019166154Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:42:13.019260 containerd[1549]: time="2024-12-13T01:42:13.019171795Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:42:13.019260 containerd[1549]: time="2024-12-13T01:42:13.019177750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:42:13.019378 containerd[1549]: time="2024-12-13T01:42:13.019345176Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:42:13.019465 containerd[1549]: time="2024-12-13T01:42:13.019382442Z" level=info msg="Connect containerd service" Dec 13 01:42:13.019465 containerd[1549]: time="2024-12-13T01:42:13.019411672Z" level=info msg="using legacy CRI server" Dec 13 01:42:13.019465 containerd[1549]: time="2024-12-13T01:42:13.019416304Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:42:13.019505 containerd[1549]: time="2024-12-13T01:42:13.019470923Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:42:13.021776 containerd[1549]: time="2024-12-13T01:42:13.021745806Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:42:13.022018 
containerd[1549]: time="2024-12-13T01:42:13.021999443Z" level=info msg="Start subscribing containerd event" Dec 13 01:42:13.022050 containerd[1549]: time="2024-12-13T01:42:13.022028681Z" level=info msg="Start recovering state" Dec 13 01:42:13.022078 containerd[1549]: time="2024-12-13T01:42:13.022069112Z" level=info msg="Start event monitor" Dec 13 01:42:13.022095 containerd[1549]: time="2024-12-13T01:42:13.022080025Z" level=info msg="Start snapshots syncer" Dec 13 01:42:13.022095 containerd[1549]: time="2024-12-13T01:42:13.022086465Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:42:13.022095 containerd[1549]: time="2024-12-13T01:42:13.022090444Z" level=info msg="Start streaming server" Dec 13 01:42:13.022917 containerd[1549]: time="2024-12-13T01:42:13.022898774Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:42:13.022946 containerd[1549]: time="2024-12-13T01:42:13.022931181Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:42:13.023568 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:42:13.027650 containerd[1549]: time="2024-12-13T01:42:13.023865472Z" level=info msg="containerd successfully booted in 0.064299s" Dec 13 01:42:13.198366 tar[1540]: linux-amd64/LICENSE Dec 13 01:42:13.198425 tar[1540]: linux-amd64/README.md Dec 13 01:42:13.208644 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:42:13.285724 sshd_keygen[1541]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:42:13.299404 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:42:13.304162 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:42:13.307808 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:42:13.307934 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:42:13.309590 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:42:13.318192 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:42:13.325316 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:42:13.326252 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:42:13.326434 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:42:14.026090 systemd-networkd[1456]: ens192: Gained IPv6LL Dec 13 01:42:14.027231 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:42:14.028295 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:42:14.034206 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Dec 13 01:42:14.035813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:42:14.040115 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:42:14.068738 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:42:14.069811 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:42:14.070060 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Dec 13 01:42:14.071098 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:42:14.738820 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:42:14.739192 systemd[1]: Reached target multi-user.target - Multi-User System. 
Dec 13 01:42:14.741057 systemd[1]: Startup finished in 997ms (kernel) + 4.645s (initrd) + 3.962s (userspace) = 9.606s. Dec 13 01:42:14.745213 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:42:14.764239 login[1666]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 01:42:14.765570 login[1667]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 01:42:14.773133 systemd-logind[1526]: New session 2 of user core. Dec 13 01:42:14.773413 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:42:14.779349 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:42:14.781923 systemd-logind[1526]: New session 1 of user core. Dec 13 01:42:14.787725 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:42:14.791206 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:42:14.794722 (systemd)[1705]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:42:14.864851 systemd[1705]: Queued start job for default target default.target. Dec 13 01:42:14.868887 systemd[1705]: Created slice app.slice - User Application Slice. Dec 13 01:42:14.868913 systemd[1705]: Reached target paths.target - Paths. Dec 13 01:42:14.868927 systemd[1705]: Reached target timers.target - Timers. Dec 13 01:42:14.872127 systemd[1705]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:42:14.877528 systemd[1705]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:42:14.877889 systemd[1705]: Reached target sockets.target - Sockets. Dec 13 01:42:14.877902 systemd[1705]: Reached target basic.target - Basic System. Dec 13 01:42:14.877932 systemd[1705]: Reached target default.target - Main User Target. Dec 13 01:42:14.877955 systemd[1705]: Startup finished in 78ms. Dec 13 01:42:14.878009 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:42:14.879012 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:42:14.879566 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:42:15.194817 kubelet[1698]: E1213 01:42:15.194707 1698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:42:15.196234 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:42:15.196321 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:42:25.438999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:42:25.448126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:42:25.546965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
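
The kubelet exit above (status=1) is the stock kubeadm bootstrap sequence: /var/lib/kubelet/config.yaml does not exist until `kubeadm init` or `kubeadm join` writes it, so every start attempt fails the same config-file check and systemd schedules another restart. A minimal sketch of that check, assuming only the default config path from the log:

    #!/usr/bin/env python3
    # Sketch: the config-file check that makes kubelet exit with status 1
    # above; /var/lib/kubelet/config.yaml is normally created by
    # `kubeadm init` / `kubeadm join`, which has not run yet at this point.
    import os
    import sys

    CONFIG = "/var/lib/kubelet/config.yaml"

    if not os.path.exists(CONFIG):
        print(f"failed to load Kubelet config file {CONFIG}: "
              "no such file or directory", file=sys.stderr)
        sys.exit(1)   # systemd counts the failure and restarts the unit
    print("kubelet config present; startup would continue")
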
Dec 13 01:42:25.549448 (kubelet)[1749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:42:25.611132 kubelet[1749]: E1213 01:42:25.611087 1749 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:42:25.613349 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:42:25.613433 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:42:35.689152 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:42:35.696265 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:42:36.021794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:42:36.026181 (kubelet)[1764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:42:36.088346 kubelet[1764]: E1213 01:42:36.088313 1764 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:42:36.089527 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:42:36.089667 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:42:46.189132 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:42:46.197148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:42:46.538049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:42:46.540387 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:42:46.566124 kubelet[1779]: E1213 01:42:46.566089 1779 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:42:46.567714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:42:46.567855 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:42:52.929027 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:42:52.930172 systemd[1]: Started sshd@0-139.178.70.108:22-139.178.89.65:33994.service - OpenSSH per-connection server daemon (139.178.89.65:33994). Dec 13 01:42:52.965886 sshd[1787]: Accepted publickey for core from 139.178.89.65 port 33994 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:42:52.966754 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:42:52.969597 systemd-logind[1526]: New session 3 of user core. Dec 13 01:42:52.980154 systemd[1]: Started session-3.scope - Session 3 of User core. 
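
The `SHA256:aIxs...` token in each "Accepted publickey" record is OpenSSH's fingerprint format: the SHA-256 digest of the raw public-key blob, base64-encoded with the trailing padding stripped. A sketch that recomputes it from the first entry of an authorized_keys file, assuming such a file exists for the current user:

    #!/usr/bin/env python3
    # Sketch: recompute an OpenSSH SHA256 key fingerprint, as printed in
    # the "Accepted publickey ... SHA256:..." records above.
    import base64
    import hashlib
    import pathlib

    entry = (pathlib.Path.home() / ".ssh" / "authorized_keys").read_text().splitlines()[0]
    blob = base64.b64decode(entry.split()[1])   # the base64 key-blob field
    digest = hashlib.sha256(blob).digest()
    print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))  # OpenSSH drops '=' padding
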
Dec 13 01:42:53.037967 systemd[1]: Started sshd@1-139.178.70.108:22-139.178.89.65:34004.service - OpenSSH per-connection server daemon (139.178.89.65:34004). Dec 13 01:42:53.063873 sshd[1792]: Accepted publickey for core from 139.178.89.65 port 34004 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:42:53.064672 sshd[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:42:53.067214 systemd-logind[1526]: New session 4 of user core. Dec 13 01:42:53.074047 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:42:53.122196 sshd[1792]: pam_unix(sshd:session): session closed for user core Dec 13 01:42:53.129809 systemd[1]: sshd@1-139.178.70.108:22-139.178.89.65:34004.service: Deactivated successfully. Dec 13 01:42:53.130623 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:42:53.131077 systemd-logind[1526]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:42:53.132192 systemd[1]: Started sshd@2-139.178.70.108:22-139.178.89.65:34008.service - OpenSSH per-connection server daemon (139.178.89.65:34008). Dec 13 01:42:53.133000 systemd-logind[1526]: Removed session 4. Dec 13 01:42:53.168107 sshd[1799]: Accepted publickey for core from 139.178.89.65 port 34008 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:42:53.168758 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:42:53.171010 systemd-logind[1526]: New session 5 of user core. Dec 13 01:42:53.178050 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:42:53.223308 sshd[1799]: pam_unix(sshd:session): session closed for user core Dec 13 01:42:53.236369 systemd[1]: sshd@2-139.178.70.108:22-139.178.89.65:34008.service: Deactivated successfully. Dec 13 01:42:53.237519 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:42:53.238500 systemd-logind[1526]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:42:53.242377 systemd[1]: Started sshd@3-139.178.70.108:22-139.178.89.65:34010.service - OpenSSH per-connection server daemon (139.178.89.65:34010). Dec 13 01:42:53.243227 systemd-logind[1526]: Removed session 5. Dec 13 01:42:53.272053 sshd[1806]: Accepted publickey for core from 139.178.89.65 port 34010 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:42:53.272926 sshd[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:42:53.276873 systemd-logind[1526]: New session 6 of user core. Dec 13 01:42:53.288173 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:42:53.337247 sshd[1806]: pam_unix(sshd:session): session closed for user core Dec 13 01:42:53.346280 systemd[1]: sshd@3-139.178.70.108:22-139.178.89.65:34010.service: Deactivated successfully. Dec 13 01:42:53.347199 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:42:53.348132 systemd-logind[1526]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:42:53.353157 systemd[1]: Started sshd@4-139.178.70.108:22-139.178.89.65:34012.service - OpenSSH per-connection server daemon (139.178.89.65:34012). Dec 13 01:42:53.356252 systemd-logind[1526]: Removed session 6. 
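
Sessions 4 through 6 above are opened and torn down within fractions of a second each, which is typical of scripted SSH activity (one connection per command). A sketch that pairs the pam_unix open/close records to show each session's lifetime, assuming the journal is fed in one record per line (e.g. from journalctl --no-pager):

    #!/usr/bin/env python3
    # Sketch: pair "session opened"/"session closed" records for user core
    # from a journal stream on stdin and print each session's time span.
    import re
    import sys

    pat = re.compile(r"^(\w+ +\d+ [\d:.]+) sshd\[(\d+)\]: "
                     r"pam_unix\(sshd:session\): session (opened|closed) for user core")
    opened = {}
    for line in sys.stdin:
        m = pat.match(line)
        if not m:
            continue
        ts, pid, event = m.groups()
        if event == "opened":
            opened[pid] = ts
        else:
            print(f"sshd[{pid}]: opened {opened.pop(pid, '?')} closed {ts}")
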
Dec 13 01:42:53.382691 sshd[1813]: Accepted publickey for core from 139.178.89.65 port 34012 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:42:53.383541 sshd[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:42:53.386441 systemd-logind[1526]: New session 7 of user core. Dec 13 01:42:53.396143 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:42:53.454214 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:42:53.454418 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:42:53.466663 sudo[1816]: pam_unix(sudo:session): session closed for user root Dec 13 01:42:53.467737 sshd[1813]: pam_unix(sshd:session): session closed for user core Dec 13 01:42:53.478724 systemd[1]: sshd@4-139.178.70.108:22-139.178.89.65:34012.service: Deactivated successfully. Dec 13 01:42:53.480278 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:42:53.481256 systemd-logind[1526]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:42:53.484105 systemd[1]: Started sshd@5-139.178.70.108:22-139.178.89.65:34028.service - OpenSSH per-connection server daemon (139.178.89.65:34028). Dec 13 01:42:53.485216 systemd-logind[1526]: Removed session 7. Dec 13 01:42:53.510919 sshd[1821]: Accepted publickey for core from 139.178.89.65 port 34028 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:42:53.511816 sshd[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:42:53.514716 systemd-logind[1526]: New session 8 of user core. Dec 13 01:42:53.523071 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:42:53.572466 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:42:53.572674 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:42:53.575061 sudo[1825]: pam_unix(sudo:session): session closed for user root Dec 13 01:42:53.578578 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:42:53.578779 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:42:53.589196 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:42:53.590087 auditctl[1828]: No rules Dec 13 01:42:53.590404 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:42:53.590545 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:42:53.592266 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:42:53.610954 augenrules[1846]: No rules Dec 13 01:42:53.611744 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:42:53.612450 sudo[1824]: pam_unix(sudo:session): session closed for user root Dec 13 01:42:53.613320 sshd[1821]: pam_unix(sshd:session): session closed for user core Dec 13 01:42:53.618487 systemd[1]: sshd@5-139.178.70.108:22-139.178.89.65:34028.service: Deactivated successfully. Dec 13 01:42:53.619600 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:42:53.620311 systemd-logind[1526]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:42:53.621247 systemd[1]: Started sshd@6-139.178.70.108:22-139.178.89.65:34040.service - OpenSSH per-connection server daemon (139.178.89.65:34040). 
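
The audit-rules restart above ends with "No rules" from both auditctl and augenrules because the two sudo commands just deleted the only files in /etc/audit/rules.d/; augenrules builds the loaded set by concatenating /etc/audit/rules.d/*.rules. A sketch of that merge step, assuming the stock rules.d layout:

    #!/usr/bin/env python3
    # Sketch: the merge augenrules performs -- concatenate every
    # /etc/audit/rules.d/*.rules file, skipping blanks and comments.
    # With 80-selinux.rules and 99-default.rules removed above, the
    # merged set is empty, hence the "No rules" messages.
    import glob

    rules = []
    for path in sorted(glob.glob("/etc/audit/rules.d/*.rules")):
        with open(path) as f:
            rules += [l.strip() for l in f if l.strip() and not l.startswith("#")]
    print(f"{len(rules)} audit rule(s) would be loaded")
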
Dec 13 01:42:53.622040 systemd-logind[1526]: Removed session 8. Dec 13 01:42:53.650443 sshd[1854]: Accepted publickey for core from 139.178.89.65 port 34040 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:42:53.651312 sshd[1854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:42:53.654168 systemd-logind[1526]: New session 9 of user core. Dec 13 01:42:53.665127 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:42:53.714229 sudo[1857]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:42:53.714439 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:42:54.139235 (dockerd)[1873]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:42:54.139470 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:42:54.611002 dockerd[1873]: time="2024-12-13T01:42:54.610917923Z" level=info msg="Starting up" Dec 13 01:42:54.691344 dockerd[1873]: time="2024-12-13T01:42:54.691208360Z" level=info msg="Loading containers: start." Dec 13 01:42:54.751043 kernel: Initializing XFRM netlink socket Dec 13 01:42:54.793778 systemd-networkd[1456]: docker0: Link UP Dec 13 01:42:54.803998 dockerd[1873]: time="2024-12-13T01:42:54.803965511Z" level=info msg="Loading containers: done." Dec 13 01:42:54.815051 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2158420810-merged.mount: Deactivated successfully. Dec 13 01:42:54.816503 dockerd[1873]: time="2024-12-13T01:42:54.816244145Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:42:54.816503 dockerd[1873]: time="2024-12-13T01:42:54.816311842Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:42:54.816503 dockerd[1873]: time="2024-12-13T01:42:54.816369335Z" level=info msg="Daemon has completed initialization" Dec 13 01:42:54.830542 dockerd[1873]: time="2024-12-13T01:42:54.830497571Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:42:54.830845 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:42:55.555264 containerd[1549]: time="2024-12-13T01:42:55.555241880Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 01:42:56.275661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3923734360.mount: Deactivated successfully. Dec 13 01:42:56.689103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 01:42:56.694102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:42:56.755743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
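
dockerd's "Not using native diff for overlay2" warning above fires when the running kernel enables overlayfs redirect_dir (CONFIG_OVERLAY_FS_REDIRECT_DIR), since redirected directories can defeat a naive layer-diff walk. A sketch of checking the relevant module parameters, assuming a kernel that exposes them under /sys/module/overlay (missing files are reported as unknown):

    #!/usr/bin/env python3
    # Sketch: inspect the overlayfs settings behind dockerd's
    # "Not using native diff for overlay2" warning above.
    from pathlib import Path

    for p in ("/sys/module/overlay/parameters/redirect_dir",
              "/sys/module/overlay/parameters/metacopy"):
        f = Path(p)
        print(p, "=", f.read_text().strip() if f.exists() else "unknown")
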
Dec 13 01:42:56.758178 (kubelet)[2071]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:42:56.861397 kubelet[2071]: E1213 01:42:56.861328 2071 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:42:56.862860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:42:56.862985 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:42:57.497796 containerd[1549]: time="2024-12-13T01:42:57.497768526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:57.498644 containerd[1549]: time="2024-12-13T01:42:57.498201095Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975483" Dec 13 01:42:57.498644 containerd[1549]: time="2024-12-13T01:42:57.498607178Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:57.500248 containerd[1549]: time="2024-12-13T01:42:57.500228010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:57.500944 containerd[1549]: time="2024-12-13T01:42:57.500860452Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 1.945596637s" Dec 13 01:42:57.500944 containerd[1549]: time="2024-12-13T01:42:57.500879338Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Dec 13 01:42:57.502339 containerd[1549]: time="2024-12-13T01:42:57.502323741Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 01:42:58.139450 update_engine[1528]: I20241213 01:42:58.139035 1528 update_attempter.cc:509] Updating boot flags... 
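
The completed kube-apiserver pull above reports both the transferred size and the wall time, which gives the effective pull throughput directly: 27,972,283 bytes over about 1.95 s is roughly 14.4 MB/s. The same arithmetic, using the numbers from the log:

    #!/usr/bin/env python3
    # Effective throughput of the kube-apiserver:v1.31.4 pull, from the
    # size and duration in the containerd record above.
    size_bytes = 27_972_283
    duration_s = 1.945596637
    print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")   # -> 14.4 MB/s
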
Dec 13 01:42:58.168025 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2091) Dec 13 01:42:58.205012 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2091) Dec 13 01:42:58.817188 containerd[1549]: time="2024-12-13T01:42:58.817149861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:58.834182 containerd[1549]: time="2024-12-13T01:42:58.834153459Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702157" Dec 13 01:42:58.876375 containerd[1549]: time="2024-12-13T01:42:58.876355720Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:58.933092 containerd[1549]: time="2024-12-13T01:42:58.933054464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:58.933716 containerd[1549]: time="2024-12-13T01:42:58.933692393Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 1.431318767s" Dec 13 01:42:58.933758 containerd[1549]: time="2024-12-13T01:42:58.933717105Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Dec 13 01:42:58.934398 containerd[1549]: time="2024-12-13T01:42:58.934052493Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 13 01:43:00.458364 containerd[1549]: time="2024-12-13T01:43:00.458314044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:00.467156 containerd[1549]: time="2024-12-13T01:43:00.467114554Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652067" Dec 13 01:43:00.475012 containerd[1549]: time="2024-12-13T01:43:00.474919314Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:00.483602 containerd[1549]: time="2024-12-13T01:43:00.483567964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:00.484345 containerd[1549]: time="2024-12-13T01:43:00.484258250Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.55017942s" Dec 13 01:43:00.484345 
containerd[1549]: time="2024-12-13T01:43:00.484282251Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Dec 13 01:43:00.484873 containerd[1549]: time="2024-12-13T01:43:00.484669019Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 01:43:01.318243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1933492909.mount: Deactivated successfully. Dec 13 01:43:01.607713 containerd[1549]: time="2024-12-13T01:43:01.607649212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:01.608042 containerd[1549]: time="2024-12-13T01:43:01.608019205Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243" Dec 13 01:43:01.608496 containerd[1549]: time="2024-12-13T01:43:01.608474524Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:01.609504 containerd[1549]: time="2024-12-13T01:43:01.609491980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:01.609899 containerd[1549]: time="2024-12-13T01:43:01.609882454Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 1.125194327s" Dec 13 01:43:01.609927 containerd[1549]: time="2024-12-13T01:43:01.609900942Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 01:43:01.610406 containerd[1549]: time="2024-12-13T01:43:01.610260179Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:43:02.085598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount940222473.mount: Deactivated successfully. 
Dec 13 01:43:02.971995 containerd[1549]: time="2024-12-13T01:43:02.971768548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:02.972215 containerd[1549]: time="2024-12-13T01:43:02.972138277Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:43:02.972651 containerd[1549]: time="2024-12-13T01:43:02.972636237Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:02.974257 containerd[1549]: time="2024-12-13T01:43:02.974243326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:02.974903 containerd[1549]: time="2024-12-13T01:43:02.974889438Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.364595271s" Dec 13 01:43:02.974948 containerd[1549]: time="2024-12-13T01:43:02.974940311Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:43:02.975326 containerd[1549]: time="2024-12-13T01:43:02.975316975Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 01:43:03.421324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3291653870.mount: Deactivated successfully. 
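
Each pull above is recorded under two names, a "repo tag" (registry.k8s.io/coredns/coredns:v1.11.1) and a "repo digest" (the @sha256:... form), because containerd tracks both references for the same content. A sketch of splitting such references into their parts; the parsing here is simplified to the ref shapes that actually appear in this log:

    #!/usr/bin/env python3
    # Sketch: decompose the image references from the pull records above
    # into registry / repository / tag / digest.
    def parse(ref):
        digest = tag = None
        if "@" in ref:
            ref, digest = ref.split("@", 1)
        if ":" in ref.rsplit("/", 1)[-1]:    # a tag lives after the last "/"
            ref, tag = ref.rsplit(":", 1)
        registry, _, repo = ref.partition("/")
        return registry, repo, tag, digest

    for ref in ("registry.k8s.io/coredns/coredns:v1.11.1",
                "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"):
        print(parse(ref))
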
Dec 13 01:43:03.423659 containerd[1549]: time="2024-12-13T01:43:03.423631478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:03.424327 containerd[1549]: time="2024-12-13T01:43:03.424298008Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 13 01:43:03.424640 containerd[1549]: time="2024-12-13T01:43:03.424626103Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:03.425737 containerd[1549]: time="2024-12-13T01:43:03.425711086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:03.426538 containerd[1549]: time="2024-12-13T01:43:03.426187233Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 450.820599ms" Dec 13 01:43:03.426538 containerd[1549]: time="2024-12-13T01:43:03.426209365Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 13 01:43:03.426538 containerd[1549]: time="2024-12-13T01:43:03.426507057Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 01:43:03.868004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1077947995.mount: Deactivated successfully. Dec 13 01:43:05.936509 containerd[1549]: time="2024-12-13T01:43:05.936455323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:05.937660 containerd[1549]: time="2024-12-13T01:43:05.937575231Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Dec 13 01:43:05.937660 containerd[1549]: time="2024-12-13T01:43:05.937618497Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:05.940409 containerd[1549]: time="2024-12-13T01:43:05.940351784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:05.941876 containerd[1549]: time="2024-12-13T01:43:05.941483326Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.51496101s" Dec 13 01:43:05.941876 containerd[1549]: time="2024-12-13T01:43:05.941512532Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Dec 13 01:43:06.939342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
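
With the control-plane images pulled, the kubelet restart loop reaches attempt 5 above. The "Scheduled restart job" records land roughly 10 seconds apart, consistent with a Restart=on-failure unit using a RestartSec near 10 (an inference; the unit file itself is not part of this log). Checking the spacing from the logged timestamps:

    #!/usr/bin/env python3
    # Spacing of the "kubelet.service: Scheduled restart job" records in
    # this log (restart counters 1 through 5).
    from datetime import datetime

    stamps = ["01:42:25.438999", "01:42:35.689152", "01:42:46.189132",
              "01:42:56.689103", "01:43:06.939342"]
    ts = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
    for a, b in zip(ts, ts[1:]):
        print(f"{(b - a).total_seconds():.2f} s")   # ~10.25-10.50 s each
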
Dec 13 01:43:06.947753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:43:07.319059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:43:07.320534 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:43:07.357409 kubelet[2245]: E1213 01:43:07.357381 2245 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:43:07.358851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:43:07.358926 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:43:07.603017 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:43:07.613212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:43:07.634134 systemd[1]: Reloading requested from client PID 2260 ('systemctl') (unit session-9.scope)... Dec 13 01:43:07.634144 systemd[1]: Reloading... Dec 13 01:43:07.693014 zram_generator::config[2297]: No configuration found. Dec 13 01:43:07.753416 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Dec 13 01:43:07.768472 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:43:07.810373 systemd[1]: Reloading finished in 175 ms. Dec 13 01:43:07.909061 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:43:07.909132 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:43:07.909336 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:43:07.913204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:43:08.123160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:43:08.126031 (kubelet)[2365]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:43:08.146994 kubelet[2365]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:43:08.147314 kubelet[2365]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:43:08.147314 kubelet[2365]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
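
The deprecation warnings above all point at the same fix: --container-runtime-endpoint and --volume-plugin-dir should move into the KubeletConfiguration file rather than the command line. A sketch of the equivalent config fields, using the endpoint and plugin directory that appear elsewhere in this log; field names follow kubelet.config.k8s.io/v1beta1, and since JSON is a valid subset of YAML the output can serve as the config file directly:

    #!/usr/bin/env python3
    # Sketch: config-file equivalents of the deprecated kubelet flags
    # warned about above.
    import json

    cfg = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
        "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    }
    print(json.dumps(cfg, indent=2))
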
Dec 13 01:43:08.157671 kubelet[2365]: I1213 01:43:08.157518 2365 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:43:08.725810 kubelet[2365]: I1213 01:43:08.725787 2365 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:43:08.725810 kubelet[2365]: I1213 01:43:08.725804 2365 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:43:08.725946 kubelet[2365]: I1213 01:43:08.725936 2365 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:43:08.824073 kubelet[2365]: I1213 01:43:08.823556 2365 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:43:08.830965 kubelet[2365]: E1213 01:43:08.830926 2365 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:43:08.884907 kubelet[2365]: E1213 01:43:08.884864 2365 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:43:08.884907 kubelet[2365]: I1213 01:43:08.884901 2365 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:43:08.892509 kubelet[2365]: I1213 01:43:08.892486 2365 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:43:08.892596 kubelet[2365]: I1213 01:43:08.892558 2365 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:43:08.892700 kubelet[2365]: I1213 01:43:08.892672 2365 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:43:08.892836 kubelet[2365]: I1213 01:43:08.892699 2365 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:43:08.892923 kubelet[2365]: I1213 01:43:08.892842 2365 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:43:08.892923 kubelet[2365]: I1213 01:43:08.892851 2365 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:43:08.892981 kubelet[2365]: I1213 01:43:08.892927 2365 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:43:08.910129 kubelet[2365]: I1213 01:43:08.910039 2365 kubelet.go:408] "Attempting to sync node with API server" Dec 13 01:43:08.910129 kubelet[2365]: I1213 01:43:08.910058 2365 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:43:08.914427 kubelet[2365]: I1213 01:43:08.914282 2365 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:43:08.914427 kubelet[2365]: I1213 01:43:08.914300 2365 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:43:08.935530 kubelet[2365]: W1213 01:43:08.935408 2365 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Dec 13 01:43:08.935530 kubelet[2365]: E1213 01:43:08.935447 2365 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:43:08.943397 kubelet[2365]: I1213 01:43:08.943273 2365 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:43:08.949929 kubelet[2365]: I1213 01:43:08.949183 2365 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:43:08.949929 kubelet[2365]: W1213 01:43:08.949238 2365 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:43:08.949929 kubelet[2365]: I1213 01:43:08.949674 2365 server.go:1269] "Started kubelet" Dec 13 01:43:08.953597 kubelet[2365]: W1213 01:43:08.953562 2365 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Dec 13 01:43:08.953642 kubelet[2365]: E1213 01:43:08.953602 2365 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:43:08.954679 kubelet[2365]: I1213 01:43:08.954656 2365 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:43:08.958634 kubelet[2365]: I1213 01:43:08.958616 2365 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:43:08.964918 kubelet[2365]: I1213 01:43:08.964897 2365 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:43:08.967866 kubelet[2365]: I1213 01:43:08.967792 2365 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:43:08.967955 kubelet[2365]: I1213 01:43:08.967940 2365 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:43:08.973379 kubelet[2365]: I1213 01:43:08.973109 2365 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:43:08.978042 kubelet[2365]: I1213 01:43:08.977998 2365 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:43:08.978453 kubelet[2365]: E1213 01:43:08.978237 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:43:08.984922 kubelet[2365]: E1213 01:43:08.974291 2365 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.108:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18109918f19f6871 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:43:08.949661809 +0000 UTC m=+0.821504428,LastTimestamp:2024-12-13 01:43:08.949661809 +0000 UTC 
m=+0.821504428,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:43:08.984922 kubelet[2365]: E1213 01:43:08.984811 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="200ms" Dec 13 01:43:08.985314 kubelet[2365]: I1213 01:43:08.985130 2365 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:43:08.985314 kubelet[2365]: I1213 01:43:08.985178 2365 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:43:08.987730 kubelet[2365]: I1213 01:43:08.987561 2365 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:43:08.987730 kubelet[2365]: I1213 01:43:08.987614 2365 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:43:08.998223 kubelet[2365]: W1213 01:43:08.997515 2365 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Dec 13 01:43:08.998223 kubelet[2365]: E1213 01:43:08.997551 2365 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:43:08.998223 kubelet[2365]: I1213 01:43:08.997725 2365 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:43:09.002676 kubelet[2365]: E1213 01:43:09.002662 2365 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:43:09.014587 kubelet[2365]: I1213 01:43:09.014569 2365 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:43:09.016653 kubelet[2365]: I1213 01:43:09.016634 2365 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:43:09.016743 kubelet[2365]: I1213 01:43:09.016736 2365 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:43:09.016828 kubelet[2365]: I1213 01:43:09.016821 2365 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:43:09.016912 kubelet[2365]: E1213 01:43:09.016901 2365 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:43:09.017811 kubelet[2365]: W1213 01:43:09.017631 2365 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Dec 13 01:43:09.024090 kubelet[2365]: E1213 01:43:09.023959 2365 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:43:09.032277 kubelet[2365]: I1213 01:43:09.032163 2365 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:43:09.032277 kubelet[2365]: I1213 01:43:09.032171 2365 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:43:09.032277 kubelet[2365]: I1213 01:43:09.032180 2365 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:43:09.061632 kubelet[2365]: I1213 01:43:09.061623 2365 policy_none.go:49] "None policy: Start" Dec 13 01:43:09.075097 kubelet[2365]: I1213 01:43:09.062118 2365 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:43:09.075097 kubelet[2365]: I1213 01:43:09.062133 2365 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:43:09.078493 kubelet[2365]: E1213 01:43:09.078476 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:43:09.097045 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:43:09.108510 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:43:09.111695 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 01:43:09.117376 kubelet[2365]: E1213 01:43:09.117354 2365 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:43:09.118584 kubelet[2365]: I1213 01:43:09.118564 2365 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:43:09.118699 kubelet[2365]: I1213 01:43:09.118686 2365 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:43:09.118728 kubelet[2365]: I1213 01:43:09.118698 2365 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:43:09.119644 kubelet[2365]: I1213 01:43:09.119630 2365 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:43:09.119998 kubelet[2365]: E1213 01:43:09.119982 2365 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:43:09.185893 kubelet[2365]: E1213 01:43:09.185856 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="400ms" Dec 13 01:43:09.220353 kubelet[2365]: I1213 01:43:09.220331 2365 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 01:43:09.220649 kubelet[2365]: E1213 01:43:09.220629 2365 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Dec 13 01:43:09.326046 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice - libcontainer container kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice. Dec 13 01:43:09.349117 systemd[1]: Created slice kubepods-burstable-pod2485a99c1d53471f931ebeea768ac64e.slice - libcontainer container kubepods-burstable-pod2485a99c1d53471f931ebeea768ac64e.slice. Dec 13 01:43:09.354750 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice - libcontainer container kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice. 
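The "Failed to ensure lease exists, will retry" interval grows across these entries: 200ms, then 400ms here, 800ms and 1.6s further down, i.e. a doubling backoff. A sketch of that progression; the 7s ceiling is an assumption for illustration, not something visible in this log:

```go
// backoff.go - illustrates the doubling retry interval seen in the lease
// controller lines (200ms -> 400ms -> 800ms -> 1.6s ...).
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	maxInterval := 7 * time.Second // assumed ceiling, not taken from the log
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: retry in %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```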
Dec 13 01:43:09.422124 kubelet[2365]: I1213 01:43:09.422065 2365 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 01:43:09.422468 kubelet[2365]: E1213 01:43:09.422451 2365 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Dec 13 01:43:09.489292 kubelet[2365]: I1213 01:43:09.489274 2365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2485a99c1d53471f931ebeea768ac64e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2485a99c1d53471f931ebeea768ac64e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:43:09.489491 kubelet[2365]: I1213 01:43:09.489300 2365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:43:09.489491 kubelet[2365]: I1213 01:43:09.489318 2365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:43:09.489491 kubelet[2365]: I1213 01:43:09.489342 2365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2485a99c1d53471f931ebeea768ac64e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2485a99c1d53471f931ebeea768ac64e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:43:09.489491 kubelet[2365]: I1213 01:43:09.489359 2365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2485a99c1d53471f931ebeea768ac64e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2485a99c1d53471f931ebeea768ac64e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:43:09.489491 kubelet[2365]: I1213 01:43:09.489373 2365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:43:09.489636 kubelet[2365]: I1213 01:43:09.489393 2365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:43:09.489636 kubelet[2365]: I1213 01:43:09.489406 2365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 13 01:43:09.489636 kubelet[2365]: I1213 01:43:09.489416 2365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:43:09.586944 kubelet[2365]: E1213 01:43:09.586860 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="800ms" Dec 13 01:43:09.649099 containerd[1549]: time="2024-12-13T01:43:09.648951767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}" Dec 13 01:43:09.659556 containerd[1549]: time="2024-12-13T01:43:09.659363505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2485a99c1d53471f931ebeea768ac64e,Namespace:kube-system,Attempt:0,}" Dec 13 01:43:09.659556 containerd[1549]: time="2024-12-13T01:43:09.659439442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}" Dec 13 01:43:09.823776 kubelet[2365]: I1213 01:43:09.823758 2365 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 01:43:09.824179 kubelet[2365]: E1213 01:43:09.824164 2365 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Dec 13 01:43:10.108131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3871842682.mount: Deactivated successfully. 
Dec 13 01:43:10.110233 containerd[1549]: time="2024-12-13T01:43:10.110166094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:43:10.110914 containerd[1549]: time="2024-12-13T01:43:10.110689302Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:43:10.110914 containerd[1549]: time="2024-12-13T01:43:10.110753217Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:43:10.111280 containerd[1549]: time="2024-12-13T01:43:10.111258871Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:43:10.111353 containerd[1549]: time="2024-12-13T01:43:10.111309792Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:43:10.111693 containerd[1549]: time="2024-12-13T01:43:10.111657186Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:43:10.111803 containerd[1549]: time="2024-12-13T01:43:10.111787460Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:43:10.113610 containerd[1549]: time="2024-12-13T01:43:10.113594815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:43:10.116520 containerd[1549]: time="2024-12-13T01:43:10.116504365Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 467.47018ms" Dec 13 01:43:10.116643 containerd[1549]: time="2024-12-13T01:43:10.116616993Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 457.215463ms" Dec 13 01:43:10.118691 containerd[1549]: time="2024-12-13T01:43:10.118671877Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 459.191527ms" Dec 13 01:43:10.155997 kubelet[2365]: W1213 01:43:10.155925 2365 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Dec 13 
01:43:10.155997 kubelet[2365]: E1213 01:43:10.155966 2365 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:43:10.241419 kubelet[2365]: W1213 01:43:10.241379 2365 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Dec 13 01:43:10.241741 kubelet[2365]: E1213 01:43:10.241716 2365 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:43:10.375886 containerd[1549]: time="2024-12-13T01:43:10.375633842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:43:10.376369 containerd[1549]: time="2024-12-13T01:43:10.376103213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:43:10.382375 containerd[1549]: time="2024-12-13T01:43:10.376604303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:43:10.382375 containerd[1549]: time="2024-12-13T01:43:10.376709152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:43:10.391985 containerd[1549]: time="2024-12-13T01:43:10.383965038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:43:10.391985 containerd[1549]: time="2024-12-13T01:43:10.384050721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:43:10.391985 containerd[1549]: time="2024-12-13T01:43:10.384107798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:43:10.391985 containerd[1549]: time="2024-12-13T01:43:10.384268760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:43:10.392092 kubelet[2365]: E1213 01:43:10.387087 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="1.6s" Dec 13 01:43:10.394667 containerd[1549]: time="2024-12-13T01:43:10.393793082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:43:10.394667 containerd[1549]: time="2024-12-13T01:43:10.393828629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:43:10.394667 containerd[1549]: time="2024-12-13T01:43:10.393836454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:43:10.394667 containerd[1549]: time="2024-12-13T01:43:10.393876162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:43:10.403671 systemd[1]: Started cri-containerd-f429f30cd815838bb8da7cd362d0bd786b24d1622b5b465d38261fcc62c5c1f3.scope - libcontainer container f429f30cd815838bb8da7cd362d0bd786b24d1622b5b465d38261fcc62c5c1f3. Dec 13 01:43:10.406320 systemd[1]: Started cri-containerd-0eef395271f357f525bdda6fd6af8bff293ff6d12bd866f3585af8b0e476ca66.scope - libcontainer container 0eef395271f357f525bdda6fd6af8bff293ff6d12bd866f3585af8b0e476ca66. Dec 13 01:43:10.410167 systemd[1]: Started cri-containerd-58319a9651edd8a7a6f83e4faac2d9182d9f67866d959da1e0fb69c99f260983.scope - libcontainer container 58319a9651edd8a7a6f83e4faac2d9182d9f67866d959da1e0fb69c99f260983. Dec 13 01:43:10.452071 containerd[1549]: time="2024-12-13T01:43:10.452050077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2485a99c1d53471f931ebeea768ac64e,Namespace:kube-system,Attempt:0,} returns sandbox id \"58319a9651edd8a7a6f83e4faac2d9182d9f67866d959da1e0fb69c99f260983\"" Dec 13 01:43:10.454670 containerd[1549]: time="2024-12-13T01:43:10.454644656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eef395271f357f525bdda6fd6af8bff293ff6d12bd866f3585af8b0e476ca66\"" Dec 13 01:43:10.457276 containerd[1549]: time="2024-12-13T01:43:10.456969906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"f429f30cd815838bb8da7cd362d0bd786b24d1622b5b465d38261fcc62c5c1f3\"" Dec 13 01:43:10.457369 containerd[1549]: time="2024-12-13T01:43:10.457358368Z" level=info msg="CreateContainer within sandbox \"0eef395271f357f525bdda6fd6af8bff293ff6d12bd866f3585af8b0e476ca66\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:43:10.458403 containerd[1549]: time="2024-12-13T01:43:10.458329691Z" level=info msg="CreateContainer within sandbox \"58319a9651edd8a7a6f83e4faac2d9182d9f67866d959da1e0fb69c99f260983\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:43:10.459026 containerd[1549]: time="2024-12-13T01:43:10.458792476Z" level=info msg="CreateContainer within sandbox \"f429f30cd815838bb8da7cd362d0bd786b24d1622b5b465d38261fcc62c5c1f3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:43:10.459125 kubelet[2365]: W1213 01:43:10.459087 2365 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Dec 13 01:43:10.459158 kubelet[2365]: E1213 01:43:10.459125 2365 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:43:10.467613 containerd[1549]: time="2024-12-13T01:43:10.467595256Z" level=info msg="CreateContainer within sandbox \"58319a9651edd8a7a6f83e4faac2d9182d9f67866d959da1e0fb69c99f260983\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0ec317156fd7123cdaad1c28f97464926f7b33662e1c827a44e65cf20120a69d\"" Dec 13 01:43:10.468223 containerd[1549]: time="2024-12-13T01:43:10.467880255Z" level=info msg="StartContainer for \"0ec317156fd7123cdaad1c28f97464926f7b33662e1c827a44e65cf20120a69d\"" Dec 13 01:43:10.469210 containerd[1549]: time="2024-12-13T01:43:10.469175146Z" level=info msg="CreateContainer within sandbox \"0eef395271f357f525bdda6fd6af8bff293ff6d12bd866f3585af8b0e476ca66\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"abb2876e9aa0bf2d049c4be5e1523573cd003c5d6fe158e0d909c3bc314744b7\"" Dec 13 01:43:10.469832 containerd[1549]: time="2024-12-13T01:43:10.469365980Z" level=info msg="StartContainer for \"abb2876e9aa0bf2d049c4be5e1523573cd003c5d6fe158e0d909c3bc314744b7\"" Dec 13 01:43:10.472251 containerd[1549]: time="2024-12-13T01:43:10.472237950Z" level=info msg="CreateContainer within sandbox \"f429f30cd815838bb8da7cd362d0bd786b24d1622b5b465d38261fcc62c5c1f3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"25fd968a79c6bf7530535779a1fba2db723cd1cdf8bf8afa50b5f354ce5d8575\"" Dec 13 01:43:10.472673 containerd[1549]: time="2024-12-13T01:43:10.472662414Z" level=info msg="StartContainer for \"25fd968a79c6bf7530535779a1fba2db723cd1cdf8bf8afa50b5f354ce5d8575\"" Dec 13 01:43:10.485068 systemd[1]: Started cri-containerd-0ec317156fd7123cdaad1c28f97464926f7b33662e1c827a44e65cf20120a69d.scope - libcontainer container 0ec317156fd7123cdaad1c28f97464926f7b33662e1c827a44e65cf20120a69d. Dec 13 01:43:10.495100 systemd[1]: Started cri-containerd-abb2876e9aa0bf2d049c4be5e1523573cd003c5d6fe158e0d909c3bc314744b7.scope - libcontainer container abb2876e9aa0bf2d049c4be5e1523573cd003c5d6fe158e0d909c3bc314744b7. Dec 13 01:43:10.497353 systemd[1]: Started cri-containerd-25fd968a79c6bf7530535779a1fba2db723cd1cdf8bf8afa50b5f354ce5d8575.scope - libcontainer container 25fd968a79c6bf7530535779a1fba2db723cd1cdf8bf8afa50b5f354ce5d8575. 
Dec 13 01:43:10.506819 kubelet[2365]: W1213 01:43:10.506558 2365 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Dec 13 01:43:10.506819 kubelet[2365]: E1213 01:43:10.506599 2365 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:43:10.525175 containerd[1549]: time="2024-12-13T01:43:10.525149491Z" level=info msg="StartContainer for \"0ec317156fd7123cdaad1c28f97464926f7b33662e1c827a44e65cf20120a69d\" returns successfully" Dec 13 01:43:10.577227 containerd[1549]: time="2024-12-13T01:43:10.577199267Z" level=info msg="StartContainer for \"25fd968a79c6bf7530535779a1fba2db723cd1cdf8bf8afa50b5f354ce5d8575\" returns successfully" Dec 13 01:43:10.577596 containerd[1549]: time="2024-12-13T01:43:10.577260534Z" level=info msg="StartContainer for \"abb2876e9aa0bf2d049c4be5e1523573cd003c5d6fe158e0d909c3bc314744b7\" returns successfully" Dec 13 01:43:10.626021 kubelet[2365]: I1213 01:43:10.625770 2365 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 01:43:10.626738 kubelet[2365]: E1213 01:43:10.626715 2365 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Dec 13 01:43:10.930604 kubelet[2365]: E1213 01:43:10.930534 2365 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:43:11.989718 kubelet[2365]: E1213 01:43:11.989693 2365 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 01:43:12.228035 kubelet[2365]: I1213 01:43:12.228013 2365 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 01:43:12.237586 kubelet[2365]: I1213 01:43:12.237564 2365 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Dec 13 01:43:12.237586 kubelet[2365]: E1213 01:43:12.237585 2365 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 13 01:43:12.243552 kubelet[2365]: E1213 01:43:12.243471 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:43:12.343678 kubelet[2365]: E1213 01:43:12.343651 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:43:12.444427 kubelet[2365]: E1213 01:43:12.444403 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:43:12.938482 kubelet[2365]: I1213 01:43:12.938457 2365 apiserver.go:52] "Watching apiserver" Dec 13 01:43:12.988683 kubelet[2365]: I1213 
01:43:12.988650 2365 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:43:13.312697 systemd[1]: Reloading requested from client PID 2642 ('systemctl') (unit session-9.scope)... Dec 13 01:43:13.312707 systemd[1]: Reloading... Dec 13 01:43:13.362006 zram_generator::config[2683]: No configuration found. Dec 13 01:43:13.419838 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Dec 13 01:43:13.434478 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:43:13.484199 systemd[1]: Reloading finished in 171 ms. Dec 13 01:43:13.512043 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:43:13.522578 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:43:13.522706 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:43:13.527318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:43:13.707262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:43:13.711912 (kubelet)[2747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:43:13.791762 kubelet[2747]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:43:13.791762 kubelet[2747]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:43:13.791762 kubelet[2747]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:43:13.791984 kubelet[2747]: I1213 01:43:13.791806 2747 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:43:13.795088 kubelet[2747]: I1213 01:43:13.795072 2747 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:43:13.795088 kubelet[2747]: I1213 01:43:13.795084 2747 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:43:13.795197 kubelet[2747]: I1213 01:43:13.795186 2747 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:43:13.796141 kubelet[2747]: I1213 01:43:13.795903 2747 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
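After the systemd reload the kubelet restarts with "Client rotation is on" and loads its credentials from kubelet-client-current.pem, a single PEM file that bundles both the client certificate and the private key, so the same path can serve as both arguments to a standard TLS load. A sketch, assuming the file is present and readable:

```go
// certload.go - loads a combined cert/key PEM the way the rotated kubelet
// client credential file is laid out; purely illustrative.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	cert, err := tls.LoadX509KeyPair(pem, pem) // same file holds cert and key
	if err != nil {
		fmt.Println("load failed:", err)
		return
	}
	fmt.Printf("loaded client cert with %d certificate block(s)\n", len(cert.Certificate))
}
```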
Dec 13 01:43:13.819462 kubelet[2747]: I1213 01:43:13.819045 2747 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:43:13.827342 kubelet[2747]: E1213 01:43:13.827328 2747 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:43:13.827423 kubelet[2747]: I1213 01:43:13.827416 2747 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:43:13.828866 kubelet[2747]: I1213 01:43:13.828857 2747 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:43:13.830571 kubelet[2747]: I1213 01:43:13.830560 2747 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:43:13.830696 kubelet[2747]: I1213 01:43:13.830681 2747 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:43:13.830811 kubelet[2747]: I1213 01:43:13.830726 2747 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:43:13.830891 kubelet[2747]: I1213 01:43:13.830884 2747 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:43:13.830926 kubelet[2747]: I1213 01:43:13.830922 2747 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:43:13.830972 kubelet[2747]: I1213 01:43:13.830966 2747 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:43:13.831088 kubelet[2747]: I1213 01:43:13.831081 2747 kubelet.go:408] "Attempting to sync node with API server" Dec 13 01:43:13.831120 kubelet[2747]: I1213 01:43:13.831116 2747 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:43:13.831159 kubelet[2747]: I1213 01:43:13.831155 
2747 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:43:13.831199 kubelet[2747]: I1213 01:43:13.831194 2747 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:43:13.837813 kubelet[2747]: I1213 01:43:13.837443 2747 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:43:13.838234 kubelet[2747]: I1213 01:43:13.838227 2747 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:43:13.838598 kubelet[2747]: I1213 01:43:13.838591 2747 server.go:1269] "Started kubelet" Dec 13 01:43:13.841195 kubelet[2747]: I1213 01:43:13.841187 2747 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:43:13.844014 kubelet[2747]: I1213 01:43:13.843533 2747 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:43:13.844375 kubelet[2747]: I1213 01:43:13.844170 2747 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:43:13.844620 kubelet[2747]: I1213 01:43:13.844591 2747 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:43:13.844707 kubelet[2747]: I1213 01:43:13.844698 2747 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:43:13.844951 kubelet[2747]: I1213 01:43:13.844812 2747 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:43:13.845503 kubelet[2747]: I1213 01:43:13.845404 2747 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:43:13.845503 kubelet[2747]: I1213 01:43:13.845448 2747 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:43:13.845555 kubelet[2747]: I1213 01:43:13.845514 2747 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:43:13.846338 kubelet[2747]: I1213 01:43:13.846327 2747 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:43:13.846392 kubelet[2747]: I1213 01:43:13.846383 2747 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:43:13.848741 kubelet[2747]: E1213 01:43:13.848676 2747 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:43:13.849410 kubelet[2747]: I1213 01:43:13.849397 2747 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:43:13.853886 kubelet[2747]: I1213 01:43:13.853869 2747 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:43:13.855483 kubelet[2747]: I1213 01:43:13.855241 2747 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:43:13.855483 kubelet[2747]: I1213 01:43:13.855252 2747 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:43:13.855483 kubelet[2747]: I1213 01:43:13.855263 2747 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:43:13.855483 kubelet[2747]: E1213 01:43:13.855284 2747 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:43:13.874520 kubelet[2747]: I1213 01:43:13.874475 2747 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:43:13.874520 kubelet[2747]: I1213 01:43:13.874485 2747 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:43:13.874520 kubelet[2747]: I1213 01:43:13.874494 2747 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:43:13.874883 kubelet[2747]: I1213 01:43:13.874770 2747 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:43:13.874883 kubelet[2747]: I1213 01:43:13.874779 2747 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:43:13.874883 kubelet[2747]: I1213 01:43:13.874789 2747 policy_none.go:49] "None policy: Start" Dec 13 01:43:13.875555 kubelet[2747]: I1213 01:43:13.875181 2747 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:43:13.875555 kubelet[2747]: I1213 01:43:13.875191 2747 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:43:13.875555 kubelet[2747]: I1213 01:43:13.875269 2747 state_mem.go:75] "Updated machine memory state" Dec 13 01:43:13.877540 kubelet[2747]: I1213 01:43:13.877532 2747 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:43:13.877776 kubelet[2747]: I1213 01:43:13.877769 2747 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:43:13.877836 kubelet[2747]: I1213 01:43:13.877822 2747 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:43:13.878046 kubelet[2747]: I1213 01:43:13.877928 2747 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:43:13.960314 kubelet[2747]: E1213 01:43:13.960229 2747 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:43:13.982467 kubelet[2747]: I1213 01:43:13.982454 2747 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 01:43:13.988715 kubelet[2747]: I1213 01:43:13.988689 2747 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Dec 13 01:43:13.989371 kubelet[2747]: I1213 01:43:13.989354 2747 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Dec 13 01:43:14.046754 kubelet[2747]: I1213 01:43:14.046737 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:43:14.046754 kubelet[2747]: I1213 01:43:14.046781 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod 
\"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:43:14.046754 kubelet[2747]: I1213 01:43:14.046798 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:43:14.046754 kubelet[2747]: I1213 01:43:14.046812 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:43:14.046754 kubelet[2747]: I1213 01:43:14.046824 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:43:14.047094 kubelet[2747]: I1213 01:43:14.046834 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2485a99c1d53471f931ebeea768ac64e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2485a99c1d53471f931ebeea768ac64e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:43:14.047094 kubelet[2747]: I1213 01:43:14.046846 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2485a99c1d53471f931ebeea768ac64e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2485a99c1d53471f931ebeea768ac64e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:43:14.047094 kubelet[2747]: I1213 01:43:14.046857 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2485a99c1d53471f931ebeea768ac64e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2485a99c1d53471f931ebeea768ac64e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:43:14.047094 kubelet[2747]: I1213 01:43:14.046867 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:43:14.832253 kubelet[2747]: I1213 01:43:14.832228 2747 apiserver.go:52] "Watching apiserver" Dec 13 01:43:14.846551 kubelet[2747]: I1213 01:43:14.846500 2747 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:43:14.877292 kubelet[2747]: I1213 01:43:14.877264 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.877252802 podStartE2EDuration="1.877252802s" podCreationTimestamp="2024-12-13 01:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2024-12-13 01:43:14.873740735 +0000 UTC m=+1.147430444" watchObservedRunningTime="2024-12-13 01:43:14.877252802 +0000 UTC m=+1.150942509" Dec 13 01:43:14.882077 kubelet[2747]: I1213 01:43:14.881809 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.881788765 podStartE2EDuration="1.881788765s" podCreationTimestamp="2024-12-13 01:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:43:14.877663163 +0000 UTC m=+1.151352872" watchObservedRunningTime="2024-12-13 01:43:14.881788765 +0000 UTC m=+1.155478479" Dec 13 01:43:14.886002 kubelet[2747]: I1213 01:43:14.885937 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.885923885 podStartE2EDuration="1.885923885s" podCreationTimestamp="2024-12-13 01:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:43:14.882316566 +0000 UTC m=+1.156006276" watchObservedRunningTime="2024-12-13 01:43:14.885923885 +0000 UTC m=+1.159613595" Dec 13 01:43:18.553764 sudo[1857]: pam_unix(sudo:session): session closed for user root Dec 13 01:43:18.555547 sshd[1854]: pam_unix(sshd:session): session closed for user core Dec 13 01:43:18.556944 systemd-logind[1526]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:43:18.558342 systemd[1]: sshd@6-139.178.70.108:22-139.178.89.65:34040.service: Deactivated successfully. Dec 13 01:43:18.559378 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:43:18.559473 systemd[1]: session-9.scope: Consumed 2.483s CPU time, 141.3M memory peak, 0B memory swap peak. Dec 13 01:43:18.560209 systemd-logind[1526]: Removed session 9. Dec 13 01:43:20.630068 kubelet[2747]: I1213 01:43:20.630048 2747 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:43:20.635007 containerd[1549]: time="2024-12-13T01:43:20.633370894Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:43:20.635222 kubelet[2747]: I1213 01:43:20.633543 2747 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:43:20.966669 systemd[1]: Created slice kubepods-besteffort-pod7bea6aa5_57c9_4ed9_853e_7bb1c356ff62.slice - libcontainer container kubepods-besteffort-pod7bea6aa5_57c9_4ed9_853e_7bb1c356ff62.slice. 
Dec 13 01:43:20.991969 kubelet[2747]: I1213 01:43:20.991851 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7bea6aa5-57c9-4ed9-853e-7bb1c356ff62-kube-proxy\") pod \"kube-proxy-9dhlq\" (UID: \"7bea6aa5-57c9-4ed9-853e-7bb1c356ff62\") " pod="kube-system/kube-proxy-9dhlq" Dec 13 01:43:20.991969 kubelet[2747]: I1213 01:43:20.991891 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bea6aa5-57c9-4ed9-853e-7bb1c356ff62-xtables-lock\") pod \"kube-proxy-9dhlq\" (UID: \"7bea6aa5-57c9-4ed9-853e-7bb1c356ff62\") " pod="kube-system/kube-proxy-9dhlq" Dec 13 01:43:20.991969 kubelet[2747]: I1213 01:43:20.991906 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bea6aa5-57c9-4ed9-853e-7bb1c356ff62-lib-modules\") pod \"kube-proxy-9dhlq\" (UID: \"7bea6aa5-57c9-4ed9-853e-7bb1c356ff62\") " pod="kube-system/kube-proxy-9dhlq" Dec 13 01:43:20.991969 kubelet[2747]: I1213 01:43:20.991918 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg5qj\" (UniqueName: \"kubernetes.io/projected/7bea6aa5-57c9-4ed9-853e-7bb1c356ff62-kube-api-access-lg5qj\") pod \"kube-proxy-9dhlq\" (UID: \"7bea6aa5-57c9-4ed9-853e-7bb1c356ff62\") " pod="kube-system/kube-proxy-9dhlq" Dec 13 01:43:21.098553 kubelet[2747]: E1213 01:43:21.098393 2747 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:43:21.098553 kubelet[2747]: E1213 01:43:21.098413 2747 projected.go:194] Error preparing data for projected volume kube-api-access-lg5qj for pod kube-system/kube-proxy-9dhlq: configmap "kube-root-ca.crt" not found Dec 13 01:43:21.098553 kubelet[2747]: E1213 01:43:21.098447 2747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7bea6aa5-57c9-4ed9-853e-7bb1c356ff62-kube-api-access-lg5qj podName:7bea6aa5-57c9-4ed9-853e-7bb1c356ff62 nodeName:}" failed. No retries permitted until 2024-12-13 01:43:21.59843464 +0000 UTC m=+7.872124348 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lg5qj" (UniqueName: "kubernetes.io/projected/7bea6aa5-57c9-4ed9-853e-7bb1c356ff62-kube-api-access-lg5qj") pod "kube-proxy-9dhlq" (UID: "7bea6aa5-57c9-4ed9-853e-7bb1c356ff62") : configmap "kube-root-ca.crt" not found Dec 13 01:43:21.678544 systemd[1]: Created slice kubepods-besteffort-pod6723ef82_6d01_4bbc_8bee_2295ec5eb8b3.slice - libcontainer container kubepods-besteffort-pod6723ef82_6d01_4bbc_8bee_2295ec5eb8b3.slice. 
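The projected service-account volume for kube-proxy-9dhlq cannot mount until the "kube-root-ca.crt" ConfigMap exists, so the operation is parked: the "No retries permitted until" timestamp is simply the failure time plus durationBeforeRetry (500ms here). Working backwards from the values in the log:

```go
// volretry.go - recovers the SetUp failure time from the logged retry-at
// timestamp and the 500ms durationBeforeRetry.
package main

import (
	"fmt"
	"time"
)

func main() {
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	retryAt, err := time.Parse(layout, "2024-12-13 01:43:21.59843464 +0000 UTC")
	if err != nil {
		panic(err)
	}
	failedAt := retryAt.Add(-500 * time.Millisecond)
	fmt.Println(failedAt) // 2024-12-13 01:43:21.09843464 +0000 UTC
}
```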
Dec 13 01:43:21.695527 kubelet[2747]: I1213 01:43:21.695502 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6723ef82-6d01-4bbc-8bee-2295ec5eb8b3-var-lib-calico\") pod \"tigera-operator-76c4976dd7-hh98j\" (UID: \"6723ef82-6d01-4bbc-8bee-2295ec5eb8b3\") " pod="tigera-operator/tigera-operator-76c4976dd7-hh98j" Dec 13 01:43:21.695527 kubelet[2747]: I1213 01:43:21.695527 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mljtw\" (UniqueName: \"kubernetes.io/projected/6723ef82-6d01-4bbc-8bee-2295ec5eb8b3-kube-api-access-mljtw\") pod \"tigera-operator-76c4976dd7-hh98j\" (UID: \"6723ef82-6d01-4bbc-8bee-2295ec5eb8b3\") " pod="tigera-operator/tigera-operator-76c4976dd7-hh98j" Dec 13 01:43:21.873944 containerd[1549]: time="2024-12-13T01:43:21.873893311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9dhlq,Uid:7bea6aa5-57c9-4ed9-853e-7bb1c356ff62,Namespace:kube-system,Attempt:0,}" Dec 13 01:43:21.893024 containerd[1549]: time="2024-12-13T01:43:21.892925172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:43:21.893024 containerd[1549]: time="2024-12-13T01:43:21.892969478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:43:21.893024 containerd[1549]: time="2024-12-13T01:43:21.893033554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:43:21.893269 containerd[1549]: time="2024-12-13T01:43:21.893086109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:43:21.910095 systemd[1]: Started cri-containerd-32801593da461ed4f29761bcce03291f5ad48bcd9e67f76290f45f7fe91cf4f6.scope - libcontainer container 32801593da461ed4f29761bcce03291f5ad48bcd9e67f76290f45f7fe91cf4f6. Dec 13 01:43:21.922725 containerd[1549]: time="2024-12-13T01:43:21.922685281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9dhlq,Uid:7bea6aa5-57c9-4ed9-853e-7bb1c356ff62,Namespace:kube-system,Attempt:0,} returns sandbox id \"32801593da461ed4f29761bcce03291f5ad48bcd9e67f76290f45f7fe91cf4f6\"" Dec 13 01:43:21.924559 containerd[1549]: time="2024-12-13T01:43:21.924540650Z" level=info msg="CreateContainer within sandbox \"32801593da461ed4f29761bcce03291f5ad48bcd9e67f76290f45f7fe91cf4f6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:43:21.957674 containerd[1549]: time="2024-12-13T01:43:21.957612402Z" level=info msg="CreateContainer within sandbox \"32801593da461ed4f29761bcce03291f5ad48bcd9e67f76290f45f7fe91cf4f6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c80cab5c2d7290f65b638dd5b9477f058929669cbb903f5ca8f54c822be0c4d9\"" Dec 13 01:43:21.958870 containerd[1549]: time="2024-12-13T01:43:21.958546057Z" level=info msg="StartContainer for \"c80cab5c2d7290f65b638dd5b9477f058929669cbb903f5ca8f54c822be0c4d9\"" Dec 13 01:43:21.976102 systemd[1]: Started cri-containerd-c80cab5c2d7290f65b638dd5b9477f058929669cbb903f5ca8f54c822be0c4d9.scope - libcontainer container c80cab5c2d7290f65b638dd5b9477f058929669cbb903f5ca8f54c822be0c4d9. 
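The slice the kube-proxy sandbox runs in, kubepods-besteffort-pod7bea6aa5_57c9_4ed9_853e_7bb1c356ff62.slice (created a few lines earlier), is derived from the pod's QoS class and UID, with the UID's dashes escaped to underscores for systemd. A sketch of that naming rule as evidenced by this log:

```go
// slicename.go - maps a besteffort pod UID to its systemd slice name the
// way the kubepods slices above are named.
package main

import (
	"fmt"
	"strings"
)

func besteffortSlice(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	// UID taken from the kube-proxy-9dhlq entries in the log.
	fmt.Println(besteffortSlice("7bea6aa5-57c9-4ed9-853e-7bb1c356ff62"))
	// kubepods-besteffort-pod7bea6aa5_57c9_4ed9_853e_7bb1c356ff62.slice
}
```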
Dec 13 01:43:21.988622 containerd[1549]: time="2024-12-13T01:43:21.988514332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-hh98j,Uid:6723ef82-6d01-4bbc-8bee-2295ec5eb8b3,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:43:21.996429 containerd[1549]: time="2024-12-13T01:43:21.996347378Z" level=info msg="StartContainer for \"c80cab5c2d7290f65b638dd5b9477f058929669cbb903f5ca8f54c822be0c4d9\" returns successfully" Dec 13 01:43:22.007312 containerd[1549]: time="2024-12-13T01:43:22.007100203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:43:22.007312 containerd[1549]: time="2024-12-13T01:43:22.007137813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:43:22.007312 containerd[1549]: time="2024-12-13T01:43:22.007152293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:43:22.007312 containerd[1549]: time="2024-12-13T01:43:22.007238366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:43:22.023107 systemd[1]: Started cri-containerd-be79bf7909a2351aca5069473806882cc5603eebe2ea9cd2bd66a3db8cf09409.scope - libcontainer container be79bf7909a2351aca5069473806882cc5603eebe2ea9cd2bd66a3db8cf09409. Dec 13 01:43:22.048835 containerd[1549]: time="2024-12-13T01:43:22.048814541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-hh98j,Uid:6723ef82-6d01-4bbc-8bee-2295ec5eb8b3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"be79bf7909a2351aca5069473806882cc5603eebe2ea9cd2bd66a3db8cf09409\"" Dec 13 01:43:22.050086 containerd[1549]: time="2024-12-13T01:43:22.050044185Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:43:22.891402 kubelet[2747]: I1213 01:43:22.891179 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9dhlq" podStartSLOduration=2.891166118 podStartE2EDuration="2.891166118s" podCreationTimestamp="2024-12-13 01:43:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:43:22.89102858 +0000 UTC m=+9.164718293" watchObservedRunningTime="2024-12-13 01:43:22.891166118 +0000 UTC m=+9.164855832" Dec 13 01:43:24.061922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1506791590.mount: Deactivated successfully. 
Dec 13 01:43:24.899358 containerd[1549]: time="2024-12-13T01:43:24.899323389Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:24.900687 containerd[1549]: time="2024-12-13T01:43:24.900656376Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764277" Dec 13 01:43:24.902384 containerd[1549]: time="2024-12-13T01:43:24.902345024Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:24.903898 containerd[1549]: time="2024-12-13T01:43:24.903852053Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:24.904580 containerd[1549]: time="2024-12-13T01:43:24.904499493Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.854347481s" Dec 13 01:43:24.904580 containerd[1549]: time="2024-12-13T01:43:24.904520856Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 01:43:24.920357 containerd[1549]: time="2024-12-13T01:43:24.920320490Z" level=info msg="CreateContainer within sandbox \"be79bf7909a2351aca5069473806882cc5603eebe2ea9cd2bd66a3db8cf09409\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:43:24.939593 containerd[1549]: time="2024-12-13T01:43:24.939568605Z" level=info msg="CreateContainer within sandbox \"be79bf7909a2351aca5069473806882cc5603eebe2ea9cd2bd66a3db8cf09409\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cca395f32ba5fa428b754f752d179bda01fa5be0191fd4901a63b3384e6e0cde\"" Dec 13 01:43:24.941429 containerd[1549]: time="2024-12-13T01:43:24.940827474Z" level=info msg="StartContainer for \"cca395f32ba5fa428b754f752d179bda01fa5be0191fd4901a63b3384e6e0cde\"" Dec 13 01:43:24.964143 systemd[1]: Started cri-containerd-cca395f32ba5fa428b754f752d179bda01fa5be0191fd4901a63b3384e6e0cde.scope - libcontainer container cca395f32ba5fa428b754f752d179bda01fa5be0191fd4901a63b3384e6e0cde. 
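The completed pull is recorded twice: once by mutable repo tag (quay.io/tigera/operator:v1.36.2) and once by immutable repo digest. A minimal, hand-rolled split of the digest reference from the log; real reference parsing lives in libraries such as github.com/distribution/reference, this is only for illustration:

```go
// imgref.go - separates repository from digest in the reference logged above.
package main

import (
	"fmt"
	"strings"
)

func main() {
	ref := "quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764"
	repo, digest, ok := strings.Cut(ref, "@")
	if !ok {
		fmt.Println("not a digest reference")
		return
	}
	fmt.Println("repository:", repo)
	fmt.Println("digest:    ", digest)
}
```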
Dec 13 01:43:24.982925 containerd[1549]: time="2024-12-13T01:43:24.982893574Z" level=info msg="StartContainer for \"cca395f32ba5fa428b754f752d179bda01fa5be0191fd4901a63b3384e6e0cde\" returns successfully" Dec 13 01:43:25.958090 kubelet[2747]: I1213 01:43:25.957948 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-hh98j" podStartSLOduration=2.093793351 podStartE2EDuration="4.957934328s" podCreationTimestamp="2024-12-13 01:43:21 +0000 UTC" firstStartedPulling="2024-12-13 01:43:22.049616094 +0000 UTC m=+8.323305800" lastFinishedPulling="2024-12-13 01:43:24.913757072 +0000 UTC m=+11.187446777" observedRunningTime="2024-12-13 01:43:25.957843118 +0000 UTC m=+12.231532833" watchObservedRunningTime="2024-12-13 01:43:25.957934328 +0000 UTC m=+12.231624050" Dec 13 01:43:27.860890 systemd[1]: Created slice kubepods-besteffort-pod177973f9_3ebb_451b_85ab_a41a3d2ea79b.slice - libcontainer container kubepods-besteffort-pod177973f9_3ebb_451b_85ab_a41a3d2ea79b.slice. Dec 13 01:43:27.871753 systemd[1]: Created slice kubepods-besteffort-podcafa636e_d27e_4a03_b0b6_680253dba261.slice - libcontainer container kubepods-besteffort-podcafa636e_d27e_4a03_b0b6_680253dba261.slice. Dec 13 01:43:27.949348 kubelet[2747]: E1213 01:43:27.949312 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgckt" podUID="be0402b2-9394-4ec8-b88a-518cccbc701b" Dec 13 01:43:28.050304 kubelet[2747]: I1213 01:43:28.050190 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-var-lib-calico\") pod \"calico-node-47x4r\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " pod="calico-system/calico-node-47x4r" Dec 13 01:43:28.050401 kubelet[2747]: I1213 01:43:28.050327 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/be0402b2-9394-4ec8-b88a-518cccbc701b-varrun\") pod \"csi-node-driver-sgckt\" (UID: \"be0402b2-9394-4ec8-b88a-518cccbc701b\") " pod="calico-system/csi-node-driver-sgckt" Dec 13 01:43:28.050401 kubelet[2747]: I1213 01:43:28.050347 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-lib-modules\") pod \"calico-node-47x4r\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " pod="calico-system/calico-node-47x4r" Dec 13 01:43:28.050401 kubelet[2747]: I1213 01:43:28.050357 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-var-run-calico\") pod \"calico-node-47x4r\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " pod="calico-system/calico-node-47x4r" Dec 13 01:43:28.050401 kubelet[2747]: I1213 01:43:28.050371 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/be0402b2-9394-4ec8-b88a-518cccbc701b-registration-dir\") pod \"csi-node-driver-sgckt\" (UID: \"be0402b2-9394-4ec8-b88a-518cccbc701b\") " 
pod="calico-system/csi-node-driver-sgckt" Dec 13 01:43:28.050401 kubelet[2747]: I1213 01:43:28.050381 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rcg5\" (UniqueName: \"kubernetes.io/projected/be0402b2-9394-4ec8-b88a-518cccbc701b-kube-api-access-2rcg5\") pod \"csi-node-driver-sgckt\" (UID: \"be0402b2-9394-4ec8-b88a-518cccbc701b\") " pod="calico-system/csi-node-driver-sgckt" Dec 13 01:43:28.050528 kubelet[2747]: I1213 01:43:28.050392 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-flexvol-driver-host\") pod \"calico-node-47x4r\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " pod="calico-system/calico-node-47x4r" Dec 13 01:43:28.050552 kubelet[2747]: I1213 01:43:28.050533 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-cni-bin-dir\") pod \"calico-node-47x4r\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " pod="calico-system/calico-node-47x4r" Dec 13 01:43:28.050570 kubelet[2747]: I1213 01:43:28.050552 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-cni-net-dir\") pod \"calico-node-47x4r\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " pod="calico-system/calico-node-47x4r" Dec 13 01:43:28.050586 kubelet[2747]: I1213 01:43:28.050563 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/177973f9-3ebb-451b-85ab-a41a3d2ea79b-tigera-ca-bundle\") pod \"calico-typha-75c89bbff5-slv72\" (UID: \"177973f9-3ebb-451b-85ab-a41a3d2ea79b\") " pod="calico-system/calico-typha-75c89bbff5-slv72" Dec 13 01:43:28.050606 kubelet[2747]: I1213 01:43:28.050590 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/177973f9-3ebb-451b-85ab-a41a3d2ea79b-typha-certs\") pod \"calico-typha-75c89bbff5-slv72\" (UID: \"177973f9-3ebb-451b-85ab-a41a3d2ea79b\") " pod="calico-system/calico-typha-75c89bbff5-slv72" Dec 13 01:43:28.050606 kubelet[2747]: I1213 01:43:28.050602 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cafa636e-d27e-4a03-b0b6-680253dba261-node-certs\") pod \"calico-node-47x4r\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " pod="calico-system/calico-node-47x4r" Dec 13 01:43:28.050644 kubelet[2747]: I1213 01:43:28.050610 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78xhm\" (UniqueName: \"kubernetes.io/projected/177973f9-3ebb-451b-85ab-a41a3d2ea79b-kube-api-access-78xhm\") pod \"calico-typha-75c89bbff5-slv72\" (UID: \"177973f9-3ebb-451b-85ab-a41a3d2ea79b\") " pod="calico-system/calico-typha-75c89bbff5-slv72" Dec 13 01:43:28.050644 kubelet[2747]: I1213 01:43:28.050620 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cafa636e-d27e-4a03-b0b6-680253dba261-tigera-ca-bundle\") pod \"calico-node-47x4r\" (UID: 
\"cafa636e-d27e-4a03-b0b6-680253dba261\") " pod="calico-system/calico-node-47x4r" Dec 13 01:43:28.051125 kubelet[2747]: I1213 01:43:28.050629 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/be0402b2-9394-4ec8-b88a-518cccbc701b-socket-dir\") pod \"csi-node-driver-sgckt\" (UID: \"be0402b2-9394-4ec8-b88a-518cccbc701b\") " pod="calico-system/csi-node-driver-sgckt" Dec 13 01:43:28.051169 kubelet[2747]: I1213 01:43:28.051140 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-xtables-lock\") pod \"calico-node-47x4r\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " pod="calico-system/calico-node-47x4r" Dec 13 01:43:28.051206 kubelet[2747]: I1213 01:43:28.051185 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx4n8\" (UniqueName: \"kubernetes.io/projected/cafa636e-d27e-4a03-b0b6-680253dba261-kube-api-access-mx4n8\") pod \"calico-node-47x4r\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " pod="calico-system/calico-node-47x4r" Dec 13 01:43:28.051206 kubelet[2747]: I1213 01:43:28.051202 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be0402b2-9394-4ec8-b88a-518cccbc701b-kubelet-dir\") pod \"csi-node-driver-sgckt\" (UID: \"be0402b2-9394-4ec8-b88a-518cccbc701b\") " pod="calico-system/csi-node-driver-sgckt" Dec 13 01:43:28.051253 kubelet[2747]: I1213 01:43:28.051214 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-policysync\") pod \"calico-node-47x4r\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " pod="calico-system/calico-node-47x4r" Dec 13 01:43:28.051253 kubelet[2747]: I1213 01:43:28.051222 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-cni-log-dir\") pod \"calico-node-47x4r\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " pod="calico-system/calico-node-47x4r" Dec 13 01:43:28.156882 kubelet[2747]: E1213 01:43:28.156643 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:43:28.156882 kubelet[2747]: W1213 01:43:28.156671 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:43:28.159230 kubelet[2747]: E1213 01:43:28.159113 2747 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:43:28.162328 kubelet[2747]: E1213 01:43:28.162310 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:43:28.162382 kubelet[2747]: W1213 01:43:28.162324 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:43:28.162382 kubelet[2747]: E1213 01:43:28.162348 2747 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:43:28.180464 kubelet[2747]: E1213 01:43:28.180288 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:43:28.180464 kubelet[2747]: W1213 01:43:28.180308 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:43:28.180464 kubelet[2747]: E1213 01:43:28.180318 2747 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 13 01:43:28.183616 kubelet[2747]: E1213 01:43:28.183520 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:43:28.183616 kubelet[2747]: W1213 01:43:28.183532 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:43:28.183616 kubelet[2747]: E1213 01:43:28.183546 2747 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:43:28.472279 containerd[1549]: time="2024-12-13T01:43:28.472219442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75c89bbff5-slv72,Uid:177973f9-3ebb-451b-85ab-a41a3d2ea79b,Namespace:calico-system,Attempt:0,}" Dec 13 01:43:28.476306 containerd[1549]: time="2024-12-13T01:43:28.476229477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-47x4r,Uid:cafa636e-d27e-4a03-b0b6-680253dba261,Namespace:calico-system,Attempt:0,}" Dec 13 01:43:28.526576 containerd[1549]: time="2024-12-13T01:43:28.526371029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:43:28.526576 containerd[1549]: time="2024-12-13T01:43:28.526413644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:43:28.526576 containerd[1549]: time="2024-12-13T01:43:28.526434600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:43:28.526725 containerd[1549]: time="2024-12-13T01:43:28.526670410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:43:28.528331 containerd[1549]: time="2024-12-13T01:43:28.528055717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:43:28.528331 containerd[1549]: time="2024-12-13T01:43:28.528173270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:43:28.528331 containerd[1549]: time="2024-12-13T01:43:28.528181816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:43:28.529355 containerd[1549]: time="2024-12-13T01:43:28.528815512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:43:28.553941 kubelet[2747]: E1213 01:43:28.553613 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:43:28.553941 kubelet[2747]: W1213 01:43:28.553628 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:43:28.553941 kubelet[2747]: E1213 01:43:28.553641 2747 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:43:28.554646 kubelet[2747]: E1213 01:43:28.554615 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:43:28.554646 kubelet[2747]: W1213 01:43:28.554626 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:43:28.554989 kubelet[2747]: E1213 01:43:28.554634 2747 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:43:28.560319 kubelet[2747]: E1213 01:43:28.560182 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:43:28.560319 kubelet[2747]: W1213 01:43:28.560189 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:43:28.560319 kubelet[2747]: E1213 01:43:28.560194 2747 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 13 01:43:28.560496 kubelet[2747]: E1213 01:43:28.560401 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:43:28.560496 kubelet[2747]: W1213 01:43:28.560408 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:43:28.560496 kubelet[2747]: E1213 01:43:28.560415 2747 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:43:28.560761 kubelet[2747]: E1213 01:43:28.560649 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:43:28.560761 kubelet[2747]: W1213 01:43:28.560657 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:43:28.560761 kubelet[2747]: E1213 01:43:28.560665 2747 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:43:28.560954 kubelet[2747]: E1213 01:43:28.560852 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:43:28.560954 kubelet[2747]: W1213 01:43:28.560859 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:43:28.560954 kubelet[2747]: E1213 01:43:28.560867 2747 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:43:28.561153 kubelet[2747]: E1213 01:43:28.561070 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:43:28.561153 kubelet[2747]: W1213 01:43:28.561075 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:43:28.561153 kubelet[2747]: E1213 01:43:28.561081 2747 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:43:28.568150 systemd[1]: Started cri-containerd-e997dedc8452bf17270287c0995f0d5dadaa268fe2d5e33f62dc89d17e389fdd.scope - libcontainer container e997dedc8452bf17270287c0995f0d5dadaa268fe2d5e33f62dc89d17e389fdd. Dec 13 01:43:28.573601 systemd[1]: Started cri-containerd-7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d.scope - libcontainer container 7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d. 
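
The kubelet triplet above recurs with only the timestamps changing because it is the FlexVolume prober retrying: kubelet finds the nodeagent~uds plugin directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, tries to exec "uds init", and the binary is not installed yet; it is the calico-node flexvol-driver init container (started in the entries below, using the flexvol-driver-host host-path volume registered earlier) that puts it there. The failed exec yields empty output, and unmarshalling an empty byte slice is exactly what produces "unexpected end of JSON input", as this tiny repro shows (illustrative only, not kubelet source):

```go
// Repro of the "unexpected end of JSON input" error: the driver call produced
// no output, and kubelet tries to parse that output as a JSON status.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var status map[string]interface{}
	err := json.Unmarshal([]byte(""), &status) // driver exec failed, printed nothing
	fmt.Println(err)                           // prints: unexpected end of JSON input
}
```

This is consistent with the errors no longer appearing after the flexvol-driver container runs at 01:43:30 below.
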
Dec 13 01:43:28.599698 containerd[1549]: time="2024-12-13T01:43:28.599666913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-47x4r,Uid:cafa636e-d27e-4a03-b0b6-680253dba261,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\"" Dec 13 01:43:28.611439 containerd[1549]: time="2024-12-13T01:43:28.611390998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75c89bbff5-slv72,Uid:177973f9-3ebb-451b-85ab-a41a3d2ea79b,Namespace:calico-system,Attempt:0,} returns sandbox id \"e997dedc8452bf17270287c0995f0d5dadaa268fe2d5e33f62dc89d17e389fdd\"" Dec 13 01:43:28.668663 containerd[1549]: time="2024-12-13T01:43:28.668598418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:43:29.856454 kubelet[2747]: E1213 01:43:29.856108 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgckt" podUID="be0402b2-9394-4ec8-b88a-518cccbc701b" Dec 13 01:43:30.030744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2121017504.mount: Deactivated successfully. Dec 13 01:43:30.102660 containerd[1549]: time="2024-12-13T01:43:30.102623777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:30.103090 containerd[1549]: time="2024-12-13T01:43:30.103038426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Dec 13 01:43:30.103686 containerd[1549]: time="2024-12-13T01:43:30.103532622Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:30.104606 containerd[1549]: time="2024-12-13T01:43:30.104584153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:30.105388 containerd[1549]: time="2024-12-13T01:43:30.104991865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.43636855s" Dec 13 01:43:30.105388 containerd[1549]: time="2024-12-13T01:43:30.105011602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 01:43:30.105933 containerd[1549]: time="2024-12-13T01:43:30.105910405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:43:30.107178 containerd[1549]: time="2024-12-13T01:43:30.107130888Z" level=info msg="CreateContainer within sandbox \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:43:30.164426 containerd[1549]: time="2024-12-13T01:43:30.164386126Z" level=info 
msg="CreateContainer within sandbox \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9d6108d520079b486411ff2ba476575c2267c0594cbd81f602014c049dd9f517\"" Dec 13 01:43:30.164999 containerd[1549]: time="2024-12-13T01:43:30.164959812Z" level=info msg="StartContainer for \"9d6108d520079b486411ff2ba476575c2267c0594cbd81f602014c049dd9f517\"" Dec 13 01:43:30.195115 systemd[1]: Started cri-containerd-9d6108d520079b486411ff2ba476575c2267c0594cbd81f602014c049dd9f517.scope - libcontainer container 9d6108d520079b486411ff2ba476575c2267c0594cbd81f602014c049dd9f517. Dec 13 01:43:30.229733 containerd[1549]: time="2024-12-13T01:43:30.229636162Z" level=info msg="StartContainer for \"9d6108d520079b486411ff2ba476575c2267c0594cbd81f602014c049dd9f517\" returns successfully" Dec 13 01:43:30.233455 systemd[1]: cri-containerd-9d6108d520079b486411ff2ba476575c2267c0594cbd81f602014c049dd9f517.scope: Deactivated successfully. Dec 13 01:43:30.248410 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d6108d520079b486411ff2ba476575c2267c0594cbd81f602014c049dd9f517-rootfs.mount: Deactivated successfully. Dec 13 01:43:30.894496 containerd[1549]: time="2024-12-13T01:43:30.865739178Z" level=info msg="shim disconnected" id=9d6108d520079b486411ff2ba476575c2267c0594cbd81f602014c049dd9f517 namespace=k8s.io Dec 13 01:43:30.894496 containerd[1549]: time="2024-12-13T01:43:30.894461323Z" level=warning msg="cleaning up after shim disconnected" id=9d6108d520079b486411ff2ba476575c2267c0594cbd81f602014c049dd9f517 namespace=k8s.io Dec 13 01:43:30.894496 containerd[1549]: time="2024-12-13T01:43:30.894472837Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:43:31.881960 kubelet[2747]: E1213 01:43:31.881902 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgckt" podUID="be0402b2-9394-4ec8-b88a-518cccbc701b" Dec 13 01:43:32.672675 containerd[1549]: time="2024-12-13T01:43:32.672640472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:32.673102 containerd[1549]: time="2024-12-13T01:43:32.673058747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Dec 13 01:43:32.674432 containerd[1549]: time="2024-12-13T01:43:32.674403822Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:32.677061 containerd[1549]: time="2024-12-13T01:43:32.677025871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:32.677660 containerd[1549]: time="2024-12-13T01:43:32.677381368Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.571364638s" Dec 13 01:43:32.677660 
containerd[1549]: time="2024-12-13T01:43:32.677401129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 01:43:32.678061 containerd[1549]: time="2024-12-13T01:43:32.678050030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:43:32.689255 containerd[1549]: time="2024-12-13T01:43:32.689230297Z" level=info msg="CreateContainer within sandbox \"e997dedc8452bf17270287c0995f0d5dadaa268fe2d5e33f62dc89d17e389fdd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:43:32.694305 containerd[1549]: time="2024-12-13T01:43:32.694283602Z" level=info msg="CreateContainer within sandbox \"e997dedc8452bf17270287c0995f0d5dadaa268fe2d5e33f62dc89d17e389fdd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7f380287bd3e551f78fe925028da71449a74c21c3088b08ce5401864307e446b\"" Dec 13 01:43:32.695458 containerd[1549]: time="2024-12-13T01:43:32.695436313Z" level=info msg="StartContainer for \"7f380287bd3e551f78fe925028da71449a74c21c3088b08ce5401864307e446b\"" Dec 13 01:43:32.720145 systemd[1]: Started cri-containerd-7f380287bd3e551f78fe925028da71449a74c21c3088b08ce5401864307e446b.scope - libcontainer container 7f380287bd3e551f78fe925028da71449a74c21c3088b08ce5401864307e446b. Dec 13 01:43:32.757266 containerd[1549]: time="2024-12-13T01:43:32.757242715Z" level=info msg="StartContainer for \"7f380287bd3e551f78fe925028da71449a74c21c3088b08ce5401864307e446b\" returns successfully" Dec 13 01:43:33.056012 kubelet[2747]: I1213 01:43:33.055659 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75c89bbff5-slv72" podStartSLOduration=1.9883852929999999 podStartE2EDuration="6.042360877s" podCreationTimestamp="2024-12-13 01:43:27 +0000 UTC" firstStartedPulling="2024-12-13 01:43:28.624011956 +0000 UTC m=+14.897701667" lastFinishedPulling="2024-12-13 01:43:32.677987545 +0000 UTC m=+18.951677251" observedRunningTime="2024-12-13 01:43:33.042182279 +0000 UTC m=+19.315871995" watchObservedRunningTime="2024-12-13 01:43:33.042360877 +0000 UTC m=+19.316050593" Dec 13 01:43:33.856627 kubelet[2747]: E1213 01:43:33.856381 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgckt" podUID="be0402b2-9394-4ec8-b88a-518cccbc701b" Dec 13 01:43:33.958715 kubelet[2747]: I1213 01:43:33.958691 2747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:43:36.056678 kubelet[2747]: E1213 01:43:36.056647 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgckt" podUID="be0402b2-9394-4ec8-b88a-518cccbc701b" Dec 13 01:43:37.197557 containerd[1549]: time="2024-12-13T01:43:37.196596554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:37.197557 containerd[1549]: time="2024-12-13T01:43:37.197103150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 01:43:37.198278 containerd[1549]: 
time="2024-12-13T01:43:37.197724067Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:37.199679 containerd[1549]: time="2024-12-13T01:43:37.199239884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:37.200148 containerd[1549]: time="2024-12-13T01:43:37.199953194Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.521847578s" Dec 13 01:43:37.200148 containerd[1549]: time="2024-12-13T01:43:37.199991002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 01:43:37.203188 containerd[1549]: time="2024-12-13T01:43:37.202528264Z" level=info msg="CreateContainer within sandbox \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:43:37.236436 containerd[1549]: time="2024-12-13T01:43:37.236398616Z" level=info msg="CreateContainer within sandbox \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fd1188a95a934d563b7d52c478ef6d4f01f5586fd286f9c43221b67144185676\"" Dec 13 01:43:37.242231 containerd[1549]: time="2024-12-13T01:43:37.236990573Z" level=info msg="StartContainer for \"fd1188a95a934d563b7d52c478ef6d4f01f5586fd286f9c43221b67144185676\"" Dec 13 01:43:37.326197 systemd[1]: Started cri-containerd-fd1188a95a934d563b7d52c478ef6d4f01f5586fd286f9c43221b67144185676.scope - libcontainer container fd1188a95a934d563b7d52c478ef6d4f01f5586fd286f9c43221b67144185676. Dec 13 01:43:37.346625 containerd[1549]: time="2024-12-13T01:43:37.346554943Z" level=info msg="StartContainer for \"fd1188a95a934d563b7d52c478ef6d4f01f5586fd286f9c43221b67144185676\" returns successfully" Dec 13 01:43:37.856363 kubelet[2747]: E1213 01:43:37.856170 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgckt" podUID="be0402b2-9394-4ec8-b88a-518cccbc701b" Dec 13 01:43:39.088784 systemd[1]: cri-containerd-fd1188a95a934d563b7d52c478ef6d4f01f5586fd286f9c43221b67144185676.scope: Deactivated successfully. Dec 13 01:43:39.112526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd1188a95a934d563b7d52c478ef6d4f01f5586fd286f9c43221b67144185676-rootfs.mount: Deactivated successfully. 
Dec 13 01:43:39.122159 containerd[1549]: time="2024-12-13T01:43:39.117333641Z" level=info msg="shim disconnected" id=fd1188a95a934d563b7d52c478ef6d4f01f5586fd286f9c43221b67144185676 namespace=k8s.io
Dec 13 01:43:39.122159 containerd[1549]: time="2024-12-13T01:43:39.121573352Z" level=warning msg="cleaning up after shim disconnected" id=fd1188a95a934d563b7d52c478ef6d4f01f5586fd286f9c43221b67144185676 namespace=k8s.io
Dec 13 01:43:39.122159 containerd[1549]: time="2024-12-13T01:43:39.121586594Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:43:39.154765 kubelet[2747]: I1213 01:43:39.154743 2747 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Dec 13 01:43:39.190971 systemd[1]: Created slice kubepods-burstable-poddd95d309_c209_4cf1_a636_b98ab7a31667.slice - libcontainer container kubepods-burstable-poddd95d309_c209_4cf1_a636_b98ab7a31667.slice.
Dec 13 01:43:39.200508 systemd[1]: Created slice kubepods-burstable-pod00e916a8_4e37_4540_9352_5c9af61a76e0.slice - libcontainer container kubepods-burstable-pod00e916a8_4e37_4540_9352_5c9af61a76e0.slice.
Dec 13 01:43:39.208070 systemd[1]: Created slice kubepods-besteffort-pod49679827_80b0_45f6_a6cd_64adcfa67f6f.slice - libcontainer container kubepods-besteffort-pod49679827_80b0_45f6_a6cd_64adcfa67f6f.slice.
Dec 13 01:43:39.215223 systemd[1]: Created slice kubepods-besteffort-pod91cdf967_f24e_4903_a5a6_9102cde06b2f.slice - libcontainer container kubepods-besteffort-pod91cdf967_f24e_4903_a5a6_9102cde06b2f.slice.
Dec 13 01:43:39.218457 systemd[1]: Created slice kubepods-besteffort-podfec74b51_4f13_4ee0_a490_81de9f872b4f.slice - libcontainer container kubepods-besteffort-podfec74b51_4f13_4ee0_a490_81de9f872b4f.slice.
Dec 13 01:43:39.291870 kubelet[2747]: I1213 01:43:39.291618 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd95d309-c209-4cf1-a636-b98ab7a31667-config-volume\") pod \"coredns-6f6b679f8f-jp7n7\" (UID: \"dd95d309-c209-4cf1-a636-b98ab7a31667\") " pod="kube-system/coredns-6f6b679f8f-jp7n7"
Dec 13 01:43:39.291870 kubelet[2747]: I1213 01:43:39.291656 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59sgs\" (UniqueName: \"kubernetes.io/projected/00e916a8-4e37-4540-9352-5c9af61a76e0-kube-api-access-59sgs\") pod \"coredns-6f6b679f8f-mt9t2\" (UID: \"00e916a8-4e37-4540-9352-5c9af61a76e0\") " pod="kube-system/coredns-6f6b679f8f-mt9t2"
Dec 13 01:43:39.291870 kubelet[2747]: I1213 01:43:39.291678 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdmbc\" (UniqueName: \"kubernetes.io/projected/91cdf967-f24e-4903-a5a6-9102cde06b2f-kube-api-access-mdmbc\") pod \"calico-apiserver-d5bfd545-426d5\" (UID: \"91cdf967-f24e-4903-a5a6-9102cde06b2f\") " pod="calico-apiserver/calico-apiserver-d5bfd545-426d5"
Dec 13 01:43:39.291870 kubelet[2747]: I1213 01:43:39.291696 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/49679827-80b0-45f6-a6cd-64adcfa67f6f-calico-apiserver-certs\") pod \"calico-apiserver-d5bfd545-r874s\" (UID: \"49679827-80b0-45f6-a6cd-64adcfa67f6f\") " pod="calico-apiserver/calico-apiserver-d5bfd545-r874s"
Dec 13 01:43:39.291870 kubelet[2747]: I1213 01:43:39.291712 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp5rt\" (UniqueName: \"kubernetes.io/projected/fec74b51-4f13-4ee0-a490-81de9f872b4f-kube-api-access-zp5rt\") pod \"calico-kube-controllers-74d5794888-n2p4c\" (UID: \"fec74b51-4f13-4ee0-a490-81de9f872b4f\") " pod="calico-system/calico-kube-controllers-74d5794888-n2p4c"
Dec 13 01:43:39.292463 kubelet[2747]: I1213 01:43:39.291730 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/91cdf967-f24e-4903-a5a6-9102cde06b2f-calico-apiserver-certs\") pod \"calico-apiserver-d5bfd545-426d5\" (UID: \"91cdf967-f24e-4903-a5a6-9102cde06b2f\") " pod="calico-apiserver/calico-apiserver-d5bfd545-426d5"
Dec 13 01:43:39.292463 kubelet[2747]: I1213 01:43:39.291745 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00e916a8-4e37-4540-9352-5c9af61a76e0-config-volume\") pod \"coredns-6f6b679f8f-mt9t2\" (UID: \"00e916a8-4e37-4540-9352-5c9af61a76e0\") " pod="kube-system/coredns-6f6b679f8f-mt9t2"
Dec 13 01:43:39.292463 kubelet[2747]: I1213 01:43:39.291760 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fec74b51-4f13-4ee0-a490-81de9f872b4f-tigera-ca-bundle\") pod \"calico-kube-controllers-74d5794888-n2p4c\" (UID: \"fec74b51-4f13-4ee0-a490-81de9f872b4f\") " pod="calico-system/calico-kube-controllers-74d5794888-n2p4c"
Dec 13 01:43:39.292463 kubelet[2747]: I1213 01:43:39.291776 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5md6\" (UniqueName: \"kubernetes.io/projected/dd95d309-c209-4cf1-a636-b98ab7a31667-kube-api-access-b5md6\") pod \"coredns-6f6b679f8f-jp7n7\" (UID: \"dd95d309-c209-4cf1-a636-b98ab7a31667\") " pod="kube-system/coredns-6f6b679f8f-jp7n7"
Dec 13 01:43:39.292463 kubelet[2747]: I1213 01:43:39.291790 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qln6x\" (UniqueName: \"kubernetes.io/projected/49679827-80b0-45f6-a6cd-64adcfa67f6f-kube-api-access-qln6x\") pod \"calico-apiserver-d5bfd545-r874s\" (UID: \"49679827-80b0-45f6-a6cd-64adcfa67f6f\") " pod="calico-apiserver/calico-apiserver-d5bfd545-r874s"
Dec 13 01:43:39.499737 containerd[1549]: time="2024-12-13T01:43:39.499650220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jp7n7,Uid:dd95d309-c209-4cf1-a636-b98ab7a31667,Namespace:kube-system,Attempt:0,}"
Dec 13 01:43:39.504159 containerd[1549]: time="2024-12-13T01:43:39.503900715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mt9t2,Uid:00e916a8-4e37-4540-9352-5c9af61a76e0,Namespace:kube-system,Attempt:0,}"
Dec 13 01:43:39.507694 containerd[1549]: time="2024-12-13T01:43:39.507672805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Dec 13 01:43:39.517506 containerd[1549]: time="2024-12-13T01:43:39.517258015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5bfd545-r874s,Uid:49679827-80b0-45f6-a6cd-64adcfa67f6f,Namespace:calico-apiserver,Attempt:0,}"
Dec 13 01:43:39.524286 containerd[1549]: time="2024-12-13T01:43:39.524204394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5bfd545-426d5,Uid:91cdf967-f24e-4903-a5a6-9102cde06b2f,Namespace:calico-apiserver,Attempt:0,}"
Dec 13 01:43:39.540653 containerd[1549]: time="2024-12-13T01:43:39.540626725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74d5794888-n2p4c,Uid:fec74b51-4f13-4ee0-a490-81de9f872b4f,Namespace:calico-system,Attempt:0,}"
Dec 13 01:43:39.753138 containerd[1549]: time="2024-12-13T01:43:39.752046947Z" level=error msg="Failed to destroy network for sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.753138 containerd[1549]: time="2024-12-13T01:43:39.752348947Z" level=error msg="encountered an error cleaning up failed sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.753138 containerd[1549]: time="2024-12-13T01:43:39.752392000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74d5794888-n2p4c,Uid:fec74b51-4f13-4ee0-a490-81de9f872b4f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.753138 containerd[1549]: time="2024-12-13T01:43:39.752736753Z" level=error msg="Failed to destroy network for sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.753479 containerd[1549]: time="2024-12-13T01:43:39.753464820Z" level=error msg="encountered an error cleaning up failed sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.753554 containerd[1549]: time="2024-12-13T01:43:39.753535949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5bfd545-426d5,Uid:91cdf967-f24e-4903-a5a6-9102cde06b2f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.756830 kubelet[2747]: E1213 01:43:39.756807 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.757200 containerd[1549]: time="2024-12-13T01:43:39.756930949Z" level=error msg="Failed to destroy network for sandbox \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.757200 containerd[1549]: time="2024-12-13T01:43:39.756931983Z" level=error msg="Failed to destroy network for sandbox \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.757265 kubelet[2747]: E1213 01:43:39.757060 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d5bfd545-426d5"
Dec 13 01:43:39.758047 containerd[1549]: time="2024-12-13T01:43:39.757196776Z" level=error msg="encountered an error cleaning up failed sandbox \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.758047 containerd[1549]: time="2024-12-13T01:43:39.757220394Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jp7n7,Uid:dd95d309-c209-4cf1-a636-b98ab7a31667,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.758093 kubelet[2747]: E1213 01:43:39.756807 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.758093 kubelet[2747]: E1213 01:43:39.757411 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74d5794888-n2p4c"
Dec 13 01:43:39.758289 containerd[1549]: time="2024-12-13T01:43:39.758213876Z" level=error msg="encountered an error cleaning up failed sandbox \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.758289 containerd[1549]: time="2024-12-13T01:43:39.758247791Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mt9t2,Uid:00e916a8-4e37-4540-9352-5c9af61a76e0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.759270 containerd[1549]: time="2024-12-13T01:43:39.759116319Z" level=error msg="Failed to destroy network for sandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.759342 containerd[1549]: time="2024-12-13T01:43:39.759321837Z" level=error msg="encountered an error cleaning up failed sandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.759388 containerd[1549]: time="2024-12-13T01:43:39.759343361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5bfd545-r874s,Uid:49679827-80b0-45f6-a6cd-64adcfa67f6f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.761067 kubelet[2747]: E1213 01:43:39.761051 2747 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d5bfd545-426d5"
Dec 13 01:43:39.761167 kubelet[2747]: E1213 01:43:39.761142 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d5bfd545-426d5_calico-apiserver(91cdf967-f24e-4903-a5a6-9102cde06b2f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d5bfd545-426d5_calico-apiserver(91cdf967-f24e-4903-a5a6-9102cde06b2f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d5bfd545-426d5" podUID="91cdf967-f24e-4903-a5a6-9102cde06b2f"
Dec 13 01:43:39.761442 kubelet[2747]: E1213 01:43:39.761431 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.761513 kubelet[2747]: E1213 01:43:39.761503 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d5bfd545-r874s"
Dec 13 01:43:39.762262 kubelet[2747]: E1213 01:43:39.762038 2747 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d5bfd545-r874s"
Dec 13 01:43:39.762262 kubelet[2747]: E1213 01:43:39.762071 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d5bfd545-r874s_calico-apiserver(49679827-80b0-45f6-a6cd-64adcfa67f6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d5bfd545-r874s_calico-apiserver(49679827-80b0-45f6-a6cd-64adcfa67f6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d5bfd545-r874s" podUID="49679827-80b0-45f6-a6cd-64adcfa67f6f"
Dec 13 01:43:39.762262 kubelet[2747]: E1213 01:43:39.761428 2747 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74d5794888-n2p4c"
Dec 13 01:43:39.762355 kubelet[2747]: E1213 01:43:39.762109 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74d5794888-n2p4c_calico-system(fec74b51-4f13-4ee0-a490-81de9f872b4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74d5794888-n2p4c_calico-system(fec74b51-4f13-4ee0-a490-81de9f872b4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74d5794888-n2p4c" podUID="fec74b51-4f13-4ee0-a490-81de9f872b4f"
Dec 13 01:43:39.762355 kubelet[2747]: E1213 01:43:39.761452 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.762355 kubelet[2747]: E1213 01:43:39.762134 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-jp7n7"
Dec 13 01:43:39.762453 kubelet[2747]: E1213 01:43:39.762142 2747 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-jp7n7"
Dec 13 01:43:39.762453 kubelet[2747]: E1213 01:43:39.762155 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-jp7n7_kube-system(dd95d309-c209-4cf1-a636-b98ab7a31667)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-jp7n7_kube-system(dd95d309-c209-4cf1-a636-b98ab7a31667)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-jp7n7" podUID="dd95d309-c209-4cf1-a636-b98ab7a31667"
Dec 13 01:43:39.762453 kubelet[2747]: E1213 01:43:39.761462 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.762871 kubelet[2747]: E1213 01:43:39.762169 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-mt9t2"
Dec 13 01:43:39.762871 kubelet[2747]: E1213 01:43:39.762176 2747 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-mt9t2"
Dec 13 01:43:39.762871 kubelet[2747]: E1213 01:43:39.762188 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-mt9t2_kube-system(00e916a8-4e37-4540-9352-5c9af61a76e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-mt9t2_kube-system(00e916a8-4e37-4540-9352-5c9af61a76e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-mt9t2" podUID="00e916a8-4e37-4540-9352-5c9af61a76e0"
Dec 13 01:43:39.860149 systemd[1]: Created slice kubepods-besteffort-podbe0402b2_9394_4ec8_b88a_518cccbc701b.slice - libcontainer container kubepods-besteffort-podbe0402b2_9394_4ec8_b88a_518cccbc701b.slice.
Dec 13 01:43:39.862090 containerd[1549]: time="2024-12-13T01:43:39.862004811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sgckt,Uid:be0402b2-9394-4ec8-b88a-518cccbc701b,Namespace:calico-system,Attempt:0,}"
Dec 13 01:43:39.897206 containerd[1549]: time="2024-12-13T01:43:39.897181905Z" level=error msg="Failed to destroy network for sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.897754 containerd[1549]: time="2024-12-13T01:43:39.897468186Z" level=error msg="encountered an error cleaning up failed sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.897754 containerd[1549]: time="2024-12-13T01:43:39.897502886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sgckt,Uid:be0402b2-9394-4ec8-b88a-518cccbc701b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.900019 kubelet[2747]: E1213 01:43:39.897634 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:39.900019 kubelet[2747]: E1213 01:43:39.897691 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sgckt"
Dec 13 01:43:39.900019 kubelet[2747]: E1213 01:43:39.897703 2747 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sgckt"
Dec 13 01:43:39.900088 kubelet[2747]: E1213 01:43:39.897729 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sgckt_calico-system(be0402b2-9394-4ec8-b88a-518cccbc701b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sgckt_calico-system(be0402b2-9394-4ec8-b88a-518cccbc701b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sgckt" podUID="be0402b2-9394-4ec8-b88a-518cccbc701b"
Dec 13 01:43:40.508523 kubelet[2747]: I1213 01:43:40.508485 2747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586"
Dec 13 01:43:40.509836 kubelet[2747]: I1213 01:43:40.509820 2747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305"
Dec 13 01:43:40.519808 kubelet[2747]: I1213 01:43:40.519426 2747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3"
Dec 13 01:43:40.521197 kubelet[2747]: I1213 01:43:40.521178 2747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29"
Dec 13 01:43:40.523990 kubelet[2747]: I1213 01:43:40.523787 2747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39"
Dec 13 01:43:40.525638 kubelet[2747]: I1213 01:43:40.525628 2747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2"
Dec 13 01:43:40.537605 containerd[1549]: time="2024-12-13T01:43:40.537088652Z" level=info msg="StopPodSandbox for \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\""
Dec 13 01:43:40.537895 containerd[1549]: time="2024-12-13T01:43:40.537800817Z" level=info msg="StopPodSandbox for \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\""
Dec 13 01:43:40.539719 containerd[1549]: time="2024-12-13T01:43:40.539553938Z" level=info msg="Ensure that sandbox dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305 in task-service has been cleanup successfully"
Dec 13 01:43:40.539719 containerd[1549]: time="2024-12-13T01:43:40.539582562Z" level=info msg="StopPodSandbox for \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\""
Dec 13 01:43:40.539719 containerd[1549]: time="2024-12-13T01:43:40.539653287Z" level=info msg="Ensure that sandbox 2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3 in task-service has been cleanup successfully"
Dec 13 01:43:40.546271 containerd[1549]: time="2024-12-13T01:43:40.546255080Z" level=info msg="StopPodSandbox for \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\""
Dec 13 01:43:40.546515 containerd[1549]: time="2024-12-13T01:43:40.546336389Z" level=info msg="Ensure that sandbox 1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29 in task-service has been cleanup successfully"
Dec 13 01:43:40.546515 containerd[1549]: time="2024-12-13T01:43:40.546343682Z" level=info msg="StopPodSandbox for \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\""
Dec 13 01:43:40.546515 containerd[1549]: time="2024-12-13T01:43:40.546421851Z" level=info msg="Ensure that sandbox d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586 in task-service has been cleanup successfully"
Dec 13 01:43:40.547027 containerd[1549]: time="2024-12-13T01:43:40.547013913Z" level=info msg="StopPodSandbox for \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\""
Dec 13 01:43:40.547101 containerd[1549]: time="2024-12-13T01:43:40.547088004Z" level=info msg="Ensure that sandbox ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39 in task-service has been cleanup successfully"
Dec 13 01:43:40.547402 containerd[1549]: time="2024-12-13T01:43:40.539556510Z" level=info msg="Ensure that sandbox 9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2 in task-service has been cleanup successfully"
Dec 13 01:43:40.598610 containerd[1549]: time="2024-12-13T01:43:40.598540411Z" level=error msg="StopPodSandbox for \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\" failed" error="failed to destroy network for sandbox \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:40.598875 kubelet[2747]: E1213 01:43:40.598693 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2"
Dec 13 01:43:40.598875 kubelet[2747]: E1213 01:43:40.598743 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2"}
Dec 13 01:43:40.598875 kubelet[2747]: E1213 01:43:40.598798 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00e916a8-4e37-4540-9352-5c9af61a76e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:43:40.598875 kubelet[2747]: E1213 01:43:40.598813 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00e916a8-4e37-4540-9352-5c9af61a76e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-mt9t2" podUID="00e916a8-4e37-4540-9352-5c9af61a76e0"
Dec 13 01:43:40.599101 containerd[1549]: time="2024-12-13T01:43:40.599065695Z" level=error msg="StopPodSandbox for \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\" failed" error="failed to destroy network for sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:40.599988 kubelet[2747]: E1213 01:43:40.599566 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305"
Dec 13 01:43:40.599988 kubelet[2747]: E1213 01:43:40.599584 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305"}
Dec 13 01:43:40.599988 kubelet[2747]: E1213 01:43:40.599601 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fec74b51-4f13-4ee0-a490-81de9f872b4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:43:40.599988 kubelet[2747]: E1213 01:43:40.599612 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fec74b51-4f13-4ee0-a490-81de9f872b4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74d5794888-n2p4c" podUID="fec74b51-4f13-4ee0-a490-81de9f872b4f"
Dec 13 01:43:40.600718 containerd[1549]: time="2024-12-13T01:43:40.600694951Z" level=error msg="StopPodSandbox for \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\" failed" error="failed to destroy network for sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:40.600792 kubelet[2747]: E1213 01:43:40.600779 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3"
Dec 13 01:43:40.600825 kubelet[2747]: E1213 01:43:40.600795 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3"}
Dec 13 01:43:40.600847 kubelet[2747]: E1213 01:43:40.600809 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91cdf967-f24e-4903-a5a6-9102cde06b2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:43:40.600847 kubelet[2747]: E1213 01:43:40.600837 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91cdf967-f24e-4903-a5a6-9102cde06b2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d5bfd545-426d5" podUID="91cdf967-f24e-4903-a5a6-9102cde06b2f"
Dec 13 01:43:40.603354 containerd[1549]: time="2024-12-13T01:43:40.603333718Z" level=error msg="StopPodSandbox for \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\" failed" error="failed to destroy network for sandbox \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:40.603426 kubelet[2747]: E1213 01:43:40.603410 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39"
Dec 13 01:43:40.603459 kubelet[2747]: E1213 01:43:40.603433 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39"}
Dec 13 01:43:40.603459 kubelet[2747]: E1213 01:43:40.603448 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dd95d309-c209-4cf1-a636-b98ab7a31667\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:43:40.603516 kubelet[2747]: E1213 01:43:40.603458 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dd95d309-c209-4cf1-a636-b98ab7a31667\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-jp7n7" podUID="dd95d309-c209-4cf1-a636-b98ab7a31667"
Dec 13 01:43:40.606034 containerd[1549]: time="2024-12-13T01:43:40.605949704Z" level=error msg="StopPodSandbox for \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\" failed" error="failed to destroy network for sandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:40.606123 kubelet[2747]: E1213 01:43:40.606107 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29"
Dec 13 01:43:40.606149 kubelet[2747]: E1213 01:43:40.606126 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29"}
Dec 13 01:43:40.606149 kubelet[2747]: E1213 01:43:40.606144 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"49679827-80b0-45f6-a6cd-64adcfa67f6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:43:40.606201 kubelet[2747]: E1213 01:43:40.606155 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"49679827-80b0-45f6-a6cd-64adcfa67f6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d5bfd545-r874s" podUID="49679827-80b0-45f6-a6cd-64adcfa67f6f"
Dec 13 01:43:40.606442 containerd[1549]: time="2024-12-13T01:43:40.606422596Z" level=error msg="StopPodSandbox for \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\" failed" error="failed to destroy network for sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:40.606520 kubelet[2747]: E1213 01:43:40.606504 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586"
Dec 13 01:43:40.606548 kubelet[2747]: E1213 01:43:40.606522 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586"}
Dec 13 01:43:40.606548 kubelet[2747]: E1213 01:43:40.606535 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be0402b2-9394-4ec8-b88a-518cccbc701b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:43:40.606597 kubelet[2747]: E1213 01:43:40.606562 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be0402b2-9394-4ec8-b88a-518cccbc701b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sgckt" podUID="be0402b2-9394-4ec8-b88a-518cccbc701b"
Dec 13 01:43:41.025145 kubelet[2747]: I1213 01:43:41.025118 2747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:43:44.380226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3789831175.mount: Deactivated successfully.
Dec 13 01:43:44.625133 containerd[1549]: time="2024-12-13T01:43:44.612118966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:43:44.635905 containerd[1549]: time="2024-12-13T01:43:44.635776913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Dec 13 01:43:44.674232 containerd[1549]: time="2024-12-13T01:43:44.674126892Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:43:44.713953 containerd[1549]: time="2024-12-13T01:43:44.713844583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:43:44.738208 containerd[1549]: time="2024-12-13T01:43:44.737631783Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.22987722s"
Dec 13 01:43:44.738208 containerd[1549]: time="2024-12-13T01:43:44.737660203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Dec 13 01:43:44.803123 containerd[1549]: time="2024-12-13T01:43:44.803094480Z" level=info msg="CreateContainer within sandbox \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Dec 13 01:43:44.828775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4236154403.mount: Deactivated successfully.
Dec 13 01:43:44.834081 containerd[1549]: time="2024-12-13T01:43:44.834061845Z" level=info msg="CreateContainer within sandbox \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e851492808c544efc8ad952ef7e7c73e45289fe4b1eb82fef08584db56396fbb\""
Dec 13 01:43:44.835092 containerd[1549]: time="2024-12-13T01:43:44.834611630Z" level=info msg="StartContainer for \"e851492808c544efc8ad952ef7e7c73e45289fe4b1eb82fef08584db56396fbb\""
Dec 13 01:43:44.905068 systemd[1]: Started cri-containerd-e851492808c544efc8ad952ef7e7c73e45289fe4b1eb82fef08584db56396fbb.scope - libcontainer container e851492808c544efc8ad952ef7e7c73e45289fe4b1eb82fef08584db56396fbb.
Dec 13 01:43:44.933318 containerd[1549]: time="2024-12-13T01:43:44.933293911Z" level=info msg="StartContainer for \"e851492808c544efc8ad952ef7e7c73e45289fe4b1eb82fef08584db56396fbb\" returns successfully"
Dec 13 01:43:45.040074 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Dec 13 01:43:45.040425 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Dec 13 01:43:45.080395 systemd[1]: cri-containerd-e851492808c544efc8ad952ef7e7c73e45289fe4b1eb82fef08584db56396fbb.scope: Deactivated successfully.
Dec 13 01:43:45.293927 containerd[1549]: time="2024-12-13T01:43:45.290959130Z" level=info msg="shim disconnected" id=e851492808c544efc8ad952ef7e7c73e45289fe4b1eb82fef08584db56396fbb namespace=k8s.io
Dec 13 01:43:45.293927 containerd[1549]: time="2024-12-13T01:43:45.293872655Z" level=warning msg="cleaning up after shim disconnected" id=e851492808c544efc8ad952ef7e7c73e45289fe4b1eb82fef08584db56396fbb namespace=k8s.io
Dec 13 01:43:45.293927 containerd[1549]: time="2024-12-13T01:43:45.293879280Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:43:45.557258 kubelet[2747]: I1213 01:43:45.557015 2747 scope.go:117] "RemoveContainer" containerID="e851492808c544efc8ad952ef7e7c73e45289fe4b1eb82fef08584db56396fbb"
Dec 13 01:43:45.560318 containerd[1549]: time="2024-12-13T01:43:45.560247131Z" level=info msg="CreateContainer within sandbox \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}"
Dec 13 01:43:45.567428 containerd[1549]: time="2024-12-13T01:43:45.567370050Z" level=info msg="CreateContainer within sandbox \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"3082505d9c48543114d06b8249db15ca472db123c3f25bff7380157c78158388\""
Dec 13 01:43:45.568114 containerd[1549]: time="2024-12-13T01:43:45.567700047Z" level=info msg="StartContainer for \"3082505d9c48543114d06b8249db15ca472db123c3f25bff7380157c78158388\""
Dec 13 01:43:45.589074 systemd[1]: Started cri-containerd-3082505d9c48543114d06b8249db15ca472db123c3f25bff7380157c78158388.scope - libcontainer container 3082505d9c48543114d06b8249db15ca472db123c3f25bff7380157c78158388.
Dec 13 01:43:45.605901 containerd[1549]: time="2024-12-13T01:43:45.605877105Z" level=info msg="StartContainer for \"3082505d9c48543114d06b8249db15ca472db123c3f25bff7380157c78158388\" returns successfully"
Dec 13 01:43:45.644860 systemd[1]: cri-containerd-3082505d9c48543114d06b8249db15ca472db123c3f25bff7380157c78158388.scope: Deactivated successfully.
Dec 13 01:43:45.658807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3082505d9c48543114d06b8249db15ca472db123c3f25bff7380157c78158388-rootfs.mount: Deactivated successfully.
Dec 13 01:43:45.659299 containerd[1549]: time="2024-12-13T01:43:45.659194847Z" level=info msg="shim disconnected" id=3082505d9c48543114d06b8249db15ca472db123c3f25bff7380157c78158388 namespace=k8s.io
Dec 13 01:43:45.659299 containerd[1549]: time="2024-12-13T01:43:45.659238402Z" level=warning msg="cleaning up after shim disconnected" id=3082505d9c48543114d06b8249db15ca472db123c3f25bff7380157c78158388 namespace=k8s.io
Dec 13 01:43:45.659299 containerd[1549]: time="2024-12-13T01:43:45.659245894Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:43:46.604784 kubelet[2747]: I1213 01:43:46.604580 2747 scope.go:117] "RemoveContainer" containerID="e851492808c544efc8ad952ef7e7c73e45289fe4b1eb82fef08584db56396fbb"
Dec 13 01:43:46.604784 kubelet[2747]: I1213 01:43:46.604657 2747 scope.go:117] "RemoveContainer" containerID="3082505d9c48543114d06b8249db15ca472db123c3f25bff7380157c78158388"
Dec 13 01:43:46.612650 kubelet[2747]: E1213 01:43:46.612155 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-47x4r_calico-system(cafa636e-d27e-4a03-b0b6-680253dba261)\"" pod="calico-system/calico-node-47x4r" podUID="cafa636e-d27e-4a03-b0b6-680253dba261"
Dec 13 01:43:46.617852 containerd[1549]: time="2024-12-13T01:43:46.617822080Z" level=info msg="RemoveContainer for \"e851492808c544efc8ad952ef7e7c73e45289fe4b1eb82fef08584db56396fbb\""
Dec 13 01:43:46.641312 containerd[1549]: time="2024-12-13T01:43:46.641283403Z" level=info msg="RemoveContainer for \"e851492808c544efc8ad952ef7e7c73e45289fe4b1eb82fef08584db56396fbb\" returns successfully"
Dec 13 01:43:51.856792 containerd[1549]: time="2024-12-13T01:43:51.856770330Z" level=info msg="StopPodSandbox for \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\""
Dec 13 01:43:51.955087 containerd[1549]: time="2024-12-13T01:43:51.955000435Z" level=error msg="StopPodSandbox for \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\" failed" error="failed to destroy network for sandbox \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:51.955232 kubelet[2747]: E1213 01:43:51.955188 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39"
Dec 13 01:43:51.955564 kubelet[2747]: E1213 01:43:51.955246 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39"}
Dec 13 01:43:51.955564 kubelet[2747]: E1213 01:43:51.955281 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dd95d309-c209-4cf1-a636-b98ab7a31667\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:43:51.955564 kubelet[2747]: E1213 01:43:51.955302 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dd95d309-c209-4cf1-a636-b98ab7a31667\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-jp7n7" podUID="dd95d309-c209-4cf1-a636-b98ab7a31667"
Dec 13 01:43:52.857351 containerd[1549]: time="2024-12-13T01:43:52.856901862Z" level=info msg="StopPodSandbox for \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\""
Dec 13 01:43:52.857351 containerd[1549]: time="2024-12-13T01:43:52.857010148Z" level=info msg="StopPodSandbox for \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\""
Dec 13 01:43:52.859368 containerd[1549]: time="2024-12-13T01:43:52.858832500Z" level=info msg="StopPodSandbox for \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\""
Dec 13 01:43:52.890106 containerd[1549]: time="2024-12-13T01:43:52.890075272Z" level=error msg="StopPodSandbox for \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\" failed" error="failed to destroy network for sandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:52.890338 kubelet[2747]: E1213 01:43:52.890318 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29"
Dec 13 01:43:52.890452 kubelet[2747]: E1213 01:43:52.890373 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29"}
Dec 13 01:43:52.890452 kubelet[2747]: E1213 01:43:52.890395 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"49679827-80b0-45f6-a6cd-64adcfa67f6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:43:52.890452 kubelet[2747]: E1213 01:43:52.890409 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"49679827-80b0-45f6-a6cd-64adcfa67f6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d5bfd545-r874s" podUID="49679827-80b0-45f6-a6cd-64adcfa67f6f"
Dec 13 01:43:52.892288 containerd[1549]: time="2024-12-13T01:43:52.892082742Z" level=error msg="StopPodSandbox for \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\" failed" error="failed to destroy network for sandbox \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:52.892469 kubelet[2747]: E1213 01:43:52.892205 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2"
Dec 13 01:43:52.892469 kubelet[2747]: E1213 01:43:52.892239 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2"}
Dec 13 01:43:52.892469 kubelet[2747]: E1213 01:43:52.892260 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00e916a8-4e37-4540-9352-5c9af61a76e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:43:52.892469 kubelet[2747]: E1213 01:43:52.892282 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00e916a8-4e37-4540-9352-5c9af61a76e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-mt9t2" podUID="00e916a8-4e37-4540-9352-5c9af61a76e0"
Dec 13 01:43:52.898995 containerd[1549]: time="2024-12-13T01:43:52.898959417Z" level=error msg="StopPodSandbox for \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\" failed" error="failed to destroy network for sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:52.899288 kubelet[2747]: E1213 01:43:52.899121 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586"
Dec 13 01:43:52.899288 kubelet[2747]: E1213 01:43:52.899155 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586"}
Dec 13 01:43:52.899288 kubelet[2747]: E1213 01:43:52.899178 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be0402b2-9394-4ec8-b88a-518cccbc701b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:43:52.899448 kubelet[2747]: E1213 01:43:52.899424 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be0402b2-9394-4ec8-b88a-518cccbc701b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sgckt" podUID="be0402b2-9394-4ec8-b88a-518cccbc701b"
Dec 13 01:43:53.858489 containerd[1549]: time="2024-12-13T01:43:53.858077615Z" level=info msg="StopPodSandbox for \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\""
Dec 13 01:43:53.859268 containerd[1549]: time="2024-12-13T01:43:53.859251568Z" level=info msg="StopPodSandbox for \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\""
Dec 13 01:43:53.893337 containerd[1549]: time="2024-12-13T01:43:53.893296405Z" level=error msg="StopPodSandbox for \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\" failed" error="failed to destroy network for sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:43:53.893475 kubelet[2747]: E1213 01:43:53.893445 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3"
Dec 13 01:43:53.893689 kubelet[2747]: E1213 01:43:53.893481 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3"}
Dec 13 01:43:53.893689 kubelet[2747]: E1213 01:43:53.893511 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91cdf967-f24e-4903-a5a6-9102cde06b2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox
\\\"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:43:53.893689 kubelet[2747]: E1213 01:43:53.893531 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91cdf967-f24e-4903-a5a6-9102cde06b2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d5bfd545-426d5" podUID="91cdf967-f24e-4903-a5a6-9102cde06b2f" Dec 13 01:43:53.894242 containerd[1549]: time="2024-12-13T01:43:53.894224348Z" level=error msg="StopPodSandbox for \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\" failed" error="failed to destroy network for sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:43:53.894407 kubelet[2747]: E1213 01:43:53.894387 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Dec 13 01:43:53.894442 kubelet[2747]: E1213 01:43:53.894412 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305"} Dec 13 01:43:53.894442 kubelet[2747]: E1213 01:43:53.894429 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fec74b51-4f13-4ee0-a490-81de9f872b4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:43:53.894497 kubelet[2747]: E1213 01:43:53.894442 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fec74b51-4f13-4ee0-a490-81de9f872b4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74d5794888-n2p4c" podUID="fec74b51-4f13-4ee0-a490-81de9f872b4f" Dec 13 01:43:59.856747 kubelet[2747]: I1213 01:43:59.856040 2747 scope.go:117] "RemoveContainer" containerID="3082505d9c48543114d06b8249db15ca472db123c3f25bff7380157c78158388" 
Dec 13 01:43:59.858762 containerd[1549]: time="2024-12-13T01:43:59.858708538Z" level=info msg="CreateContainer within sandbox \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" Dec 13 01:43:59.941592 containerd[1549]: time="2024-12-13T01:43:59.941502054Z" level=info msg="CreateContainer within sandbox \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"fc46aa7e3bfbd10b5ac15c6ccc27ca63306bdc2ee4d7a3f384a3cb8acdb16d1c\"" Dec 13 01:43:59.942300 containerd[1549]: time="2024-12-13T01:43:59.942043212Z" level=info msg="StartContainer for \"fc46aa7e3bfbd10b5ac15c6ccc27ca63306bdc2ee4d7a3f384a3cb8acdb16d1c\"" Dec 13 01:43:59.973089 systemd[1]: Started cri-containerd-fc46aa7e3bfbd10b5ac15c6ccc27ca63306bdc2ee4d7a3f384a3cb8acdb16d1c.scope - libcontainer container fc46aa7e3bfbd10b5ac15c6ccc27ca63306bdc2ee4d7a3f384a3cb8acdb16d1c. Dec 13 01:43:59.990508 containerd[1549]: time="2024-12-13T01:43:59.990438923Z" level=info msg="StartContainer for \"fc46aa7e3bfbd10b5ac15c6ccc27ca63306bdc2ee4d7a3f384a3cb8acdb16d1c\" returns successfully" Dec 13 01:44:00.213169 systemd[1]: cri-containerd-fc46aa7e3bfbd10b5ac15c6ccc27ca63306bdc2ee4d7a3f384a3cb8acdb16d1c.scope: Deactivated successfully. Dec 13 01:44:00.227172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc46aa7e3bfbd10b5ac15c6ccc27ca63306bdc2ee4d7a3f384a3cb8acdb16d1c-rootfs.mount: Deactivated successfully. Dec 13 01:44:00.231226 containerd[1549]: time="2024-12-13T01:44:00.231176345Z" level=info msg="shim disconnected" id=fc46aa7e3bfbd10b5ac15c6ccc27ca63306bdc2ee4d7a3f384a3cb8acdb16d1c namespace=k8s.io Dec 13 01:44:00.231226 containerd[1549]: time="2024-12-13T01:44:00.231218890Z" level=warning msg="cleaning up after shim disconnected" id=fc46aa7e3bfbd10b5ac15c6ccc27ca63306bdc2ee4d7a3f384a3cb8acdb16d1c namespace=k8s.io Dec 13 01:44:00.231422 containerd[1549]: time="2024-12-13T01:44:00.231228453Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:44:00.574680 kubelet[2747]: I1213 01:44:00.574603 2747 scope.go:117] "RemoveContainer" containerID="3082505d9c48543114d06b8249db15ca472db123c3f25bff7380157c78158388" Dec 13 01:44:00.574951 kubelet[2747]: I1213 01:44:00.574874 2747 scope.go:117] "RemoveContainer" containerID="fc46aa7e3bfbd10b5ac15c6ccc27ca63306bdc2ee4d7a3f384a3cb8acdb16d1c" Dec 13 01:44:00.576090 containerd[1549]: time="2024-12-13T01:44:00.576053333Z" level=info msg="RemoveContainer for \"3082505d9c48543114d06b8249db15ca472db123c3f25bff7380157c78158388\"" Dec 13 01:44:00.596414 containerd[1549]: time="2024-12-13T01:44:00.596385888Z" level=info msg="RemoveContainer for \"3082505d9c48543114d06b8249db15ca472db123c3f25bff7380157c78158388\" returns successfully" Dec 13 01:44:00.608106 kubelet[2747]: E1213 01:44:00.608047 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-47x4r_calico-system(cafa636e-d27e-4a03-b0b6-680253dba261)\"" pod="calico-system/calico-node-47x4r" podUID="cafa636e-d27e-4a03-b0b6-680253dba261" Dec 13 01:44:03.857309 containerd[1549]: time="2024-12-13T01:44:03.857281190Z" level=info msg="StopPodSandbox for \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\"" Dec 13 01:44:03.876243 containerd[1549]: time="2024-12-13T01:44:03.876153038Z" level=error msg="StopPodSandbox for 
\"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\" failed" error="failed to destroy network for sandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:44:03.879515 kubelet[2747]: E1213 01:44:03.876280 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Dec 13 01:44:03.879515 kubelet[2747]: E1213 01:44:03.876312 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29"} Dec 13 01:44:03.879515 kubelet[2747]: E1213 01:44:03.876336 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"49679827-80b0-45f6-a6cd-64adcfa67f6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:44:03.879515 kubelet[2747]: E1213 01:44:03.876350 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"49679827-80b0-45f6-a6cd-64adcfa67f6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d5bfd545-r874s" podUID="49679827-80b0-45f6-a6cd-64adcfa67f6f" Dec 13 01:44:05.856923 containerd[1549]: time="2024-12-13T01:44:05.856770985Z" level=info msg="StopPodSandbox for \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\"" Dec 13 01:44:05.856923 containerd[1549]: time="2024-12-13T01:44:05.856777972Z" level=info msg="StopPodSandbox for \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\"" Dec 13 01:44:05.880028 containerd[1549]: time="2024-12-13T01:44:05.879996129Z" level=error msg="StopPodSandbox for \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\" failed" error="failed to destroy network for sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:44:05.880314 kubelet[2747]: E1213 01:44:05.880135 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Dec 13 01:44:05.880314 kubelet[2747]: E1213 01:44:05.880175 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3"} Dec 13 01:44:05.880314 kubelet[2747]: E1213 01:44:05.880204 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91cdf967-f24e-4903-a5a6-9102cde06b2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:44:05.880314 kubelet[2747]: E1213 01:44:05.880222 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91cdf967-f24e-4903-a5a6-9102cde06b2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d5bfd545-426d5" podUID="91cdf967-f24e-4903-a5a6-9102cde06b2f" Dec 13 01:44:05.883366 containerd[1549]: time="2024-12-13T01:44:05.883306223Z" level=error msg="StopPodSandbox for \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\" failed" error="failed to destroy network for sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:44:05.883498 kubelet[2747]: E1213 01:44:05.883378 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Dec 13 01:44:05.883498 kubelet[2747]: E1213 01:44:05.883401 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305"} Dec 13 01:44:05.883498 kubelet[2747]: E1213 01:44:05.883418 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fec74b51-4f13-4ee0-a490-81de9f872b4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:44:05.883498 kubelet[2747]: E1213 01:44:05.883429 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"fec74b51-4f13-4ee0-a490-81de9f872b4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74d5794888-n2p4c" podUID="fec74b51-4f13-4ee0-a490-81de9f872b4f" Dec 13 01:44:06.856318 containerd[1549]: time="2024-12-13T01:44:06.856039612Z" level=info msg="StopPodSandbox for \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\"" Dec 13 01:44:06.856428 containerd[1549]: time="2024-12-13T01:44:06.856413112Z" level=info msg="StopPodSandbox for \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\"" Dec 13 01:44:06.879038 containerd[1549]: time="2024-12-13T01:44:06.879009994Z" level=error msg="StopPodSandbox for \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\" failed" error="failed to destroy network for sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:44:06.879349 kubelet[2747]: E1213 01:44:06.879194 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Dec 13 01:44:06.879349 kubelet[2747]: E1213 01:44:06.879229 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586"} Dec 13 01:44:06.879349 kubelet[2747]: E1213 01:44:06.879254 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be0402b2-9394-4ec8-b88a-518cccbc701b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:44:06.881347 containerd[1549]: time="2024-12-13T01:44:06.881318757Z" level=error msg="StopPodSandbox for \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\" failed" error="failed to destroy network for sandbox \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:44:06.881600 kubelet[2747]: E1213 01:44:06.881443 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Dec 13 01:44:06.881600 kubelet[2747]: E1213 01:44:06.881473 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39"} Dec 13 01:44:06.881600 kubelet[2747]: E1213 01:44:06.881495 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dd95d309-c209-4cf1-a636-b98ab7a31667\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:44:06.881600 kubelet[2747]: E1213 01:44:06.881512 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dd95d309-c209-4cf1-a636-b98ab7a31667\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-jp7n7" podUID="dd95d309-c209-4cf1-a636-b98ab7a31667" Dec 13 01:44:06.890457 kubelet[2747]: E1213 01:44:06.890413 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be0402b2-9394-4ec8-b88a-518cccbc701b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sgckt" podUID="be0402b2-9394-4ec8-b88a-518cccbc701b" Dec 13 01:44:07.857503 containerd[1549]: time="2024-12-13T01:44:07.856008893Z" level=info msg="StopPodSandbox for \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\"" Dec 13 01:44:07.876313 containerd[1549]: time="2024-12-13T01:44:07.876279448Z" level=error msg="StopPodSandbox for \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\" failed" error="failed to destroy network for sandbox \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:44:07.876598 kubelet[2747]: E1213 01:44:07.876423 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Dec 13 01:44:07.876598 kubelet[2747]: E1213 01:44:07.876456 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2"} Dec 13 01:44:07.876598 kubelet[2747]: E1213 01:44:07.876479 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00e916a8-4e37-4540-9352-5c9af61a76e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:44:07.876598 kubelet[2747]: E1213 01:44:07.876499 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00e916a8-4e37-4540-9352-5c9af61a76e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-mt9t2" podUID="00e916a8-4e37-4540-9352-5c9af61a76e0" Dec 13 01:44:09.798771 kubelet[2747]: I1213 01:44:09.798650 2747 scope.go:117] "RemoveContainer" containerID="fc46aa7e3bfbd10b5ac15c6ccc27ca63306bdc2ee4d7a3f384a3cb8acdb16d1c" Dec 13 01:44:09.798771 kubelet[2747]: E1213 01:44:09.798740 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-47x4r_calico-system(cafa636e-d27e-4a03-b0b6-680253dba261)\"" pod="calico-system/calico-node-47x4r" podUID="cafa636e-d27e-4a03-b0b6-680253dba261" Dec 13 01:44:15.857136 containerd[1549]: time="2024-12-13T01:44:15.856915144Z" level=info msg="StopPodSandbox for \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\"" Dec 13 01:44:15.879062 containerd[1549]: time="2024-12-13T01:44:15.878914876Z" level=error msg="StopPodSandbox for \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\" failed" error="failed to destroy network for sandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:44:15.879159 kubelet[2747]: E1213 01:44:15.879067 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Dec 13 01:44:15.879159 kubelet[2747]: E1213 01:44:15.879105 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29"} Dec 13 01:44:15.879159 kubelet[2747]: E1213 01:44:15.879130 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"49679827-80b0-45f6-a6cd-64adcfa67f6f\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:44:15.879159 kubelet[2747]: E1213 01:44:15.879145 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"49679827-80b0-45f6-a6cd-64adcfa67f6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d5bfd545-r874s" podUID="49679827-80b0-45f6-a6cd-64adcfa67f6f" Dec 13 01:44:16.856545 containerd[1549]: time="2024-12-13T01:44:16.856462125Z" level=info msg="StopPodSandbox for \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\"" Dec 13 01:44:16.873390 containerd[1549]: time="2024-12-13T01:44:16.873355374Z" level=error msg="StopPodSandbox for \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\" failed" error="failed to destroy network for sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:44:16.873620 kubelet[2747]: E1213 01:44:16.873473 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Dec 13 01:44:16.873620 kubelet[2747]: E1213 01:44:16.873503 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305"} Dec 13 01:44:16.873620 kubelet[2747]: E1213 01:44:16.873526 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fec74b51-4f13-4ee0-a490-81de9f872b4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:44:16.873620 kubelet[2747]: E1213 01:44:16.873540 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fec74b51-4f13-4ee0-a490-81de9f872b4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-74d5794888-n2p4c" podUID="fec74b51-4f13-4ee0-a490-81de9f872b4f" Dec 13 01:44:18.856738 containerd[1549]: time="2024-12-13T01:44:18.856708182Z" level=info msg="StopPodSandbox for \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\"" Dec 13 01:44:18.874066 containerd[1549]: time="2024-12-13T01:44:18.874009112Z" level=error msg="StopPodSandbox for \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\" failed" error="failed to destroy network for sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:44:18.874170 kubelet[2747]: E1213 01:44:18.874139 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Dec 13 01:44:18.874342 kubelet[2747]: E1213 01:44:18.874173 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586"} Dec 13 01:44:18.874342 kubelet[2747]: E1213 01:44:18.874198 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be0402b2-9394-4ec8-b88a-518cccbc701b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:44:18.874342 kubelet[2747]: E1213 01:44:18.874216 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be0402b2-9394-4ec8-b88a-518cccbc701b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sgckt" podUID="be0402b2-9394-4ec8-b88a-518cccbc701b" Dec 13 01:44:19.857330 containerd[1549]: time="2024-12-13T01:44:19.857303970Z" level=info msg="StopPodSandbox for \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\"" Dec 13 01:44:19.875923 containerd[1549]: time="2024-12-13T01:44:19.875887320Z" level=error msg="StopPodSandbox for \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\" failed" error="failed to destroy network for sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:44:19.876121 kubelet[2747]: E1213 01:44:19.876089 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Dec 13 01:44:19.876298 kubelet[2747]: E1213 01:44:19.876132 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3"} Dec 13 01:44:19.876298 kubelet[2747]: E1213 01:44:19.876159 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91cdf967-f24e-4903-a5a6-9102cde06b2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:44:19.876298 kubelet[2747]: E1213 01:44:19.876175 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91cdf967-f24e-4903-a5a6-9102cde06b2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d5bfd545-426d5" podUID="91cdf967-f24e-4903-a5a6-9102cde06b2f" Dec 13 01:44:21.856963 containerd[1549]: time="2024-12-13T01:44:21.856922467Z" level=info msg="StopPodSandbox for \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\"" Dec 13 01:44:21.877840 containerd[1549]: time="2024-12-13T01:44:21.877801162Z" level=error msg="StopPodSandbox for \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\" failed" error="failed to destroy network for sandbox \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:44:21.878124 kubelet[2747]: E1213 01:44:21.877961 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Dec 13 01:44:21.878124 kubelet[2747]: E1213 01:44:21.878008 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39"} Dec 13 01:44:21.878124 kubelet[2747]: E1213 01:44:21.878032 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dd95d309-c209-4cf1-a636-b98ab7a31667\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:44:21.878124 kubelet[2747]: E1213 01:44:21.878046 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dd95d309-c209-4cf1-a636-b98ab7a31667\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-jp7n7" podUID="dd95d309-c209-4cf1-a636-b98ab7a31667" Dec 13 01:44:23.859127 containerd[1549]: time="2024-12-13T01:44:23.859060646Z" level=info msg="StopPodSandbox for \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\"" Dec 13 01:44:23.902640 containerd[1549]: time="2024-12-13T01:44:23.902600073Z" level=error msg="StopPodSandbox for \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\" failed" error="failed to destroy network for sandbox \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:44:23.902787 kubelet[2747]: E1213 01:44:23.902755 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Dec 13 01:44:23.902962 kubelet[2747]: E1213 01:44:23.902803 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2"} Dec 13 01:44:23.902962 kubelet[2747]: E1213 01:44:23.902831 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00e916a8-4e37-4540-9352-5c9af61a76e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:44:23.902962 kubelet[2747]: E1213 01:44:23.902847 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00e916a8-4e37-4540-9352-5c9af61a76e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-mt9t2" podUID="00e916a8-4e37-4540-9352-5c9af61a76e0" Dec 13 
01:44:24.856754 kubelet[2747]: I1213 01:44:24.856586 2747 scope.go:117] "RemoveContainer" containerID="fc46aa7e3bfbd10b5ac15c6ccc27ca63306bdc2ee4d7a3f384a3cb8acdb16d1c" Dec 13 01:44:24.858473 containerd[1549]: time="2024-12-13T01:44:24.858444014Z" level=info msg="CreateContainer within sandbox \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\" for container &ContainerMetadata{Name:calico-node,Attempt:3,}" Dec 13 01:44:24.867446 containerd[1549]: time="2024-12-13T01:44:24.867417085Z" level=info msg="CreateContainer within sandbox \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\" for &ContainerMetadata{Name:calico-node,Attempt:3,} returns container id \"920f2acc9162dd9ff2a527e34404d9599b0d7322fae089413ef8ff2ec1f98d60\"" Dec 13 01:44:24.868126 containerd[1549]: time="2024-12-13T01:44:24.868027692Z" level=info msg="StartContainer for \"920f2acc9162dd9ff2a527e34404d9599b0d7322fae089413ef8ff2ec1f98d60\"" Dec 13 01:44:24.896074 systemd[1]: Started cri-containerd-920f2acc9162dd9ff2a527e34404d9599b0d7322fae089413ef8ff2ec1f98d60.scope - libcontainer container 920f2acc9162dd9ff2a527e34404d9599b0d7322fae089413ef8ff2ec1f98d60. Dec 13 01:44:24.915779 containerd[1549]: time="2024-12-13T01:44:24.915750961Z" level=info msg="StartContainer for \"920f2acc9162dd9ff2a527e34404d9599b0d7322fae089413ef8ff2ec1f98d60\" returns successfully" Dec 13 01:44:25.004545 systemd[1]: cri-containerd-920f2acc9162dd9ff2a527e34404d9599b0d7322fae089413ef8ff2ec1f98d60.scope: Deactivated successfully. Dec 13 01:44:25.016216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-920f2acc9162dd9ff2a527e34404d9599b0d7322fae089413ef8ff2ec1f98d60-rootfs.mount: Deactivated successfully. Dec 13 01:44:25.030134 containerd[1549]: time="2024-12-13T01:44:25.030080799Z" level=info msg="shim disconnected" id=920f2acc9162dd9ff2a527e34404d9599b0d7322fae089413ef8ff2ec1f98d60 namespace=k8s.io Dec 13 01:44:25.030134 containerd[1549]: time="2024-12-13T01:44:25.030130342Z" level=warning msg="cleaning up after shim disconnected" id=920f2acc9162dd9ff2a527e34404d9599b0d7322fae089413ef8ff2ec1f98d60 namespace=k8s.io Dec 13 01:44:25.030271 containerd[1549]: time="2024-12-13T01:44:25.030140733Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:44:25.619366 kubelet[2747]: I1213 01:44:25.619151 2747 scope.go:117] "RemoveContainer" containerID="fc46aa7e3bfbd10b5ac15c6ccc27ca63306bdc2ee4d7a3f384a3cb8acdb16d1c" Dec 13 01:44:25.619871 kubelet[2747]: I1213 01:44:25.619417 2747 scope.go:117] "RemoveContainer" containerID="920f2acc9162dd9ff2a527e34404d9599b0d7322fae089413ef8ff2ec1f98d60" Dec 13 01:44:25.619871 kubelet[2747]: E1213 01:44:25.619509 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 40s restarting failed container=calico-node pod=calico-node-47x4r_calico-system(cafa636e-d27e-4a03-b0b6-680253dba261)\"" pod="calico-system/calico-node-47x4r" podUID="cafa636e-d27e-4a03-b0b6-680253dba261" Dec 13 01:44:25.620795 containerd[1549]: time="2024-12-13T01:44:25.620769444Z" level=info msg="RemoveContainer for \"fc46aa7e3bfbd10b5ac15c6ccc27ca63306bdc2ee4d7a3f384a3cb8acdb16d1c\"" Dec 13 01:44:25.623027 containerd[1549]: time="2024-12-13T01:44:25.623008618Z" level=info msg="RemoveContainer for \"fc46aa7e3bfbd10b5ac15c6ccc27ca63306bdc2ee4d7a3f384a3cb8acdb16d1c\" returns successfully" Dec 13 01:44:27.857199 containerd[1549]: time="2024-12-13T01:44:27.857164260Z" level=info msg="StopPodSandbox for 
\"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\"" Dec 13 01:44:27.875017 containerd[1549]: time="2024-12-13T01:44:27.874906308Z" level=error msg="StopPodSandbox for \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\" failed" error="failed to destroy network for sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:44:27.875113 kubelet[2747]: E1213 01:44:27.875033 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Dec 13 01:44:27.875113 kubelet[2747]: E1213 01:44:27.875065 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305"} Dec 13 01:44:27.875113 kubelet[2747]: E1213 01:44:27.875086 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fec74b51-4f13-4ee0-a490-81de9f872b4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:44:27.875113 kubelet[2747]: E1213 01:44:27.875101 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fec74b51-4f13-4ee0-a490-81de9f872b4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74d5794888-n2p4c" podUID="fec74b51-4f13-4ee0-a490-81de9f872b4f" Dec 13 01:44:28.108355 containerd[1549]: time="2024-12-13T01:44:28.108288009Z" level=info msg="StopPodSandbox for \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\"" Dec 13 01:44:28.109191 containerd[1549]: time="2024-12-13T01:44:28.108719012Z" level=info msg="Container to stop \"920f2acc9162dd9ff2a527e34404d9599b0d7322fae089413ef8ff2ec1f98d60\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:44:28.109191 containerd[1549]: time="2024-12-13T01:44:28.108737931Z" level=info msg="Container to stop \"fd1188a95a934d563b7d52c478ef6d4f01f5586fd286f9c43221b67144185676\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:44:28.109191 containerd[1549]: time="2024-12-13T01:44:28.108744736Z" level=info msg="Container to stop \"9d6108d520079b486411ff2ba476575c2267c0594cbd81f602014c049dd9f517\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:44:28.110676 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d-shm.mount: Deactivated successfully. Dec 13 01:44:28.128692 systemd[1]: cri-containerd-7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d.scope: Deactivated successfully. Dec 13 01:44:28.146143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d-rootfs.mount: Deactivated successfully. Dec 13 01:44:28.150998 containerd[1549]: time="2024-12-13T01:44:28.150929756Z" level=info msg="shim disconnected" id=7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d namespace=k8s.io Dec 13 01:44:28.150998 containerd[1549]: time="2024-12-13T01:44:28.150970708Z" level=warning msg="cleaning up after shim disconnected" id=7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d namespace=k8s.io Dec 13 01:44:28.151810 containerd[1549]: time="2024-12-13T01:44:28.151795428Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:44:28.167116 containerd[1549]: time="2024-12-13T01:44:28.167084262Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:44:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:44:28.168743 containerd[1549]: time="2024-12-13T01:44:28.168724084Z" level=info msg="TearDown network for sandbox \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\" successfully" Dec 13 01:44:28.168743 containerd[1549]: time="2024-12-13T01:44:28.168739871Z" level=info msg="StopPodSandbox for \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\" returns successfully" Dec 13 01:44:28.202821 kubelet[2747]: E1213 01:44:28.202795 2747 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cafa636e-d27e-4a03-b0b6-680253dba261" containerName="flexvol-driver" Dec 13 01:44:28.202821 kubelet[2747]: E1213 01:44:28.202820 2747 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cafa636e-d27e-4a03-b0b6-680253dba261" containerName="calico-node" Dec 13 01:44:28.202963 kubelet[2747]: E1213 01:44:28.202837 2747 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cafa636e-d27e-4a03-b0b6-680253dba261" containerName="install-cni" Dec 13 01:44:28.202963 kubelet[2747]: E1213 01:44:28.202853 2747 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cafa636e-d27e-4a03-b0b6-680253dba261" containerName="calico-node" Dec 13 01:44:28.202963 kubelet[2747]: E1213 01:44:28.202859 2747 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cafa636e-d27e-4a03-b0b6-680253dba261" containerName="calico-node" Dec 13 01:44:28.202963 kubelet[2747]: E1213 01:44:28.202868 2747 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cafa636e-d27e-4a03-b0b6-680253dba261" containerName="calico-node" Dec 13 01:44:28.202963 kubelet[2747]: I1213 01:44:28.202890 2747 memory_manager.go:354] "RemoveStaleState removing state" podUID="cafa636e-d27e-4a03-b0b6-680253dba261" containerName="calico-node" Dec 13 01:44:28.202963 kubelet[2747]: I1213 01:44:28.202894 2747 memory_manager.go:354] "RemoveStaleState removing state" podUID="cafa636e-d27e-4a03-b0b6-680253dba261" containerName="calico-node" Dec 13 01:44:28.202963 kubelet[2747]: I1213 01:44:28.202928 2747 memory_manager.go:354] "RemoveStaleState removing state" podUID="cafa636e-d27e-4a03-b0b6-680253dba261" containerName="calico-node" 
Dec 13 01:44:28.202963 kubelet[2747]: I1213 01:44:28.202933 2747 memory_manager.go:354] "RemoveStaleState removing state" podUID="cafa636e-d27e-4a03-b0b6-680253dba261" containerName="calico-node" Dec 13 01:44:28.208522 systemd[1]: Created slice kubepods-besteffort-pod78c4c1c1_ac12_4f4b_abe7_3c33de8f8b36.slice - libcontainer container kubepods-besteffort-pod78c4c1c1_ac12_4f4b_abe7_3c33de8f8b36.slice. Dec 13 01:44:28.332095 kubelet[2747]: I1213 01:44:28.332066 2747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-policysync\") pod \"cafa636e-d27e-4a03-b0b6-680253dba261\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " Dec 13 01:44:28.332209 kubelet[2747]: I1213 01:44:28.332099 2747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-lib-modules\") pod \"cafa636e-d27e-4a03-b0b6-680253dba261\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " Dec 13 01:44:28.332209 kubelet[2747]: I1213 01:44:28.332118 2747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-cni-net-dir\") pod \"cafa636e-d27e-4a03-b0b6-680253dba261\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " Dec 13 01:44:28.332209 kubelet[2747]: I1213 01:44:28.332131 2747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-flexvol-driver-host\") pod \"cafa636e-d27e-4a03-b0b6-680253dba261\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " Dec 13 01:44:28.332209 kubelet[2747]: I1213 01:44:28.332151 2747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-cni-log-dir\") pod \"cafa636e-d27e-4a03-b0b6-680253dba261\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " Dec 13 01:44:28.332209 kubelet[2747]: I1213 01:44:28.332164 2747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-var-run-calico\") pod \"cafa636e-d27e-4a03-b0b6-680253dba261\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " Dec 13 01:44:28.332209 kubelet[2747]: I1213 01:44:28.332180 2747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cafa636e-d27e-4a03-b0b6-680253dba261-tigera-ca-bundle\") pod \"cafa636e-d27e-4a03-b0b6-680253dba261\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " Dec 13 01:44:28.332686 kubelet[2747]: I1213 01:44:28.332190 2747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-xtables-lock\") pod \"cafa636e-d27e-4a03-b0b6-680253dba261\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " Dec 13 01:44:28.332686 kubelet[2747]: I1213 01:44:28.332208 2747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cafa636e-d27e-4a03-b0b6-680253dba261-node-certs\") pod \"cafa636e-d27e-4a03-b0b6-680253dba261\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " Dec 13 
01:44:28.332686 kubelet[2747]: I1213 01:44:28.332220 2747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-cni-bin-dir\") pod \"cafa636e-d27e-4a03-b0b6-680253dba261\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " Dec 13 01:44:28.332686 kubelet[2747]: I1213 01:44:28.332232 2747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx4n8\" (UniqueName: \"kubernetes.io/projected/cafa636e-d27e-4a03-b0b6-680253dba261-kube-api-access-mx4n8\") pod \"cafa636e-d27e-4a03-b0b6-680253dba261\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " Dec 13 01:44:28.332686 kubelet[2747]: I1213 01:44:28.332244 2747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-var-lib-calico\") pod \"cafa636e-d27e-4a03-b0b6-680253dba261\" (UID: \"cafa636e-d27e-4a03-b0b6-680253dba261\") " Dec 13 01:44:28.332686 kubelet[2747]: I1213 01:44:28.332297 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36-cni-net-dir\") pod \"calico-node-2z6sq\" (UID: \"78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36\") " pod="calico-system/calico-node-2z6sq" Dec 13 01:44:28.332870 kubelet[2747]: I1213 01:44:28.332317 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36-xtables-lock\") pod \"calico-node-2z6sq\" (UID: \"78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36\") " pod="calico-system/calico-node-2z6sq" Dec 13 01:44:28.332870 kubelet[2747]: I1213 01:44:28.332331 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36-tigera-ca-bundle\") pod \"calico-node-2z6sq\" (UID: \"78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36\") " pod="calico-system/calico-node-2z6sq" Dec 13 01:44:28.332870 kubelet[2747]: I1213 01:44:28.332343 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36-lib-modules\") pod \"calico-node-2z6sq\" (UID: \"78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36\") " pod="calico-system/calico-node-2z6sq" Dec 13 01:44:28.332870 kubelet[2747]: I1213 01:44:28.332356 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36-flexvol-driver-host\") pod \"calico-node-2z6sq\" (UID: \"78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36\") " pod="calico-system/calico-node-2z6sq" Dec 13 01:44:28.332870 kubelet[2747]: I1213 01:44:28.332368 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlchw\" (UniqueName: \"kubernetes.io/projected/78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36-kube-api-access-mlchw\") pod \"calico-node-2z6sq\" (UID: \"78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36\") " pod="calico-system/calico-node-2z6sq" Dec 13 01:44:28.333513 kubelet[2747]: I1213 01:44:28.332382 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36-var-run-calico\") pod \"calico-node-2z6sq\" (UID: \"78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36\") " pod="calico-system/calico-node-2z6sq" Dec 13 01:44:28.333513 kubelet[2747]: I1213 01:44:28.332395 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36-var-lib-calico\") pod \"calico-node-2z6sq\" (UID: \"78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36\") " pod="calico-system/calico-node-2z6sq" Dec 13 01:44:28.333513 kubelet[2747]: I1213 01:44:28.332409 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36-policysync\") pod \"calico-node-2z6sq\" (UID: \"78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36\") " pod="calico-system/calico-node-2z6sq" Dec 13 01:44:28.333513 kubelet[2747]: I1213 01:44:28.332420 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36-cni-bin-dir\") pod \"calico-node-2z6sq\" (UID: \"78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36\") " pod="calico-system/calico-node-2z6sq" Dec 13 01:44:28.333513 kubelet[2747]: I1213 01:44:28.332433 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36-node-certs\") pod \"calico-node-2z6sq\" (UID: \"78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36\") " pod="calico-system/calico-node-2z6sq" Dec 13 01:44:28.333712 kubelet[2747]: I1213 01:44:28.332454 2747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36-cni-log-dir\") pod \"calico-node-2z6sq\" (UID: \"78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36\") " pod="calico-system/calico-node-2z6sq" Dec 13 01:44:28.338570 systemd[1]: var-lib-kubelet-pods-cafa636e\x2dd27e\x2d4a03\x2db0b6\x2d680253dba261-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Dec 13 01:44:28.343243 kubelet[2747]: I1213 01:44:28.341801 2747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "cafa636e-d27e-4a03-b0b6-680253dba261" (UID: "cafa636e-d27e-4a03-b0b6-680253dba261"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:44:28.343243 kubelet[2747]: I1213 01:44:28.343007 2747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-policysync" (OuterVolumeSpecName: "policysync") pod "cafa636e-d27e-4a03-b0b6-680253dba261" (UID: "cafa636e-d27e-4a03-b0b6-680253dba261"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:44:28.343243 kubelet[2747]: I1213 01:44:28.343025 2747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cafa636e-d27e-4a03-b0b6-680253dba261" (UID: "cafa636e-d27e-4a03-b0b6-680253dba261"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:44:28.343243 kubelet[2747]: I1213 01:44:28.343037 2747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "cafa636e-d27e-4a03-b0b6-680253dba261" (UID: "cafa636e-d27e-4a03-b0b6-680253dba261"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:44:28.343243 kubelet[2747]: I1213 01:44:28.343050 2747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "cafa636e-d27e-4a03-b0b6-680253dba261" (UID: "cafa636e-d27e-4a03-b0b6-680253dba261"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:44:28.343384 kubelet[2747]: I1213 01:44:28.343065 2747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "cafa636e-d27e-4a03-b0b6-680253dba261" (UID: "cafa636e-d27e-4a03-b0b6-680253dba261"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:44:28.343950 kubelet[2747]: I1213 01:44:28.341871 2747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cafa636e-d27e-4a03-b0b6-680253dba261" (UID: "cafa636e-d27e-4a03-b0b6-680253dba261"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:44:28.343950 kubelet[2747]: I1213 01:44:28.343732 2747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cafa636e-d27e-4a03-b0b6-680253dba261-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "cafa636e-d27e-4a03-b0b6-680253dba261" (UID: "cafa636e-d27e-4a03-b0b6-680253dba261"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:44:28.343950 kubelet[2747]: I1213 01:44:28.343756 2747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "cafa636e-d27e-4a03-b0b6-680253dba261" (UID: "cafa636e-d27e-4a03-b0b6-680253dba261"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:44:28.343950 kubelet[2747]: I1213 01:44:28.343769 2747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "cafa636e-d27e-4a03-b0b6-680253dba261" (UID: "cafa636e-d27e-4a03-b0b6-680253dba261"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:44:28.351381 systemd[1]: var-lib-kubelet-pods-cafa636e\x2dd27e\x2d4a03\x2db0b6\x2d680253dba261-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Dec 13 01:44:28.351481 kubelet[2747]: I1213 01:44:28.351436 2747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cafa636e-d27e-4a03-b0b6-680253dba261-node-certs" (OuterVolumeSpecName: "node-certs") pod "cafa636e-d27e-4a03-b0b6-680253dba261" (UID: "cafa636e-d27e-4a03-b0b6-680253dba261"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:44:28.352909 kubelet[2747]: I1213 01:44:28.352861 2747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cafa636e-d27e-4a03-b0b6-680253dba261-kube-api-access-mx4n8" (OuterVolumeSpecName: "kube-api-access-mx4n8") pod "cafa636e-d27e-4a03-b0b6-680253dba261" (UID: "cafa636e-d27e-4a03-b0b6-680253dba261"). InnerVolumeSpecName "kube-api-access-mx4n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:44:28.354482 systemd[1]: var-lib-kubelet-pods-cafa636e\x2dd27e\x2d4a03\x2db0b6\x2d680253dba261-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmx4n8.mount: Deactivated successfully. Dec 13 01:44:28.436314 kubelet[2747]: I1213 01:44:28.435336 2747 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Dec 13 01:44:28.436314 kubelet[2747]: I1213 01:44:28.436122 2747 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-policysync\") on node \"localhost\" DevicePath \"\"" Dec 13 01:44:28.436314 kubelet[2747]: I1213 01:44:28.436135 2747 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 01:44:28.436314 kubelet[2747]: I1213 01:44:28.436143 2747 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Dec 13 01:44:28.436314 kubelet[2747]: I1213 01:44:28.436152 2747 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Dec 13 01:44:28.436314 kubelet[2747]: I1213 01:44:28.436158 2747 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cafa636e-d27e-4a03-b0b6-680253dba261-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 13 01:44:28.436314 kubelet[2747]: I1213 01:44:28.436165 2747 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Dec 13 01:44:28.436314 kubelet[2747]: I1213 01:44:28.436171 2747 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mx4n8\" (UniqueName: \"kubernetes.io/projected/cafa636e-d27e-4a03-b0b6-680253dba261-kube-api-access-mx4n8\") on node \"localhost\" DevicePath \"\"" Dec 13 01:44:28.436545 kubelet[2747]: I1213 01:44:28.436178 2747 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Dec 13 01:44:28.436545 
kubelet[2747]: I1213 01:44:28.436184 2747 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 01:44:28.436545 kubelet[2747]: I1213 01:44:28.436192 2747 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cafa636e-d27e-4a03-b0b6-680253dba261-var-run-calico\") on node \"localhost\" DevicePath \"\"" Dec 13 01:44:28.436545 kubelet[2747]: I1213 01:44:28.436198 2747 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cafa636e-d27e-4a03-b0b6-680253dba261-node-certs\") on node \"localhost\" DevicePath \"\"" Dec 13 01:44:28.511289 containerd[1549]: time="2024-12-13T01:44:28.511217297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2z6sq,Uid:78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36,Namespace:calico-system,Attempt:0,}" Dec 13 01:44:28.524361 containerd[1549]: time="2024-12-13T01:44:28.524183869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:44:28.524640 containerd[1549]: time="2024-12-13T01:44:28.524319476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:44:28.524746 containerd[1549]: time="2024-12-13T01:44:28.524724895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:44:28.524967 containerd[1549]: time="2024-12-13T01:44:28.524923715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:44:28.542122 systemd[1]: Started cri-containerd-04b69a7a3c199d50a89e185b4664ba2a49cba292d4bc2060ac7991b8f7c2cf09.scope - libcontainer container 04b69a7a3c199d50a89e185b4664ba2a49cba292d4bc2060ac7991b8f7c2cf09. Dec 13 01:44:28.561711 containerd[1549]: time="2024-12-13T01:44:28.561184009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2z6sq,Uid:78c4c1c1-ac12-4f4b-abe7-3c33de8f8b36,Namespace:calico-system,Attempt:0,} returns sandbox id \"04b69a7a3c199d50a89e185b4664ba2a49cba292d4bc2060ac7991b8f7c2cf09\"" Dec 13 01:44:28.563891 containerd[1549]: time="2024-12-13T01:44:28.563839257Z" level=info msg="CreateContainer within sandbox \"04b69a7a3c199d50a89e185b4664ba2a49cba292d4bc2060ac7991b8f7c2cf09\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:44:28.570223 containerd[1549]: time="2024-12-13T01:44:28.569990241Z" level=info msg="CreateContainer within sandbox \"04b69a7a3c199d50a89e185b4664ba2a49cba292d4bc2060ac7991b8f7c2cf09\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5cb566f4c97d5328f8a453e5e3a36f1043afe3e4a4fe595e8e7c41e722650bb9\"" Dec 13 01:44:28.570564 containerd[1549]: time="2024-12-13T01:44:28.570379182Z" level=info msg="StartContainer for \"5cb566f4c97d5328f8a453e5e3a36f1043afe3e4a4fe595e8e7c41e722650bb9\"" Dec 13 01:44:28.591197 systemd[1]: Started cri-containerd-5cb566f4c97d5328f8a453e5e3a36f1043afe3e4a4fe595e8e7c41e722650bb9.scope - libcontainer container 5cb566f4c97d5328f8a453e5e3a36f1043afe3e4a4fe595e8e7c41e722650bb9. 
Dec 13 01:44:28.610890 containerd[1549]: time="2024-12-13T01:44:28.610865548Z" level=info msg="StartContainer for \"5cb566f4c97d5328f8a453e5e3a36f1043afe3e4a4fe595e8e7c41e722650bb9\" returns successfully" Dec 13 01:44:28.627175 kubelet[2747]: I1213 01:44:28.627013 2747 scope.go:117] "RemoveContainer" containerID="920f2acc9162dd9ff2a527e34404d9599b0d7322fae089413ef8ff2ec1f98d60" Dec 13 01:44:28.628322 containerd[1549]: time="2024-12-13T01:44:28.628046260Z" level=info msg="RemoveContainer for \"920f2acc9162dd9ff2a527e34404d9599b0d7322fae089413ef8ff2ec1f98d60\"" Dec 13 01:44:28.633637 systemd[1]: Removed slice kubepods-besteffort-podcafa636e_d27e_4a03_b0b6_680253dba261.slice - libcontainer container kubepods-besteffort-podcafa636e_d27e_4a03_b0b6_680253dba261.slice. Dec 13 01:44:28.636639 containerd[1549]: time="2024-12-13T01:44:28.636622913Z" level=info msg="RemoveContainer for \"920f2acc9162dd9ff2a527e34404d9599b0d7322fae089413ef8ff2ec1f98d60\" returns successfully" Dec 13 01:44:28.636997 kubelet[2747]: I1213 01:44:28.636836 2747 scope.go:117] "RemoveContainer" containerID="fd1188a95a934d563b7d52c478ef6d4f01f5586fd286f9c43221b67144185676" Dec 13 01:44:28.639927 containerd[1549]: time="2024-12-13T01:44:28.639616911Z" level=info msg="RemoveContainer for \"fd1188a95a934d563b7d52c478ef6d4f01f5586fd286f9c43221b67144185676\"" Dec 13 01:44:28.642054 containerd[1549]: time="2024-12-13T01:44:28.641838694Z" level=info msg="RemoveContainer for \"fd1188a95a934d563b7d52c478ef6d4f01f5586fd286f9c43221b67144185676\" returns successfully" Dec 13 01:44:28.642104 kubelet[2747]: I1213 01:44:28.642009 2747 scope.go:117] "RemoveContainer" containerID="9d6108d520079b486411ff2ba476575c2267c0594cbd81f602014c049dd9f517" Dec 13 01:44:28.642957 containerd[1549]: time="2024-12-13T01:44:28.642717282Z" level=info msg="RemoveContainer for \"9d6108d520079b486411ff2ba476575c2267c0594cbd81f602014c049dd9f517\"" Dec 13 01:44:28.644729 containerd[1549]: time="2024-12-13T01:44:28.644100958Z" level=info msg="RemoveContainer for \"9d6108d520079b486411ff2ba476575c2267c0594cbd81f602014c049dd9f517\" returns successfully" Dec 13 01:44:28.683852 systemd[1]: cri-containerd-5cb566f4c97d5328f8a453e5e3a36f1043afe3e4a4fe595e8e7c41e722650bb9.scope: Deactivated successfully. 
Dec 13 01:44:28.699520 containerd[1549]: time="2024-12-13T01:44:28.699481567Z" level=info msg="shim disconnected" id=5cb566f4c97d5328f8a453e5e3a36f1043afe3e4a4fe595e8e7c41e722650bb9 namespace=k8s.io Dec 13 01:44:28.699658 containerd[1549]: time="2024-12-13T01:44:28.699639577Z" level=warning msg="cleaning up after shim disconnected" id=5cb566f4c97d5328f8a453e5e3a36f1043afe3e4a4fe595e8e7c41e722650bb9 namespace=k8s.io Dec 13 01:44:28.699686 containerd[1549]: time="2024-12-13T01:44:28.699651882Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:44:28.708497 containerd[1549]: time="2024-12-13T01:44:28.708055314Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:44:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:44:29.634659 containerd[1549]: time="2024-12-13T01:44:29.634612028Z" level=info msg="CreateContainer within sandbox \"04b69a7a3c199d50a89e185b4664ba2a49cba292d4bc2060ac7991b8f7c2cf09\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:44:29.660821 containerd[1549]: time="2024-12-13T01:44:29.660789491Z" level=info msg="CreateContainer within sandbox \"04b69a7a3c199d50a89e185b4664ba2a49cba292d4bc2060ac7991b8f7c2cf09\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b2f6909d3510fea4f8f6cd69f551208a85fccb7780710619f538161ea86e48e7\"" Dec 13 01:44:29.662436 containerd[1549]: time="2024-12-13T01:44:29.662275303Z" level=info msg="StartContainer for \"b2f6909d3510fea4f8f6cd69f551208a85fccb7780710619f538161ea86e48e7\"" Dec 13 01:44:29.686811 systemd[1]: run-containerd-runc-k8s.io-b2f6909d3510fea4f8f6cd69f551208a85fccb7780710619f538161ea86e48e7-runc.O40Xk1.mount: Deactivated successfully. Dec 13 01:44:29.695066 systemd[1]: Started cri-containerd-b2f6909d3510fea4f8f6cd69f551208a85fccb7780710619f538161ea86e48e7.scope - libcontainer container b2f6909d3510fea4f8f6cd69f551208a85fccb7780710619f538161ea86e48e7. 
Dec 13 01:44:29.711358 containerd[1549]: time="2024-12-13T01:44:29.711339239Z" level=info msg="StartContainer for \"b2f6909d3510fea4f8f6cd69f551208a85fccb7780710619f538161ea86e48e7\" returns successfully" Dec 13 01:44:29.858410 containerd[1549]: time="2024-12-13T01:44:29.857653747Z" level=info msg="StopPodSandbox for \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\"" Dec 13 01:44:29.858410 containerd[1549]: time="2024-12-13T01:44:29.857680804Z" level=info msg="StopPodSandbox for \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\"" Dec 13 01:44:29.860817 kubelet[2747]: I1213 01:44:29.860436 2747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cafa636e-d27e-4a03-b0b6-680253dba261" path="/var/lib/kubelet/pods/cafa636e-d27e-4a03-b0b6-680253dba261/volumes" Dec 13 01:44:29.886836 containerd[1549]: time="2024-12-13T01:44:29.886712834Z" level=error msg="StopPodSandbox for \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\" failed" error="failed to destroy network for sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:44:29.887115 kubelet[2747]: E1213 01:44:29.886917 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Dec 13 01:44:29.887115 kubelet[2747]: E1213 01:44:29.887060 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586"} Dec 13 01:44:29.887115 kubelet[2747]: E1213 01:44:29.887087 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be0402b2-9394-4ec8-b88a-518cccbc701b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:44:29.887115 kubelet[2747]: E1213 01:44:29.887102 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be0402b2-9394-4ec8-b88a-518cccbc701b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sgckt" podUID="be0402b2-9394-4ec8-b88a-518cccbc701b" Dec 13 01:44:29.888300 containerd[1549]: time="2024-12-13T01:44:29.888263434Z" level=error msg="StopPodSandbox for \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\" failed" error="failed to destroy network for sandbox 
\"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:44:29.888375 kubelet[2747]: E1213 01:44:29.888357 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Dec 13 01:44:29.888411 kubelet[2747]: E1213 01:44:29.888378 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29"} Dec 13 01:44:29.888411 kubelet[2747]: E1213 01:44:29.888394 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"49679827-80b0-45f6-a6cd-64adcfa67f6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:44:29.888462 kubelet[2747]: E1213 01:44:29.888406 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"49679827-80b0-45f6-a6cd-64adcfa67f6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d5bfd545-r874s" podUID="49679827-80b0-45f6-a6cd-64adcfa67f6f" Dec 13 01:44:30.856454 containerd[1549]: time="2024-12-13T01:44:30.856430139Z" level=info msg="StopPodSandbox for \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\"" Dec 13 01:44:30.874757 containerd[1549]: time="2024-12-13T01:44:30.874701308Z" level=error msg="StopPodSandbox for \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\" failed" error="failed to destroy network for sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:44:30.874865 kubelet[2747]: E1213 01:44:30.874839 2747 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Dec 13 01:44:30.875070 kubelet[2747]: E1213 01:44:30.874874 2747 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3"} Dec 13 01:44:30.903765 kubelet[2747]: E1213 01:44:30.903735 2747 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91cdf967-f24e-4903-a5a6-9102cde06b2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:44:30.903899 kubelet[2747]: E1213 01:44:30.903772 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91cdf967-f24e-4903-a5a6-9102cde06b2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d5bfd545-426d5" podUID="91cdf967-f24e-4903-a5a6-9102cde06b2f" Dec 13 01:44:31.454384 systemd[1]: cri-containerd-b2f6909d3510fea4f8f6cd69f551208a85fccb7780710619f538161ea86e48e7.scope: Deactivated successfully. Dec 13 01:44:31.466896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2f6909d3510fea4f8f6cd69f551208a85fccb7780710619f538161ea86e48e7-rootfs.mount: Deactivated successfully. Dec 13 01:44:31.470418 containerd[1549]: time="2024-12-13T01:44:31.470377009Z" level=info msg="shim disconnected" id=b2f6909d3510fea4f8f6cd69f551208a85fccb7780710619f538161ea86e48e7 namespace=k8s.io Dec 13 01:44:31.470506 containerd[1549]: time="2024-12-13T01:44:31.470497052Z" level=warning msg="cleaning up after shim disconnected" id=b2f6909d3510fea4f8f6cd69f551208a85fccb7780710619f538161ea86e48e7 namespace=k8s.io Dec 13 01:44:31.470604 containerd[1549]: time="2024-12-13T01:44:31.470533271Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:44:31.510141 systemd[1]: Started sshd@7-139.178.70.108:22-139.178.89.65:53028.service - OpenSSH per-connection server daemon (139.178.89.65:53028). Dec 13 01:44:31.590634 sshd[4624]: Accepted publickey for core from 139.178.89.65 port 53028 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:44:31.593002 sshd[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:44:31.598204 systemd-logind[1526]: New session 10 of user core. Dec 13 01:44:31.603104 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 13 01:44:31.657319 containerd[1549]: time="2024-12-13T01:44:31.657038864Z" level=info msg="CreateContainer within sandbox \"04b69a7a3c199d50a89e185b4664ba2a49cba292d4bc2060ac7991b8f7c2cf09\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:44:31.665280 containerd[1549]: time="2024-12-13T01:44:31.665253246Z" level=info msg="CreateContainer within sandbox \"04b69a7a3c199d50a89e185b4664ba2a49cba292d4bc2060ac7991b8f7c2cf09\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"60f62a028e43ef919148002bfb475295ddd5839e9f7bf66fe80b60e5f1c19f36\"" Dec 13 01:44:31.665919 containerd[1549]: time="2024-12-13T01:44:31.665902233Z" level=info msg="StartContainer for \"60f62a028e43ef919148002bfb475295ddd5839e9f7bf66fe80b60e5f1c19f36\"" Dec 13 01:44:31.694100 systemd[1]: Started cri-containerd-60f62a028e43ef919148002bfb475295ddd5839e9f7bf66fe80b60e5f1c19f36.scope - libcontainer container 60f62a028e43ef919148002bfb475295ddd5839e9f7bf66fe80b60e5f1c19f36. Dec 13 01:44:31.715731 containerd[1549]: time="2024-12-13T01:44:31.715660922Z" level=info msg="StartContainer for \"60f62a028e43ef919148002bfb475295ddd5839e9f7bf66fe80b60e5f1c19f36\" returns successfully" Dec 13 01:44:32.093967 sshd[4624]: pam_unix(sshd:session): session closed for user core Dec 13 01:44:32.095781 systemd-logind[1526]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:44:32.098097 systemd[1]: sshd@7-139.178.70.108:22-139.178.89.65:53028.service: Deactivated successfully. Dec 13 01:44:32.099197 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:44:32.099865 systemd-logind[1526]: Removed session 10. Dec 13 01:44:32.674180 systemd[1]: run-containerd-runc-k8s.io-60f62a028e43ef919148002bfb475295ddd5839e9f7bf66fe80b60e5f1c19f36-runc.A14yTO.mount: Deactivated successfully. Dec 13 01:44:33.448047 kernel: bpftool[4834]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:44:33.599844 systemd-networkd[1456]: vxlan.calico: Link UP Dec 13 01:44:33.599850 systemd-networkd[1456]: vxlan.calico: Gained carrier Dec 13 01:44:35.274080 systemd-networkd[1456]: vxlan.calico: Gained IPv6LL Dec 13 01:44:35.857382 containerd[1549]: time="2024-12-13T01:44:35.856856910Z" level=info msg="StopPodSandbox for \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\"" Dec 13 01:44:35.857382 containerd[1549]: time="2024-12-13T01:44:35.856935148Z" level=info msg="StopPodSandbox for \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\"" Dec 13 01:44:35.977702 kubelet[2747]: I1213 01:44:35.962767 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2z6sq" podStartSLOduration=7.959468218 podStartE2EDuration="7.959468218s" podCreationTimestamp="2024-12-13 01:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:44:32.656593313 +0000 UTC m=+78.930283035" watchObservedRunningTime="2024-12-13 01:44:35.959468218 +0000 UTC m=+82.233157927" Dec 13 01:44:35.985061 containerd[1549]: 2024-12-13 01:44:35.960 [INFO][4966] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Dec 13 01:44:35.985061 containerd[1549]: 2024-12-13 01:44:35.961 [INFO][4966] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" iface="eth0" netns="/var/run/netns/cni-8abb453b-687f-3b8d-bab7-b62ea7adfc14" Dec 13 01:44:35.985061 containerd[1549]: 2024-12-13 01:44:35.962 [INFO][4966] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" iface="eth0" netns="/var/run/netns/cni-8abb453b-687f-3b8d-bab7-b62ea7adfc14" Dec 13 01:44:35.985061 containerd[1549]: 2024-12-13 01:44:35.962 [INFO][4966] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" iface="eth0" netns="/var/run/netns/cni-8abb453b-687f-3b8d-bab7-b62ea7adfc14" Dec 13 01:44:35.985061 containerd[1549]: 2024-12-13 01:44:35.963 [INFO][4966] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Dec 13 01:44:35.985061 containerd[1549]: 2024-12-13 01:44:35.963 [INFO][4966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Dec 13 01:44:35.985061 containerd[1549]: 2024-12-13 01:44:35.978 [INFO][4980] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" HandleID="k8s-pod-network.9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Workload="localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0" Dec 13 01:44:35.985061 containerd[1549]: 2024-12-13 01:44:35.978 [INFO][4980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:44:35.985061 containerd[1549]: 2024-12-13 01:44:35.978 [INFO][4980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:44:35.985061 containerd[1549]: 2024-12-13 01:44:35.982 [WARNING][4980] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" HandleID="k8s-pod-network.9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Workload="localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0" Dec 13 01:44:35.985061 containerd[1549]: 2024-12-13 01:44:35.982 [INFO][4980] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" HandleID="k8s-pod-network.9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Workload="localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0" Dec 13 01:44:35.985061 containerd[1549]: 2024-12-13 01:44:35.983 [INFO][4980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:44:35.985061 containerd[1549]: 2024-12-13 01:44:35.983 [INFO][4966] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Dec 13 01:44:35.987427 systemd[1]: run-netns-cni\x2d8abb453b\x2d687f\x2d3b8d\x2dbab7\x2db62ea7adfc14.mount: Deactivated successfully. 
Dec 13 01:44:35.989131 containerd[1549]: time="2024-12-13T01:44:35.989015450Z" level=info msg="TearDown network for sandbox \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\" successfully" Dec 13 01:44:35.989179 containerd[1549]: time="2024-12-13T01:44:35.989170083Z" level=info msg="StopPodSandbox for \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\" returns successfully" Dec 13 01:44:35.989664 containerd[1549]: time="2024-12-13T01:44:35.989652479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mt9t2,Uid:00e916a8-4e37-4540-9352-5c9af61a76e0,Namespace:kube-system,Attempt:1,}" Dec 13 01:44:36.008545 containerd[1549]: 2024-12-13 01:44:35.960 [INFO][4967] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Dec 13 01:44:36.008545 containerd[1549]: 2024-12-13 01:44:35.961 [INFO][4967] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" iface="eth0" netns="/var/run/netns/cni-3f4d6b09-d5bd-2eb9-1449-65411f6e09b9" Dec 13 01:44:36.008545 containerd[1549]: 2024-12-13 01:44:35.962 [INFO][4967] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" iface="eth0" netns="/var/run/netns/cni-3f4d6b09-d5bd-2eb9-1449-65411f6e09b9" Dec 13 01:44:36.008545 containerd[1549]: 2024-12-13 01:44:35.962 [INFO][4967] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" iface="eth0" netns="/var/run/netns/cni-3f4d6b09-d5bd-2eb9-1449-65411f6e09b9" Dec 13 01:44:36.008545 containerd[1549]: 2024-12-13 01:44:35.962 [INFO][4967] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Dec 13 01:44:36.008545 containerd[1549]: 2024-12-13 01:44:35.962 [INFO][4967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Dec 13 01:44:36.008545 containerd[1549]: 2024-12-13 01:44:35.990 [INFO][4979] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" HandleID="k8s-pod-network.ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Workload="localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0" Dec 13 01:44:36.008545 containerd[1549]: 2024-12-13 01:44:35.990 [INFO][4979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:44:36.008545 containerd[1549]: 2024-12-13 01:44:35.990 [INFO][4979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:44:36.008545 containerd[1549]: 2024-12-13 01:44:36.000 [WARNING][4979] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" HandleID="k8s-pod-network.ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Workload="localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0" Dec 13 01:44:36.008545 containerd[1549]: 2024-12-13 01:44:36.000 [INFO][4979] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" HandleID="k8s-pod-network.ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Workload="localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0" Dec 13 01:44:36.008545 containerd[1549]: 2024-12-13 01:44:36.002 [INFO][4979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:44:36.008545 containerd[1549]: 2024-12-13 01:44:36.005 [INFO][4967] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Dec 13 01:44:36.009787 containerd[1549]: time="2024-12-13T01:44:36.009372470Z" level=info msg="TearDown network for sandbox \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\" successfully" Dec 13 01:44:36.009787 containerd[1549]: time="2024-12-13T01:44:36.009388274Z" level=info msg="StopPodSandbox for \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\" returns successfully" Dec 13 01:44:36.010840 containerd[1549]: time="2024-12-13T01:44:36.010817338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jp7n7,Uid:dd95d309-c209-4cf1-a636-b98ab7a31667,Namespace:kube-system,Attempt:1,}" Dec 13 01:44:36.012487 systemd[1]: run-netns-cni\x2d3f4d6b09\x2dd5bd\x2d2eb9\x2d1449\x2d65411f6e09b9.mount: Deactivated successfully. Dec 13 01:44:36.089183 systemd-networkd[1456]: calif5aa172434c: Link UP Dec 13 01:44:36.089530 systemd-networkd[1456]: calif5aa172434c: Gained carrier Dec 13 01:44:36.097544 containerd[1549]: 2024-12-13 01:44:36.034 [INFO][4995] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0 coredns-6f6b679f8f- kube-system 00e916a8-4e37-4540-9352-5c9af61a76e0 928 0 2024-12-13 01:43:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-mt9t2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif5aa172434c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b" Namespace="kube-system" Pod="coredns-6f6b679f8f-mt9t2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--mt9t2-" Dec 13 01:44:36.097544 containerd[1549]: 2024-12-13 01:44:36.035 [INFO][4995] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b" Namespace="kube-system" Pod="coredns-6f6b679f8f-mt9t2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0" Dec 13 01:44:36.097544 containerd[1549]: 2024-12-13 01:44:36.059 [INFO][5013] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b" HandleID="k8s-pod-network.d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b" Workload="localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0" Dec 13 01:44:36.097544 containerd[1549]: 2024-12-13 01:44:36.064 
[INFO][5013] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b" HandleID="k8s-pod-network.d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b" Workload="localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-mt9t2", "timestamp":"2024-12-13 01:44:36.059236969 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:44:36.097544 containerd[1549]: 2024-12-13 01:44:36.064 [INFO][5013] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:44:36.097544 containerd[1549]: 2024-12-13 01:44:36.064 [INFO][5013] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:44:36.097544 containerd[1549]: 2024-12-13 01:44:36.064 [INFO][5013] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:44:36.097544 containerd[1549]: 2024-12-13 01:44:36.066 [INFO][5013] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b" host="localhost" Dec 13 01:44:36.097544 containerd[1549]: 2024-12-13 01:44:36.069 [INFO][5013] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:44:36.097544 containerd[1549]: 2024-12-13 01:44:36.072 [INFO][5013] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:44:36.097544 containerd[1549]: 2024-12-13 01:44:36.073 [INFO][5013] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:44:36.097544 containerd[1549]: 2024-12-13 01:44:36.079 [INFO][5013] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:44:36.097544 containerd[1549]: 2024-12-13 01:44:36.079 [INFO][5013] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b" host="localhost" Dec 13 01:44:36.097544 containerd[1549]: 2024-12-13 01:44:36.080 [INFO][5013] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b Dec 13 01:44:36.097544 containerd[1549]: 2024-12-13 01:44:36.082 [INFO][5013] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b" host="localhost" Dec 13 01:44:36.097544 containerd[1549]: 2024-12-13 01:44:36.084 [INFO][5013] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b" host="localhost" Dec 13 01:44:36.097544 containerd[1549]: 2024-12-13 01:44:36.084 [INFO][5013] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b" host="localhost" Dec 13 01:44:36.097544 containerd[1549]: 2024-12-13 01:44:36.084 [INFO][5013] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:44:36.097544 containerd[1549]: 2024-12-13 01:44:36.084 [INFO][5013] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b" HandleID="k8s-pod-network.d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b" Workload="localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0" Dec 13 01:44:36.099017 containerd[1549]: 2024-12-13 01:44:36.086 [INFO][4995] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b" Namespace="kube-system" Pod="coredns-6f6b679f8f-mt9t2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"00e916a8-4e37-4540-9352-5c9af61a76e0", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-mt9t2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5aa172434c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:44:36.099017 containerd[1549]: 2024-12-13 01:44:36.086 [INFO][4995] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b" Namespace="kube-system" Pod="coredns-6f6b679f8f-mt9t2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0" Dec 13 01:44:36.099017 containerd[1549]: 2024-12-13 01:44:36.086 [INFO][4995] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5aa172434c ContainerID="d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b" Namespace="kube-system" Pod="coredns-6f6b679f8f-mt9t2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0" Dec 13 01:44:36.099017 containerd[1549]: 2024-12-13 01:44:36.090 [INFO][4995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b" Namespace="kube-system" Pod="coredns-6f6b679f8f-mt9t2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0" Dec 13 01:44:36.099017 containerd[1549]: 2024-12-13 01:44:36.090 
[INFO][4995] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b" Namespace="kube-system" Pod="coredns-6f6b679f8f-mt9t2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"00e916a8-4e37-4540-9352-5c9af61a76e0", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b", Pod:"coredns-6f6b679f8f-mt9t2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5aa172434c", MAC:"f2:cd:e5:42:c3:bc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:44:36.099017 containerd[1549]: 2024-12-13 01:44:36.095 [INFO][4995] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b" Namespace="kube-system" Pod="coredns-6f6b679f8f-mt9t2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0" Dec 13 01:44:36.115683 containerd[1549]: time="2024-12-13T01:44:36.115583142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:44:36.115753 containerd[1549]: time="2024-12-13T01:44:36.115658219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:44:36.115753 containerd[1549]: time="2024-12-13T01:44:36.115673264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:44:36.116761 containerd[1549]: time="2024-12-13T01:44:36.115793430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:44:36.129067 systemd[1]: Started cri-containerd-d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b.scope - libcontainer container d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b. 
Dec 13 01:44:36.136542 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:44:36.155360 containerd[1549]: time="2024-12-13T01:44:36.155338910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mt9t2,Uid:00e916a8-4e37-4540-9352-5c9af61a76e0,Namespace:kube-system,Attempt:1,} returns sandbox id \"d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b\"" Dec 13 01:44:36.157006 containerd[1549]: time="2024-12-13T01:44:36.156954571Z" level=info msg="CreateContainer within sandbox \"d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:44:36.168482 containerd[1549]: time="2024-12-13T01:44:36.168413505Z" level=info msg="CreateContainer within sandbox \"d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3fb4b522e18549ad36f02520b907ed41de348e63569d6bba5dc90c0a9bca6502\"" Dec 13 01:44:36.169452 containerd[1549]: time="2024-12-13T01:44:36.169433480Z" level=info msg="StartContainer for \"3fb4b522e18549ad36f02520b907ed41de348e63569d6bba5dc90c0a9bca6502\"" Dec 13 01:44:36.189720 systemd-networkd[1456]: calibb99f553843: Link UP Dec 13 01:44:36.190094 systemd-networkd[1456]: calibb99f553843: Gained carrier Dec 13 01:44:36.193116 systemd[1]: Started cri-containerd-3fb4b522e18549ad36f02520b907ed41de348e63569d6bba5dc90c0a9bca6502.scope - libcontainer container 3fb4b522e18549ad36f02520b907ed41de348e63569d6bba5dc90c0a9bca6502. Dec 13 01:44:36.204462 containerd[1549]: 2024-12-13 01:44:36.050 [INFO][5004] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0 coredns-6f6b679f8f- kube-system dd95d309-c209-4cf1-a636-b98ab7a31667 929 0 2024-12-13 01:43:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-jp7n7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibb99f553843 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b" Namespace="kube-system" Pod="coredns-6f6b679f8f-jp7n7" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jp7n7-" Dec 13 01:44:36.204462 containerd[1549]: 2024-12-13 01:44:36.050 [INFO][5004] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b" Namespace="kube-system" Pod="coredns-6f6b679f8f-jp7n7" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0" Dec 13 01:44:36.204462 containerd[1549]: 2024-12-13 01:44:36.068 [INFO][5019] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b" HandleID="k8s-pod-network.b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b" Workload="localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0" Dec 13 01:44:36.204462 containerd[1549]: 2024-12-13 01:44:36.074 [INFO][5019] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b" HandleID="k8s-pod-network.b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b" 
Workload="localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042bc50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-jp7n7", "timestamp":"2024-12-13 01:44:36.068450591 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:44:36.204462 containerd[1549]: 2024-12-13 01:44:36.074 [INFO][5019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:44:36.204462 containerd[1549]: 2024-12-13 01:44:36.085 [INFO][5019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:44:36.204462 containerd[1549]: 2024-12-13 01:44:36.085 [INFO][5019] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:44:36.204462 containerd[1549]: 2024-12-13 01:44:36.167 [INFO][5019] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b" host="localhost" Dec 13 01:44:36.204462 containerd[1549]: 2024-12-13 01:44:36.171 [INFO][5019] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:44:36.204462 containerd[1549]: 2024-12-13 01:44:36.173 [INFO][5019] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:44:36.204462 containerd[1549]: 2024-12-13 01:44:36.175 [INFO][5019] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:44:36.204462 containerd[1549]: 2024-12-13 01:44:36.177 [INFO][5019] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:44:36.204462 containerd[1549]: 2024-12-13 01:44:36.177 [INFO][5019] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b" host="localhost" Dec 13 01:44:36.204462 containerd[1549]: 2024-12-13 01:44:36.178 [INFO][5019] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b Dec 13 01:44:36.204462 containerd[1549]: 2024-12-13 01:44:36.180 [INFO][5019] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b" host="localhost" Dec 13 01:44:36.204462 containerd[1549]: 2024-12-13 01:44:36.185 [INFO][5019] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b" host="localhost" Dec 13 01:44:36.204462 containerd[1549]: 2024-12-13 01:44:36.185 [INFO][5019] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b" host="localhost" Dec 13 01:44:36.204462 containerd[1549]: 2024-12-13 01:44:36.185 [INFO][5019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:44:36.204462 containerd[1549]: 2024-12-13 01:44:36.185 [INFO][5019] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b" HandleID="k8s-pod-network.b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b" Workload="localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0" Dec 13 01:44:36.205389 containerd[1549]: 2024-12-13 01:44:36.187 [INFO][5004] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b" Namespace="kube-system" Pod="coredns-6f6b679f8f-jp7n7" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"dd95d309-c209-4cf1-a636-b98ab7a31667", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-jp7n7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb99f553843", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:44:36.205389 containerd[1549]: 2024-12-13 01:44:36.187 [INFO][5004] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b" Namespace="kube-system" Pod="coredns-6f6b679f8f-jp7n7" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0" Dec 13 01:44:36.205389 containerd[1549]: 2024-12-13 01:44:36.187 [INFO][5004] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb99f553843 ContainerID="b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b" Namespace="kube-system" Pod="coredns-6f6b679f8f-jp7n7" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0" Dec 13 01:44:36.205389 containerd[1549]: 2024-12-13 01:44:36.189 [INFO][5004] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b" Namespace="kube-system" Pod="coredns-6f6b679f8f-jp7n7" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0" Dec 13 01:44:36.205389 containerd[1549]: 2024-12-13 01:44:36.190 
[INFO][5004] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b" Namespace="kube-system" Pod="coredns-6f6b679f8f-jp7n7" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"dd95d309-c209-4cf1-a636-b98ab7a31667", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b", Pod:"coredns-6f6b679f8f-jp7n7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb99f553843", MAC:"ee:92:18:c4:53:8d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:44:36.205389 containerd[1549]: 2024-12-13 01:44:36.201 [INFO][5004] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b" Namespace="kube-system" Pod="coredns-6f6b679f8f-jp7n7" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0" Dec 13 01:44:36.228075 containerd[1549]: time="2024-12-13T01:44:36.227836406Z" level=info msg="StartContainer for \"3fb4b522e18549ad36f02520b907ed41de348e63569d6bba5dc90c0a9bca6502\" returns successfully" Dec 13 01:44:36.232582 containerd[1549]: time="2024-12-13T01:44:36.232414911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:44:36.232582 containerd[1549]: time="2024-12-13T01:44:36.232458409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:44:36.232582 containerd[1549]: time="2024-12-13T01:44:36.232465794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:44:36.232582 containerd[1549]: time="2024-12-13T01:44:36.232518814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:44:36.247063 systemd[1]: Started cri-containerd-b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b.scope - libcontainer container b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b. Dec 13 01:44:36.253762 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:44:36.275068 containerd[1549]: time="2024-12-13T01:44:36.274990079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jp7n7,Uid:dd95d309-c209-4cf1-a636-b98ab7a31667,Namespace:kube-system,Attempt:1,} returns sandbox id \"b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b\"" Dec 13 01:44:36.277462 containerd[1549]: time="2024-12-13T01:44:36.277374114Z" level=info msg="CreateContainer within sandbox \"b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:44:36.285383 containerd[1549]: time="2024-12-13T01:44:36.285361438Z" level=info msg="CreateContainer within sandbox \"b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"df318fa8803bb47d136e7f9f195fe8ecd731e99fdb5615545fe3bb845e01ac06\"" Dec 13 01:44:36.285935 containerd[1549]: time="2024-12-13T01:44:36.285645406Z" level=info msg="StartContainer for \"df318fa8803bb47d136e7f9f195fe8ecd731e99fdb5615545fe3bb845e01ac06\"" Dec 13 01:44:36.304119 systemd[1]: Started cri-containerd-df318fa8803bb47d136e7f9f195fe8ecd731e99fdb5615545fe3bb845e01ac06.scope - libcontainer container df318fa8803bb47d136e7f9f195fe8ecd731e99fdb5615545fe3bb845e01ac06. Dec 13 01:44:36.322279 containerd[1549]: time="2024-12-13T01:44:36.322249121Z" level=info msg="StartContainer for \"df318fa8803bb47d136e7f9f195fe8ecd731e99fdb5615545fe3bb845e01ac06\" returns successfully" Dec 13 01:44:36.667997 kubelet[2747]: I1213 01:44:36.667917 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-jp7n7" podStartSLOduration=75.667903093 podStartE2EDuration="1m15.667903093s" podCreationTimestamp="2024-12-13 01:43:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:44:36.660558102 +0000 UTC m=+82.934247826" watchObservedRunningTime="2024-12-13 01:44:36.667903093 +0000 UTC m=+82.941592812" Dec 13 01:44:37.184770 systemd[1]: Started sshd@8-139.178.70.108:22-139.178.89.65:53032.service - OpenSSH per-connection server daemon (139.178.89.65:53032). Dec 13 01:44:37.370562 sshd[5213]: Accepted publickey for core from 139.178.89.65 port 53032 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:44:37.371189 sshd[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:44:37.374538 systemd-logind[1526]: New session 11 of user core. Dec 13 01:44:37.387076 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:44:37.578076 systemd-networkd[1456]: calibb99f553843: Gained IPv6LL Dec 13 01:44:37.771420 systemd-networkd[1456]: calif5aa172434c: Gained IPv6LL Dec 13 01:44:37.795943 sshd[5213]: pam_unix(sshd:session): session closed for user core Dec 13 01:44:37.797910 systemd[1]: sshd@8-139.178.70.108:22-139.178.89.65:53032.service: Deactivated successfully. Dec 13 01:44:37.798847 systemd[1]: session-11.scope: Deactivated successfully. 
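The kubelet's pod_startup_latency_tracker record above is plain timestamp arithmetic: podStartSLOduration=75.667903093 is watchObservedRunningTime (01:44:36.667903093) minus podCreationTimestamp (01:43:21), with firstStartedPulling/lastFinishedPulling left at the zero time because no image pull was needed. Reproducing the subtraction in Go (timestamps copied from the record, the monotonic m=+… suffix dropped, parse errors elided for brevity):

```go
// sloduration.go - reproduce the 75.667903093s podStartSLOduration from the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching time.Time.String() output, which is what kubelet logs here.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2024-12-13 01:43:21 +0000 UTC")
	running, _ := time.Parse(layout, "2024-12-13 01:44:36.667903093 +0000 UTC")
	fmt.Println(running.Sub(created)) // 1m15.667903093s = 75.667903093s
}
```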
Dec 13 01:44:37.799297 systemd-logind[1526]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:44:37.799917 systemd-logind[1526]: Removed session 11. Dec 13 01:44:40.856468 containerd[1549]: time="2024-12-13T01:44:40.856245675Z" level=info msg="StopPodSandbox for \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\"" Dec 13 01:44:40.856881 containerd[1549]: time="2024-12-13T01:44:40.856754314Z" level=info msg="StopPodSandbox for \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\"" Dec 13 01:44:40.969124 kubelet[2747]: I1213 01:44:40.969087 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-mt9t2" podStartSLOduration=79.969069665 podStartE2EDuration="1m19.969069665s" podCreationTimestamp="2024-12-13 01:43:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:44:36.687269905 +0000 UTC m=+82.960959619" watchObservedRunningTime="2024-12-13 01:44:40.969069665 +0000 UTC m=+87.242759373" Dec 13 01:44:40.999417 containerd[1549]: 2024-12-13 01:44:40.967 [INFO][5262] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Dec 13 01:44:40.999417 containerd[1549]: 2024-12-13 01:44:40.967 [INFO][5262] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" iface="eth0" netns="/var/run/netns/cni-c5d9acbc-fea0-108e-13f7-2c7710fdc2c0" Dec 13 01:44:40.999417 containerd[1549]: 2024-12-13 01:44:40.967 [INFO][5262] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" iface="eth0" netns="/var/run/netns/cni-c5d9acbc-fea0-108e-13f7-2c7710fdc2c0" Dec 13 01:44:40.999417 containerd[1549]: 2024-12-13 01:44:40.970 [INFO][5262] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" iface="eth0" netns="/var/run/netns/cni-c5d9acbc-fea0-108e-13f7-2c7710fdc2c0" Dec 13 01:44:40.999417 containerd[1549]: 2024-12-13 01:44:40.971 [INFO][5262] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Dec 13 01:44:40.999417 containerd[1549]: 2024-12-13 01:44:40.971 [INFO][5262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Dec 13 01:44:40.999417 containerd[1549]: 2024-12-13 01:44:40.989 [INFO][5273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" HandleID="k8s-pod-network.1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Workload="localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0" Dec 13 01:44:40.999417 containerd[1549]: 2024-12-13 01:44:40.989 [INFO][5273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:44:40.999417 containerd[1549]: 2024-12-13 01:44:40.989 [INFO][5273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:44:40.999417 containerd[1549]: 2024-12-13 01:44:40.994 [WARNING][5273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" HandleID="k8s-pod-network.1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Workload="localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0" Dec 13 01:44:40.999417 containerd[1549]: 2024-12-13 01:44:40.994 [INFO][5273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" HandleID="k8s-pod-network.1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Workload="localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0" Dec 13 01:44:40.999417 containerd[1549]: 2024-12-13 01:44:40.995 [INFO][5273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:44:40.999417 containerd[1549]: 2024-12-13 01:44:40.998 [INFO][5262] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Dec 13 01:44:41.018606 containerd[1549]: time="2024-12-13T01:44:41.000274873Z" level=info msg="TearDown network for sandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\" successfully" Dec 13 01:44:41.018606 containerd[1549]: time="2024-12-13T01:44:41.000293300Z" level=info msg="StopPodSandbox for \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\" returns successfully" Dec 13 01:44:41.018606 containerd[1549]: time="2024-12-13T01:44:41.002201059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5bfd545-r874s,Uid:49679827-80b0-45f6-a6cd-64adcfa67f6f,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:44:41.018606 containerd[1549]: 2024-12-13 01:44:40.982 [INFO][5261] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Dec 13 01:44:41.018606 containerd[1549]: 2024-12-13 01:44:40.982 [INFO][5261] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" iface="eth0" netns="/var/run/netns/cni-fe0f2dfe-0e61-4ef9-09f7-b67908fbc5ab" Dec 13 01:44:41.018606 containerd[1549]: 2024-12-13 01:44:40.983 [INFO][5261] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" iface="eth0" netns="/var/run/netns/cni-fe0f2dfe-0e61-4ef9-09f7-b67908fbc5ab" Dec 13 01:44:41.018606 containerd[1549]: 2024-12-13 01:44:40.984 [INFO][5261] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" iface="eth0" netns="/var/run/netns/cni-fe0f2dfe-0e61-4ef9-09f7-b67908fbc5ab" Dec 13 01:44:41.018606 containerd[1549]: 2024-12-13 01:44:40.984 [INFO][5261] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Dec 13 01:44:41.018606 containerd[1549]: 2024-12-13 01:44:40.984 [INFO][5261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Dec 13 01:44:41.018606 containerd[1549]: 2024-12-13 01:44:41.007 [INFO][5277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" HandleID="k8s-pod-network.dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Workload="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0" Dec 13 01:44:41.018606 containerd[1549]: 2024-12-13 01:44:41.007 [INFO][5277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:44:41.018606 containerd[1549]: 2024-12-13 01:44:41.007 [INFO][5277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:44:41.018606 containerd[1549]: 2024-12-13 01:44:41.010 [WARNING][5277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" HandleID="k8s-pod-network.dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Workload="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0" Dec 13 01:44:41.018606 containerd[1549]: 2024-12-13 01:44:41.010 [INFO][5277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" HandleID="k8s-pod-network.dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Workload="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0" Dec 13 01:44:41.018606 containerd[1549]: 2024-12-13 01:44:41.011 [INFO][5277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:44:41.018606 containerd[1549]: 2024-12-13 01:44:41.012 [INFO][5261] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Dec 13 01:44:41.018606 containerd[1549]: time="2024-12-13T01:44:41.013626629Z" level=info msg="TearDown network for sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\" successfully" Dec 13 01:44:41.018606 containerd[1549]: time="2024-12-13T01:44:41.013643524Z" level=info msg="StopPodSandbox for \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\" returns successfully" Dec 13 01:44:41.018606 containerd[1549]: time="2024-12-13T01:44:41.014479421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74d5794888-n2p4c,Uid:fec74b51-4f13-4ee0-a490-81de9f872b4f,Namespace:calico-system,Attempt:1,}" Dec 13 01:44:41.001555 systemd[1]: run-netns-cni\x2dc5d9acbc\x2dfea0\x2d108e\x2d13f7\x2d2c7710fdc2c0.mount: Deactivated successfully. Dec 13 01:44:41.015221 systemd[1]: run-netns-cni\x2dfe0f2dfe\x2d0e61\x2d4ef9\x2d09f7\x2db67908fbc5ab.mount: Deactivated successfully. 
Dec 13 01:44:41.223166 systemd-networkd[1456]: cali17706e7d6a3: Link UP Dec 13 01:44:41.223286 systemd-networkd[1456]: cali17706e7d6a3: Gained carrier Dec 13 01:44:41.233828 containerd[1549]: 2024-12-13 01:44:41.169 [INFO][5285] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0 calico-kube-controllers-74d5794888- calico-system fec74b51-4f13-4ee0-a490-81de9f872b4f 975 0 2024-12-13 01:43:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74d5794888 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-74d5794888-n2p4c eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali17706e7d6a3 [] []}} ContainerID="182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8" Namespace="calico-system" Pod="calico-kube-controllers-74d5794888-n2p4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-" Dec 13 01:44:41.233828 containerd[1549]: 2024-12-13 01:44:41.169 [INFO][5285] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8" Namespace="calico-system" Pod="calico-kube-controllers-74d5794888-n2p4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0" Dec 13 01:44:41.233828 containerd[1549]: 2024-12-13 01:44:41.192 [INFO][5311] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8" HandleID="k8s-pod-network.182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8" Workload="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0" Dec 13 01:44:41.233828 containerd[1549]: 2024-12-13 01:44:41.198 [INFO][5311] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8" HandleID="k8s-pod-network.182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8" Workload="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031aaf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-74d5794888-n2p4c", "timestamp":"2024-12-13 01:44:41.192331189 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:44:41.233828 containerd[1549]: 2024-12-13 01:44:41.198 [INFO][5311] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:44:41.233828 containerd[1549]: 2024-12-13 01:44:41.198 [INFO][5311] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:44:41.233828 containerd[1549]: 2024-12-13 01:44:41.198 [INFO][5311] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:44:41.233828 containerd[1549]: 2024-12-13 01:44:41.199 [INFO][5311] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8" host="localhost" Dec 13 01:44:41.233828 containerd[1549]: 2024-12-13 01:44:41.201 [INFO][5311] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:44:41.233828 containerd[1549]: 2024-12-13 01:44:41.205 [INFO][5311] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:44:41.233828 containerd[1549]: 2024-12-13 01:44:41.206 [INFO][5311] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:44:41.233828 containerd[1549]: 2024-12-13 01:44:41.207 [INFO][5311] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:44:41.233828 containerd[1549]: 2024-12-13 01:44:41.207 [INFO][5311] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8" host="localhost" Dec 13 01:44:41.233828 containerd[1549]: 2024-12-13 01:44:41.208 [INFO][5311] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8 Dec 13 01:44:41.233828 containerd[1549]: 2024-12-13 01:44:41.210 [INFO][5311] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8" host="localhost" Dec 13 01:44:41.233828 containerd[1549]: 2024-12-13 01:44:41.216 [INFO][5311] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8" host="localhost" Dec 13 01:44:41.233828 containerd[1549]: 2024-12-13 01:44:41.216 [INFO][5311] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8" host="localhost" Dec 13 01:44:41.233828 containerd[1549]: 2024-12-13 01:44:41.216 [INFO][5311] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
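Every assignment in this journal serializes on Calico's host-wide IPAM lock ("About to acquire" → "Acquired" → claim → "Released"), which is what keeps concurrent CNI ADDs on the same node from claiming the same free address. A loose, illustrative Go analogy only: a process-local mutex stands in for what is really a cross-process lock shared by separate CNI invocations, and the pod list is taken from the claims logged so far:

```go
// hostlock.go - the acquire/assign/release shape of the ipam_plugin records.
package main

import (
	"fmt"
	"sync"
)

var (
	hostLock sync.Mutex
	next     = 129 // .129 was the first address handed out in this journal
)

func assign(pod string, wg *sync.WaitGroup) {
	defer wg.Done()
	hostLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostLock.Unlock() // "Released host-wide IPAM lock."
	fmt.Printf("%s -> 192.168.88.%d/26\n", pod, next)
	next++
}

func main() {
	var wg sync.WaitGroup
	pods := []string{"coredns-6f6b679f8f-mt9t2", "coredns-6f6b679f8f-jp7n7", "calico-kube-controllers-74d5794888-n2p4c"}
	for _, pod := range pods {
		wg.Add(1)
		go assign(pod, &wg)
	}
	wg.Wait()
	// Run order (and thus the printed mapping) varies across runs; the lock's
	// job is only to guarantee no two claims ever see the same free address.
}
```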
Dec 13 01:44:41.233828 containerd[1549]: 2024-12-13 01:44:41.216 [INFO][5311] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8" HandleID="k8s-pod-network.182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8" Workload="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0" Dec 13 01:44:41.234376 containerd[1549]: 2024-12-13 01:44:41.218 [INFO][5285] cni-plugin/k8s.go 386: Populated endpoint ContainerID="182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8" Namespace="calico-system" Pod="calico-kube-controllers-74d5794888-n2p4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0", GenerateName:"calico-kube-controllers-74d5794888-", Namespace:"calico-system", SelfLink:"", UID:"fec74b51-4f13-4ee0-a490-81de9f872b4f", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74d5794888", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-74d5794888-n2p4c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali17706e7d6a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:44:41.234376 containerd[1549]: 2024-12-13 01:44:41.218 [INFO][5285] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8" Namespace="calico-system" Pod="calico-kube-controllers-74d5794888-n2p4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0" Dec 13 01:44:41.234376 containerd[1549]: 2024-12-13 01:44:41.219 [INFO][5285] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17706e7d6a3 ContainerID="182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8" Namespace="calico-system" Pod="calico-kube-controllers-74d5794888-n2p4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0" Dec 13 01:44:41.234376 containerd[1549]: 2024-12-13 01:44:41.220 [INFO][5285] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8" Namespace="calico-system" Pod="calico-kube-controllers-74d5794888-n2p4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0" Dec 13 01:44:41.234376 containerd[1549]: 2024-12-13 01:44:41.220 [INFO][5285] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8" Namespace="calico-system" Pod="calico-kube-controllers-74d5794888-n2p4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0", GenerateName:"calico-kube-controllers-74d5794888-", Namespace:"calico-system", SelfLink:"", UID:"fec74b51-4f13-4ee0-a490-81de9f872b4f", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74d5794888", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8", Pod:"calico-kube-controllers-74d5794888-n2p4c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali17706e7d6a3", MAC:"e2:e1:d5:10:7c:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:44:41.234376 containerd[1549]: 2024-12-13 01:44:41.231 [INFO][5285] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8" Namespace="calico-system" Pod="calico-kube-controllers-74d5794888-n2p4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0" Dec 13 01:44:41.265368 containerd[1549]: time="2024-12-13T01:44:41.265302930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:44:41.265368 containerd[1549]: time="2024-12-13T01:44:41.265354500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:44:41.265486 containerd[1549]: time="2024-12-13T01:44:41.265369178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:44:41.266060 containerd[1549]: time="2024-12-13T01:44:41.266001415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:44:41.293082 systemd[1]: Started cri-containerd-182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8.scope - libcontainer container 182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8. 
Dec 13 01:44:41.300810 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:44:41.327544 containerd[1549]: time="2024-12-13T01:44:41.327452402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74d5794888-n2p4c,Uid:fec74b51-4f13-4ee0-a490-81de9f872b4f,Namespace:calico-system,Attempt:1,} returns sandbox id \"182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8\"" Dec 13 01:44:41.331594 systemd-networkd[1456]: calic2838fdd58b: Link UP Dec 13 01:44:41.333034 systemd-networkd[1456]: calic2838fdd58b: Gained carrier Dec 13 01:44:41.350417 containerd[1549]: 2024-12-13 01:44:41.167 [INFO][5287] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0 calico-apiserver-d5bfd545- calico-apiserver 49679827-80b0-45f6-a6cd-64adcfa67f6f 973 0 2024-12-13 01:43:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d5bfd545 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d5bfd545-r874s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic2838fdd58b [] []}} ContainerID="1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb" Namespace="calico-apiserver" Pod="calico-apiserver-d5bfd545-r874s" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bfd545--r874s-" Dec 13 01:44:41.350417 containerd[1549]: 2024-12-13 01:44:41.167 [INFO][5287] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb" Namespace="calico-apiserver" Pod="calico-apiserver-d5bfd545-r874s" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0" Dec 13 01:44:41.350417 containerd[1549]: 2024-12-13 01:44:41.194 [INFO][5309] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb" HandleID="k8s-pod-network.1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb" Workload="localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0" Dec 13 01:44:41.350417 containerd[1549]: 2024-12-13 01:44:41.200 [INFO][5309] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb" HandleID="k8s-pod-network.1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb" Workload="localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318f20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d5bfd545-r874s", "timestamp":"2024-12-13 01:44:41.194089022 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:44:41.350417 containerd[1549]: 2024-12-13 01:44:41.200 [INFO][5309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:44:41.350417 containerd[1549]: 2024-12-13 01:44:41.216 [INFO][5309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:44:41.350417 containerd[1549]: 2024-12-13 01:44:41.217 [INFO][5309] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:44:41.350417 containerd[1549]: 2024-12-13 01:44:41.305 [INFO][5309] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb" host="localhost" Dec 13 01:44:41.350417 containerd[1549]: 2024-12-13 01:44:41.307 [INFO][5309] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:44:41.350417 containerd[1549]: 2024-12-13 01:44:41.310 [INFO][5309] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:44:41.350417 containerd[1549]: 2024-12-13 01:44:41.311 [INFO][5309] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:44:41.350417 containerd[1549]: 2024-12-13 01:44:41.312 [INFO][5309] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:44:41.350417 containerd[1549]: 2024-12-13 01:44:41.312 [INFO][5309] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb" host="localhost" Dec 13 01:44:41.350417 containerd[1549]: 2024-12-13 01:44:41.313 [INFO][5309] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb Dec 13 01:44:41.350417 containerd[1549]: 2024-12-13 01:44:41.315 [INFO][5309] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb" host="localhost" Dec 13 01:44:41.350417 containerd[1549]: 2024-12-13 01:44:41.325 [INFO][5309] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb" host="localhost" Dec 13 01:44:41.350417 containerd[1549]: 2024-12-13 01:44:41.325 [INFO][5309] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb" host="localhost" Dec 13 01:44:41.350417 containerd[1549]: 2024-12-13 01:44:41.325 [INFO][5309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:44:41.350417 containerd[1549]: 2024-12-13 01:44:41.325 [INFO][5309] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb" HandleID="k8s-pod-network.1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb" Workload="localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0" Dec 13 01:44:41.364841 containerd[1549]: 2024-12-13 01:44:41.328 [INFO][5287] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb" Namespace="calico-apiserver" Pod="calico-apiserver-d5bfd545-r874s" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0", GenerateName:"calico-apiserver-d5bfd545-", Namespace:"calico-apiserver", SelfLink:"", UID:"49679827-80b0-45f6-a6cd-64adcfa67f6f", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5bfd545", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d5bfd545-r874s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2838fdd58b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:44:41.364841 containerd[1549]: 2024-12-13 01:44:41.328 [INFO][5287] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb" Namespace="calico-apiserver" Pod="calico-apiserver-d5bfd545-r874s" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0" Dec 13 01:44:41.364841 containerd[1549]: 2024-12-13 01:44:41.328 [INFO][5287] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2838fdd58b ContainerID="1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb" Namespace="calico-apiserver" Pod="calico-apiserver-d5bfd545-r874s" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0" Dec 13 01:44:41.364841 containerd[1549]: 2024-12-13 01:44:41.331 [INFO][5287] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb" Namespace="calico-apiserver" Pod="calico-apiserver-d5bfd545-r874s" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0" Dec 13 01:44:41.364841 containerd[1549]: 2024-12-13 01:44:41.333 [INFO][5287] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb" 
Namespace="calico-apiserver" Pod="calico-apiserver-d5bfd545-r874s" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0", GenerateName:"calico-apiserver-d5bfd545-", Namespace:"calico-apiserver", SelfLink:"", UID:"49679827-80b0-45f6-a6cd-64adcfa67f6f", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5bfd545", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb", Pod:"calico-apiserver-d5bfd545-r874s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2838fdd58b", MAC:"a2:16:38:e0:50:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:44:41.364841 containerd[1549]: 2024-12-13 01:44:41.348 [INFO][5287] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb" Namespace="calico-apiserver" Pod="calico-apiserver-d5bfd545-r874s" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0" Dec 13 01:44:41.367762 containerd[1549]: time="2024-12-13T01:44:41.367608041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:44:41.367762 containerd[1549]: time="2024-12-13T01:44:41.367637863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:44:41.367762 containerd[1549]: time="2024-12-13T01:44:41.367644803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:44:41.367762 containerd[1549]: time="2024-12-13T01:44:41.367684033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:44:41.381072 systemd[1]: Started cri-containerd-1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb.scope - libcontainer container 1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb. 
Dec 13 01:44:41.390869 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:44:41.418046 containerd[1549]: time="2024-12-13T01:44:41.418004947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5bfd545-r874s,Uid:49679827-80b0-45f6-a6cd-64adcfa67f6f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb\"" Dec 13 01:44:41.604737 containerd[1549]: time="2024-12-13T01:44:41.604526808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:44:42.698122 systemd-networkd[1456]: cali17706e7d6a3: Gained IPv6LL Dec 13 01:44:42.762078 systemd-networkd[1456]: calic2838fdd58b: Gained IPv6LL Dec 13 01:44:42.809158 systemd[1]: Started sshd@9-139.178.70.108:22-139.178.89.65:48250.service - OpenSSH per-connection server daemon (139.178.89.65:48250). Dec 13 01:44:43.583778 sshd[5433]: Accepted publickey for core from 139.178.89.65 port 48250 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:44:43.584922 sshd[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:44:43.593777 systemd-logind[1526]: New session 12 of user core. Dec 13 01:44:43.599090 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:44:43.918407 containerd[1549]: time="2024-12-13T01:44:43.918093298Z" level=info msg="StopPodSandbox for \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\"" Dec 13 01:44:44.042993 containerd[1549]: 2024-12-13 01:44:44.003 [INFO][5461] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Dec 13 01:44:44.042993 containerd[1549]: 2024-12-13 01:44:44.003 [INFO][5461] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" iface="eth0" netns="/var/run/netns/cni-3a737ede-eda6-2cc9-d12a-dd090209fe35" Dec 13 01:44:44.042993 containerd[1549]: 2024-12-13 01:44:44.003 [INFO][5461] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" iface="eth0" netns="/var/run/netns/cni-3a737ede-eda6-2cc9-d12a-dd090209fe35" Dec 13 01:44:44.042993 containerd[1549]: 2024-12-13 01:44:44.003 [INFO][5461] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" iface="eth0" netns="/var/run/netns/cni-3a737ede-eda6-2cc9-d12a-dd090209fe35" Dec 13 01:44:44.042993 containerd[1549]: 2024-12-13 01:44:44.003 [INFO][5461] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Dec 13 01:44:44.042993 containerd[1549]: 2024-12-13 01:44:44.003 [INFO][5461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Dec 13 01:44:44.042993 containerd[1549]: 2024-12-13 01:44:44.018 [INFO][5471] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" HandleID="k8s-pod-network.d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Workload="localhost-k8s-csi--node--driver--sgckt-eth0" Dec 13 01:44:44.042993 containerd[1549]: 2024-12-13 01:44:44.018 [INFO][5471] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:44:44.042993 containerd[1549]: 2024-12-13 01:44:44.018 [INFO][5471] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:44:44.042993 containerd[1549]: 2024-12-13 01:44:44.037 [WARNING][5471] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" HandleID="k8s-pod-network.d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Workload="localhost-k8s-csi--node--driver--sgckt-eth0" Dec 13 01:44:44.042993 containerd[1549]: 2024-12-13 01:44:44.037 [INFO][5471] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" HandleID="k8s-pod-network.d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Workload="localhost-k8s-csi--node--driver--sgckt-eth0" Dec 13 01:44:44.042993 containerd[1549]: 2024-12-13 01:44:44.037 [INFO][5471] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:44:44.042993 containerd[1549]: 2024-12-13 01:44:44.040 [INFO][5461] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Dec 13 01:44:44.049790 containerd[1549]: time="2024-12-13T01:44:44.043137459Z" level=info msg="TearDown network for sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\" successfully" Dec 13 01:44:44.049790 containerd[1549]: time="2024-12-13T01:44:44.043159623Z" level=info msg="StopPodSandbox for \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\" returns successfully" Dec 13 01:44:44.049790 containerd[1549]: time="2024-12-13T01:44:44.044762097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sgckt,Uid:be0402b2-9394-4ec8-b88a-518cccbc701b,Namespace:calico-system,Attempt:1,}" Dec 13 01:44:44.046028 systemd[1]: run-netns-cni\x2d3a737ede\x2deda6\x2d2cc9\x2dd12a\x2ddd090209fe35.mount: Deactivated successfully. Dec 13 01:44:44.215335 sshd[5433]: pam_unix(sshd:session): session closed for user core Dec 13 01:44:44.221849 systemd[1]: sshd@9-139.178.70.108:22-139.178.89.65:48250.service: Deactivated successfully. Dec 13 01:44:44.223133 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:44:44.225034 systemd-logind[1526]: Session 12 logged out. Waiting for processes to exit. 
Dec 13 01:44:44.231196 systemd[1]: Started sshd@10-139.178.70.108:22-139.178.89.65:48254.service - OpenSSH per-connection server daemon (139.178.89.65:48254). Dec 13 01:44:44.234332 systemd-logind[1526]: Removed session 12. Dec 13 01:44:44.269650 sshd[5497]: Accepted publickey for core from 139.178.89.65 port 48254 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:44:44.270818 sshd[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:44:44.273622 systemd-logind[1526]: New session 13 of user core. Dec 13 01:44:44.282080 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:44:44.395500 systemd-networkd[1456]: calib2c4a92082e: Link UP Dec 13 01:44:44.396492 systemd-networkd[1456]: calib2c4a92082e: Gained carrier Dec 13 01:44:44.407583 containerd[1549]: 2024-12-13 01:44:44.194 [INFO][5478] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--sgckt-eth0 csi-node-driver- calico-system be0402b2-9394-4ec8-b88a-518cccbc701b 1003 0 2024-12-13 01:43:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-sgckt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib2c4a92082e [] []}} ContainerID="52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0" Namespace="calico-system" Pod="csi-node-driver-sgckt" WorkloadEndpoint="localhost-k8s-csi--node--driver--sgckt-" Dec 13 01:44:44.407583 containerd[1549]: 2024-12-13 01:44:44.194 [INFO][5478] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0" Namespace="calico-system" Pod="csi-node-driver-sgckt" WorkloadEndpoint="localhost-k8s-csi--node--driver--sgckt-eth0" Dec 13 01:44:44.407583 containerd[1549]: 2024-12-13 01:44:44.228 [INFO][5490] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0" HandleID="k8s-pod-network.52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0" Workload="localhost-k8s-csi--node--driver--sgckt-eth0" Dec 13 01:44:44.407583 containerd[1549]: 2024-12-13 01:44:44.238 [INFO][5490] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0" HandleID="k8s-pod-network.52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0" Workload="localhost-k8s-csi--node--driver--sgckt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-sgckt", "timestamp":"2024-12-13 01:44:44.228396489 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:44:44.407583 containerd[1549]: 2024-12-13 01:44:44.238 [INFO][5490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:44:44.407583 containerd[1549]: 2024-12-13 01:44:44.239 [INFO][5490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:44:44.407583 containerd[1549]: 2024-12-13 01:44:44.239 [INFO][5490] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:44:44.407583 containerd[1549]: 2024-12-13 01:44:44.241 [INFO][5490] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0" host="localhost" Dec 13 01:44:44.407583 containerd[1549]: 2024-12-13 01:44:44.368 [INFO][5490] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:44:44.407583 containerd[1549]: 2024-12-13 01:44:44.373 [INFO][5490] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:44:44.407583 containerd[1549]: 2024-12-13 01:44:44.376 [INFO][5490] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:44:44.407583 containerd[1549]: 2024-12-13 01:44:44.378 [INFO][5490] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:44:44.407583 containerd[1549]: 2024-12-13 01:44:44.378 [INFO][5490] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0" host="localhost" Dec 13 01:44:44.407583 containerd[1549]: 2024-12-13 01:44:44.381 [INFO][5490] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0 Dec 13 01:44:44.407583 containerd[1549]: 2024-12-13 01:44:44.384 [INFO][5490] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0" host="localhost" Dec 13 01:44:44.407583 containerd[1549]: 2024-12-13 01:44:44.390 [INFO][5490] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0" host="localhost" Dec 13 01:44:44.407583 containerd[1549]: 2024-12-13 01:44:44.390 [INFO][5490] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0" host="localhost" Dec 13 01:44:44.407583 containerd[1549]: 2024-12-13 01:44:44.390 [INFO][5490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
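The assignment walk above (acquire the host-wide IPAM lock, confirm the host's affinity for 192.168.88.128/26, load the block, claim the next free address) is block-affinity IPAM: each host owns a /26 and hands out ordinals from it. A minimal sketch of the idea only, not Calico's implementation:

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block models a host-affine /26: 64 addresses, one bool per ordinal.
type block struct {
	mu   sync.Mutex // stands in for the "host-wide IPAM lock"
	cidr netip.Prefix
	used [64]bool
}

// assign claims the first free ordinal in the block for the given handle.
func (b *block) assign(handle string) (netip.Addr, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	addr := b.cidr.Addr()
	for i := range b.used {
		if !b.used[i] {
			b.used[i] = true
			return addr, nil
		}
		addr = addr.Next()
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted for handle %s", b.cidr, handle)
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.88.128/26")}
	// Ordinals 0-4 (.128-.132) were claimed by earlier pods in this log,
	// which is why the csi-node-driver pod receives .133.
	for i := 0; i < 5; i++ {
		b.used[i] = true
	}
	ip, _ := b.assign("k8s-pod-network.52969008ad52…")
	fmt.Println(ip) // 192.168.88.133
}
```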
Dec 13 01:44:44.407583 containerd[1549]: 2024-12-13 01:44:44.390 [INFO][5490] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0" HandleID="k8s-pod-network.52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0" Workload="localhost-k8s-csi--node--driver--sgckt-eth0" Dec 13 01:44:44.408389 containerd[1549]: 2024-12-13 01:44:44.392 [INFO][5478] cni-plugin/k8s.go 386: Populated endpoint ContainerID="52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0" Namespace="calico-system" Pod="csi-node-driver-sgckt" WorkloadEndpoint="localhost-k8s-csi--node--driver--sgckt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sgckt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be0402b2-9394-4ec8-b88a-518cccbc701b", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-sgckt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib2c4a92082e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:44:44.408389 containerd[1549]: 2024-12-13 01:44:44.392 [INFO][5478] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0" Namespace="calico-system" Pod="csi-node-driver-sgckt" WorkloadEndpoint="localhost-k8s-csi--node--driver--sgckt-eth0" Dec 13 01:44:44.408389 containerd[1549]: 2024-12-13 01:44:44.392 [INFO][5478] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2c4a92082e ContainerID="52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0" Namespace="calico-system" Pod="csi-node-driver-sgckt" WorkloadEndpoint="localhost-k8s-csi--node--driver--sgckt-eth0" Dec 13 01:44:44.408389 containerd[1549]: 2024-12-13 01:44:44.394 [INFO][5478] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0" Namespace="calico-system" Pod="csi-node-driver-sgckt" WorkloadEndpoint="localhost-k8s-csi--node--driver--sgckt-eth0" Dec 13 01:44:44.408389 containerd[1549]: 2024-12-13 01:44:44.395 [INFO][5478] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0" Namespace="calico-system" Pod="csi-node-driver-sgckt" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--sgckt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sgckt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be0402b2-9394-4ec8-b88a-518cccbc701b", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0", Pod:"csi-node-driver-sgckt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib2c4a92082e", MAC:"ca:9b:d4:2d:60:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:44:44.408389 containerd[1549]: 2024-12-13 01:44:44.403 [INFO][5478] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0" Namespace="calico-system" Pod="csi-node-driver-sgckt" WorkloadEndpoint="localhost-k8s-csi--node--driver--sgckt-eth0" Dec 13 01:44:44.458191 containerd[1549]: time="2024-12-13T01:44:44.458026558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:44:44.458191 containerd[1549]: time="2024-12-13T01:44:44.458071836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:44:44.458191 containerd[1549]: time="2024-12-13T01:44:44.458087605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:44:44.458191 containerd[1549]: time="2024-12-13T01:44:44.458144362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:44:44.482073 systemd[1]: Started cri-containerd-52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0.scope - libcontainer container 52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0. 
Dec 13 01:44:44.495825 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:44:44.503356 containerd[1549]: time="2024-12-13T01:44:44.503332534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sgckt,Uid:be0402b2-9394-4ec8-b88a-518cccbc701b,Namespace:calico-system,Attempt:1,} returns sandbox id \"52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0\"" Dec 13 01:44:44.544947 sshd[5497]: pam_unix(sshd:session): session closed for user core Dec 13 01:44:44.551892 systemd[1]: sshd@10-139.178.70.108:22-139.178.89.65:48254.service: Deactivated successfully. Dec 13 01:44:44.554256 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:44:44.555315 systemd-logind[1526]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:44:44.562695 systemd[1]: Started sshd@11-139.178.70.108:22-139.178.89.65:48256.service - OpenSSH per-connection server daemon (139.178.89.65:48256). Dec 13 01:44:44.564231 systemd-logind[1526]: Removed session 13. Dec 13 01:44:44.607593 sshd[5568]: Accepted publickey for core from 139.178.89.65 port 48256 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:44:44.608604 sshd[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:44:44.611560 systemd-logind[1526]: New session 14 of user core. Dec 13 01:44:44.617220 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:44:44.705688 sshd[5568]: pam_unix(sshd:session): session closed for user core Dec 13 01:44:44.707657 systemd[1]: sshd@11-139.178.70.108:22-139.178.89.65:48256.service: Deactivated successfully. Dec 13 01:44:44.709106 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:44:44.709640 systemd-logind[1526]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:44:44.710283 systemd-logind[1526]: Removed session 14. Dec 13 01:44:44.857058 containerd[1549]: time="2024-12-13T01:44:44.856393809Z" level=info msg="StopPodSandbox for \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\"" Dec 13 01:44:44.928995 containerd[1549]: 2024-12-13 01:44:44.893 [INFO][5592] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Dec 13 01:44:44.928995 containerd[1549]: 2024-12-13 01:44:44.894 [INFO][5592] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" iface="eth0" netns="/var/run/netns/cni-273ca947-f0ae-8e99-9485-1b854765b21a" Dec 13 01:44:44.928995 containerd[1549]: 2024-12-13 01:44:44.894 [INFO][5592] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" iface="eth0" netns="/var/run/netns/cni-273ca947-f0ae-8e99-9485-1b854765b21a" Dec 13 01:44:44.928995 containerd[1549]: 2024-12-13 01:44:44.894 [INFO][5592] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" iface="eth0" netns="/var/run/netns/cni-273ca947-f0ae-8e99-9485-1b854765b21a" Dec 13 01:44:44.928995 containerd[1549]: 2024-12-13 01:44:44.894 [INFO][5592] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Dec 13 01:44:44.928995 containerd[1549]: 2024-12-13 01:44:44.894 [INFO][5592] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Dec 13 01:44:44.928995 containerd[1549]: 2024-12-13 01:44:44.921 [INFO][5598] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" HandleID="k8s-pod-network.2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Workload="localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0" Dec 13 01:44:44.928995 containerd[1549]: 2024-12-13 01:44:44.921 [INFO][5598] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:44:44.928995 containerd[1549]: 2024-12-13 01:44:44.921 [INFO][5598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:44:44.928995 containerd[1549]: 2024-12-13 01:44:44.925 [WARNING][5598] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" HandleID="k8s-pod-network.2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Workload="localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0" Dec 13 01:44:44.928995 containerd[1549]: 2024-12-13 01:44:44.925 [INFO][5598] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" HandleID="k8s-pod-network.2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Workload="localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0" Dec 13 01:44:44.928995 containerd[1549]: 2024-12-13 01:44:44.926 [INFO][5598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:44:44.928995 containerd[1549]: 2024-12-13 01:44:44.927 [INFO][5592] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Dec 13 01:44:44.930052 containerd[1549]: time="2024-12-13T01:44:44.929061582Z" level=info msg="TearDown network for sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\" successfully" Dec 13 01:44:44.930052 containerd[1549]: time="2024-12-13T01:44:44.929078581Z" level=info msg="StopPodSandbox for \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\" returns successfully" Dec 13 01:44:44.930052 containerd[1549]: time="2024-12-13T01:44:44.929631250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5bfd545-426d5,Uid:91cdf967-f24e-4903-a5a6-9102cde06b2f,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:44:45.033833 systemd-networkd[1456]: cali45970deb1b5: Link UP Dec 13 01:44:45.034725 systemd-networkd[1456]: cali45970deb1b5: Gained carrier Dec 13 01:44:45.047994 containerd[1549]: 2024-12-13 01:44:44.975 [INFO][5606] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0 calico-apiserver-d5bfd545- calico-apiserver 91cdf967-f24e-4903-a5a6-9102cde06b2f 1026 0 2024-12-13 01:43:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d5bfd545 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d5bfd545-426d5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali45970deb1b5 [] []}} ContainerID="1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3" Namespace="calico-apiserver" Pod="calico-apiserver-d5bfd545-426d5" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bfd545--426d5-" Dec 13 01:44:45.047994 containerd[1549]: 2024-12-13 01:44:44.975 [INFO][5606] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3" Namespace="calico-apiserver" Pod="calico-apiserver-d5bfd545-426d5" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0" Dec 13 01:44:45.047994 containerd[1549]: 2024-12-13 01:44:45.006 [INFO][5618] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3" HandleID="k8s-pod-network.1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3" Workload="localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0" Dec 13 01:44:45.047994 containerd[1549]: 2024-12-13 01:44:45.013 [INFO][5618] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3" HandleID="k8s-pod-network.1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3" Workload="localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051370), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d5bfd545-426d5", "timestamp":"2024-12-13 01:44:45.006874891 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:44:45.047994 containerd[1549]: 2024-12-13 01:44:45.013 [INFO][5618] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. Dec 13 01:44:45.047994 containerd[1549]: 2024-12-13 01:44:45.013 [INFO][5618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:44:45.047994 containerd[1549]: 2024-12-13 01:44:45.013 [INFO][5618] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:44:45.047994 containerd[1549]: 2024-12-13 01:44:45.014 [INFO][5618] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3" host="localhost" Dec 13 01:44:45.047994 containerd[1549]: 2024-12-13 01:44:45.017 [INFO][5618] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:44:45.047994 containerd[1549]: 2024-12-13 01:44:45.020 [INFO][5618] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:44:45.047994 containerd[1549]: 2024-12-13 01:44:45.021 [INFO][5618] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:44:45.047994 containerd[1549]: 2024-12-13 01:44:45.022 [INFO][5618] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:44:45.047994 containerd[1549]: 2024-12-13 01:44:45.022 [INFO][5618] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3" host="localhost" Dec 13 01:44:45.047994 containerd[1549]: 2024-12-13 01:44:45.023 [INFO][5618] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3 Dec 13 01:44:45.047994 containerd[1549]: 2024-12-13 01:44:45.025 [INFO][5618] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3" host="localhost" Dec 13 01:44:45.047994 containerd[1549]: 2024-12-13 01:44:45.029 [INFO][5618] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3" host="localhost" Dec 13 01:44:45.047994 containerd[1549]: 2024-12-13 01:44:45.029 [INFO][5618] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3" host="localhost" Dec 13 01:44:45.047994 containerd[1549]: 2024-12-13 01:44:45.029 [INFO][5618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:44:45.047994 containerd[1549]: 2024-12-13 01:44:45.029 [INFO][5618] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3" HandleID="k8s-pod-network.1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3" Workload="localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0" Dec 13 01:44:45.048812 containerd[1549]: 2024-12-13 01:44:45.031 [INFO][5606] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3" Namespace="calico-apiserver" Pod="calico-apiserver-d5bfd545-426d5" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0", GenerateName:"calico-apiserver-d5bfd545-", Namespace:"calico-apiserver", SelfLink:"", UID:"91cdf967-f24e-4903-a5a6-9102cde06b2f", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5bfd545", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d5bfd545-426d5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45970deb1b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:44:45.048812 containerd[1549]: 2024-12-13 01:44:45.031 [INFO][5606] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3" Namespace="calico-apiserver" Pod="calico-apiserver-d5bfd545-426d5" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0" Dec 13 01:44:45.048812 containerd[1549]: 2024-12-13 01:44:45.031 [INFO][5606] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45970deb1b5 ContainerID="1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3" Namespace="calico-apiserver" Pod="calico-apiserver-d5bfd545-426d5" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0" Dec 13 01:44:45.048812 containerd[1549]: 2024-12-13 01:44:45.035 [INFO][5606] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3" Namespace="calico-apiserver" Pod="calico-apiserver-d5bfd545-426d5" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0" Dec 13 01:44:45.048812 containerd[1549]: 2024-12-13 01:44:45.035 [INFO][5606] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3" 
Namespace="calico-apiserver" Pod="calico-apiserver-d5bfd545-426d5" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0", GenerateName:"calico-apiserver-d5bfd545-", Namespace:"calico-apiserver", SelfLink:"", UID:"91cdf967-f24e-4903-a5a6-9102cde06b2f", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5bfd545", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3", Pod:"calico-apiserver-d5bfd545-426d5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45970deb1b5", MAC:"c2:e5:07:03:3a:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:44:45.048812 containerd[1549]: 2024-12-13 01:44:45.042 [INFO][5606] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3" Namespace="calico-apiserver" Pod="calico-apiserver-d5bfd545-426d5" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0" Dec 13 01:44:45.048838 systemd[1]: run-containerd-runc-k8s.io-52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0-runc.wnnicx.mount: Deactivated successfully. Dec 13 01:44:45.048891 systemd[1]: run-netns-cni\x2d273ca947\x2df0ae\x2d8e99\x2d9485\x2d1b854765b21a.mount: Deactivated successfully. Dec 13 01:44:45.075535 containerd[1549]: time="2024-12-13T01:44:45.075377322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:44:45.075535 containerd[1549]: time="2024-12-13T01:44:45.075409134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:44:45.075535 containerd[1549]: time="2024-12-13T01:44:45.075415966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:44:45.075535 containerd[1549]: time="2024-12-13T01:44:45.075490098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:44:45.095096 systemd[1]: Started cri-containerd-1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3.scope - libcontainer container 1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3. 
Dec 13 01:44:45.103446 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:44:45.122518 containerd[1549]: time="2024-12-13T01:44:45.122376149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5bfd545-426d5,Uid:91cdf967-f24e-4903-a5a6-9102cde06b2f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3\"" Dec 13 01:44:45.939926 containerd[1549]: time="2024-12-13T01:44:45.939643819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 01:44:45.939926 containerd[1549]: time="2024-12-13T01:44:45.939889962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:44:45.944262 containerd[1549]: time="2024-12-13T01:44:45.944214724Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:44:45.945161 containerd[1549]: time="2024-12-13T01:44:45.944647828Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.340084616s" Dec 13 01:44:45.945161 containerd[1549]: time="2024-12-13T01:44:45.944667950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 01:44:45.945161 containerd[1549]: time="2024-12-13T01:44:45.944901212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:44:45.946724 containerd[1549]: time="2024-12-13T01:44:45.946685103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:44:45.960195 containerd[1549]: time="2024-12-13T01:44:45.960166920Z" level=info msg="CreateContainer within sandbox \"182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:44:45.970782 containerd[1549]: time="2024-12-13T01:44:45.970752126Z" level=info msg="CreateContainer within sandbox \"182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3ec8190070aa81ede6889d4865e620ed0e0afa98a960192cf4a84bfe025b532c\"" Dec 13 01:44:45.971724 containerd[1549]: time="2024-12-13T01:44:45.971126976Z" level=info msg="StartContainer for \"3ec8190070aa81ede6889d4865e620ed0e0afa98a960192cf4a84bfe025b532c\"" Dec 13 01:44:45.996136 systemd[1]: Started cri-containerd-3ec8190070aa81ede6889d4865e620ed0e0afa98a960192cf4a84bfe025b532c.scope - libcontainer container 3ec8190070aa81ede6889d4865e620ed0e0afa98a960192cf4a84bfe025b532c. 
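The pull entry above reports both a byte size and a wall-clock duration, so an effective transfer rate falls out of one division. Which size containerd reports (compressed vs. unpacked) varies, so treat the result as a rough figure:

```go
package main

import (
	"fmt"
	"time"
)

// Effective pull rate for kube-controllers:v3.29.1 from the entry above:
// 35634244 bytes over 4.340084616s of wall-clock pulling.
func main() {
	const sizeBytes = 35634244
	dur, err := time.ParseDuration("4.340084616s")
	if err != nil {
		panic(err)
	}
	rate := float64(sizeBytes) / dur.Seconds() / (1 << 20)
	fmt.Printf("~%.1f MiB/s\n", rate) // prints ~7.8 MiB/s
}
```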
Dec 13 01:44:46.026173 systemd-networkd[1456]: calib2c4a92082e: Gained IPv6LL Dec 13 01:44:46.034389 containerd[1549]: time="2024-12-13T01:44:46.034053580Z" level=info msg="StartContainer for \"3ec8190070aa81ede6889d4865e620ed0e0afa98a960192cf4a84bfe025b532c\" returns successfully" Dec 13 01:44:46.090084 systemd-networkd[1456]: cali45970deb1b5: Gained IPv6LL Dec 13 01:44:46.718624 systemd[1]: run-containerd-runc-k8s.io-3ec8190070aa81ede6889d4865e620ed0e0afa98a960192cf4a84bfe025b532c-runc.3M1kh2.mount: Deactivated successfully. Dec 13 01:44:46.758994 kubelet[2747]: I1213 01:44:46.758021 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-74d5794888-n2p4c" podStartSLOduration=75.139677764 podStartE2EDuration="1m19.758003538s" podCreationTimestamp="2024-12-13 01:43:27 +0000 UTC" firstStartedPulling="2024-12-13 01:44:41.328295604 +0000 UTC m=+87.601985309" lastFinishedPulling="2024-12-13 01:44:45.946621378 +0000 UTC m=+92.220311083" observedRunningTime="2024-12-13 01:44:46.693411856 +0000 UTC m=+92.967101578" watchObservedRunningTime="2024-12-13 01:44:46.758003538 +0000 UTC m=+93.031693254" Dec 13 01:44:49.614370 containerd[1549]: time="2024-12-13T01:44:49.614341047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:44:49.615073 containerd[1549]: time="2024-12-13T01:44:49.615035229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 01:44:49.615315 containerd[1549]: time="2024-12-13T01:44:49.615297937Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:44:49.617028 containerd[1549]: time="2024-12-13T01:44:49.617010923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:44:49.617857 containerd[1549]: time="2024-12-13T01:44:49.617833265Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.671102825s" Dec 13 01:44:49.617857 containerd[1549]: time="2024-12-13T01:44:49.617854458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:44:49.618592 containerd[1549]: time="2024-12-13T01:44:49.618575022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:44:49.620226 containerd[1549]: time="2024-12-13T01:44:49.620210498Z" level=info msg="CreateContainer within sandbox \"1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:44:49.645919 containerd[1549]: time="2024-12-13T01:44:49.645888394Z" level=info msg="CreateContainer within sandbox \"1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"9eaf29773941f5bb3afece23318b8ea23aecc8c9c942fa0133e322b81039ab13\"" Dec 13 01:44:49.647529 containerd[1549]: time="2024-12-13T01:44:49.647511319Z" level=info msg="StartContainer for \"9eaf29773941f5bb3afece23318b8ea23aecc8c9c942fa0133e322b81039ab13\"" Dec 13 01:44:49.672073 systemd[1]: Started cri-containerd-9eaf29773941f5bb3afece23318b8ea23aecc8c9c942fa0133e322b81039ab13.scope - libcontainer container 9eaf29773941f5bb3afece23318b8ea23aecc8c9c942fa0133e322b81039ab13. Dec 13 01:44:49.704343 containerd[1549]: time="2024-12-13T01:44:49.704317051Z" level=info msg="StartContainer for \"9eaf29773941f5bb3afece23318b8ea23aecc8c9c942fa0133e322b81039ab13\" returns successfully" Dec 13 01:44:49.716046 systemd[1]: Started sshd@12-139.178.70.108:22-139.178.89.65:35416.service - OpenSSH per-connection server daemon (139.178.89.65:35416). Dec 13 01:44:49.784332 sshd[5790]: Accepted publickey for core from 139.178.89.65 port 35416 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:44:49.786332 sshd[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:44:49.791890 systemd-logind[1526]: New session 15 of user core. Dec 13 01:44:49.797093 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:44:50.267924 sshd[5790]: pam_unix(sshd:session): session closed for user core Dec 13 01:44:50.269848 systemd[1]: sshd@12-139.178.70.108:22-139.178.89.65:35416.service: Deactivated successfully. Dec 13 01:44:50.272161 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:44:50.274826 systemd-logind[1526]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:44:50.276363 systemd-logind[1526]: Removed session 15. Dec 13 01:44:50.762106 kubelet[2747]: I1213 01:44:50.762070 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d5bfd545-r874s" podStartSLOduration=75.59867702 podStartE2EDuration="1m23.762053631s" podCreationTimestamp="2024-12-13 01:43:27 +0000 UTC" firstStartedPulling="2024-12-13 01:44:41.455053058 +0000 UTC m=+87.728742763" lastFinishedPulling="2024-12-13 01:44:49.618429669 +0000 UTC m=+95.892119374" observedRunningTime="2024-12-13 01:44:50.725909146 +0000 UTC m=+96.999598860" watchObservedRunningTime="2024-12-13 01:44:50.762053631 +0000 UTC m=+97.035743345" Dec 13 01:44:52.137851 containerd[1549]: time="2024-12-13T01:44:52.137813731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:44:52.162287 containerd[1549]: time="2024-12-13T01:44:52.162247017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 01:44:52.183521 containerd[1549]: time="2024-12-13T01:44:52.183474429Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:44:52.272178 containerd[1549]: time="2024-12-13T01:44:52.272125908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:44:52.273003 containerd[1549]: time="2024-12-13T01:44:52.272619574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag 
\"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.654024917s" Dec 13 01:44:52.273003 containerd[1549]: time="2024-12-13T01:44:52.272644676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 01:44:52.311244 containerd[1549]: time="2024-12-13T01:44:52.273544077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:44:52.500279 containerd[1549]: time="2024-12-13T01:44:52.500242281Z" level=info msg="CreateContainer within sandbox \"52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:44:52.560996 containerd[1549]: time="2024-12-13T01:44:52.560848859Z" level=info msg="CreateContainer within sandbox \"52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b94b098ad31781d4523a4f81af89eb6ba4b640b523e0f83fbff4fdd369b574b3\"" Dec 13 01:44:52.561986 containerd[1549]: time="2024-12-13T01:44:52.561161858Z" level=info msg="StartContainer for \"b94b098ad31781d4523a4f81af89eb6ba4b640b523e0f83fbff4fdd369b574b3\"" Dec 13 01:44:52.591152 systemd[1]: Started cri-containerd-b94b098ad31781d4523a4f81af89eb6ba4b640b523e0f83fbff4fdd369b574b3.scope - libcontainer container b94b098ad31781d4523a4f81af89eb6ba4b640b523e0f83fbff4fdd369b574b3. Dec 13 01:44:52.631035 containerd[1549]: time="2024-12-13T01:44:52.631006959Z" level=info msg="StartContainer for \"b94b098ad31781d4523a4f81af89eb6ba4b640b523e0f83fbff4fdd369b574b3\" returns successfully" Dec 13 01:44:52.668878 containerd[1549]: time="2024-12-13T01:44:52.668274435Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:44:52.669954 containerd[1549]: time="2024-12-13T01:44:52.669216645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:44:52.671230 containerd[1549]: time="2024-12-13T01:44:52.670269676Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 396.707738ms" Dec 13 01:44:52.671230 containerd[1549]: time="2024-12-13T01:44:52.670291067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:44:52.681284 containerd[1549]: time="2024-12-13T01:44:52.680988440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:44:52.682272 containerd[1549]: time="2024-12-13T01:44:52.682249495Z" level=info msg="CreateContainer within sandbox \"1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:44:52.693966 containerd[1549]: time="2024-12-13T01:44:52.693621289Z" level=info msg="CreateContainer within sandbox \"1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"80a81069a037dc39ca2463aba25e21e5b9bf48875a799a9881204a41e83a0cf0\"" Dec 13 01:44:52.694688 containerd[1549]: time="2024-12-13T01:44:52.694672747Z" level=info msg="StartContainer for \"80a81069a037dc39ca2463aba25e21e5b9bf48875a799a9881204a41e83a0cf0\"" Dec 13 01:44:52.697266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2680396654.mount: Deactivated successfully. Dec 13 01:44:52.720079 systemd[1]: Started cri-containerd-80a81069a037dc39ca2463aba25e21e5b9bf48875a799a9881204a41e83a0cf0.scope - libcontainer container 80a81069a037dc39ca2463aba25e21e5b9bf48875a799a9881204a41e83a0cf0. Dec 13 01:44:52.776258 containerd[1549]: time="2024-12-13T01:44:52.775620030Z" level=info msg="StartContainer for \"80a81069a037dc39ca2463aba25e21e5b9bf48875a799a9881204a41e83a0cf0\" returns successfully" Dec 13 01:44:52.940042 kubelet[2747]: I1213 01:44:52.939970 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d5bfd545-426d5" podStartSLOduration=78.384108689 podStartE2EDuration="1m25.939943299s" podCreationTimestamp="2024-12-13 01:43:27 +0000 UTC" firstStartedPulling="2024-12-13 01:44:45.124032336 +0000 UTC m=+91.397722041" lastFinishedPulling="2024-12-13 01:44:52.679866946 +0000 UTC m=+98.953556651" observedRunningTime="2024-12-13 01:44:52.932074896 +0000 UTC m=+99.205764602" watchObservedRunningTime="2024-12-13 01:44:52.939943299 +0000 UTC m=+99.213633008" Dec 13 01:44:54.283252 containerd[1549]: time="2024-12-13T01:44:54.283226935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:44:54.292314 containerd[1549]: time="2024-12-13T01:44:54.292281016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 01:44:54.298009 containerd[1549]: time="2024-12-13T01:44:54.297766583Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:44:54.301615 containerd[1549]: time="2024-12-13T01:44:54.301569391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:44:54.302100 containerd[1549]: time="2024-12-13T01:44:54.301998576Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.620983485s" Dec 13 01:44:54.302100 containerd[1549]: time="2024-12-13T01:44:54.302018467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 01:44:54.467122 containerd[1549]: time="2024-12-13T01:44:54.466967990Z" level=info msg="CreateContainer within sandbox \"52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:44:54.513400 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2963519616.mount: Deactivated successfully. Dec 13 01:44:54.656828 containerd[1549]: time="2024-12-13T01:44:54.656570811Z" level=info msg="CreateContainer within sandbox \"52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"25a0600db1ee1c79ac5c8e6cbef0e7fac41be56049906e910a04030a171fbcca\"" Dec 13 01:44:54.657116 containerd[1549]: time="2024-12-13T01:44:54.657082359Z" level=info msg="StartContainer for \"25a0600db1ee1c79ac5c8e6cbef0e7fac41be56049906e910a04030a171fbcca\"" Dec 13 01:44:54.683155 systemd[1]: Started cri-containerd-25a0600db1ee1c79ac5c8e6cbef0e7fac41be56049906e910a04030a171fbcca.scope - libcontainer container 25a0600db1ee1c79ac5c8e6cbef0e7fac41be56049906e910a04030a171fbcca. Dec 13 01:44:54.748418 containerd[1549]: time="2024-12-13T01:44:54.748335855Z" level=info msg="StartContainer for \"25a0600db1ee1c79ac5c8e6cbef0e7fac41be56049906e910a04030a171fbcca\" returns successfully" Dec 13 01:44:55.299247 kubelet[2747]: I1213 01:44:55.295759 2747 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:44:55.312904 kubelet[2747]: I1213 01:44:55.312821 2747 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:44:55.325230 systemd[1]: Started sshd@13-139.178.70.108:22-139.178.89.65:35428.service - OpenSSH per-connection server daemon (139.178.89.65:35428). Dec 13 01:44:55.886481 sshd[6017]: Accepted publickey for core from 139.178.89.65 port 35428 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:44:55.900310 sshd[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:44:55.912712 systemd-logind[1526]: New session 16 of user core. Dec 13 01:44:55.920079 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:44:57.139728 sshd[6017]: pam_unix(sshd:session): session closed for user core Dec 13 01:44:57.143917 systemd[1]: sshd@13-139.178.70.108:22-139.178.89.65:35428.service: Deactivated successfully. Dec 13 01:44:57.145658 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:44:57.147077 systemd-logind[1526]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:44:57.148965 systemd-logind[1526]: Removed session 16. Dec 13 01:44:58.983995 kubelet[2747]: I1213 01:44:58.983820 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-sgckt" podStartSLOduration=82.136107564 podStartE2EDuration="1m31.965188372s" podCreationTimestamp="2024-12-13 01:43:27 +0000 UTC" firstStartedPulling="2024-12-13 01:44:44.505216584 +0000 UTC m=+90.778906289" lastFinishedPulling="2024-12-13 01:44:54.334297393 +0000 UTC m=+100.607987097" observedRunningTime="2024-12-13 01:44:54.895173361 +0000 UTC m=+101.168863070" watchObservedRunningTime="2024-12-13 01:44:58.965188372 +0000 UTC m=+105.238878079" Dec 13 01:45:02.150364 systemd[1]: Started sshd@14-139.178.70.108:22-139.178.89.65:38258.service - OpenSSH per-connection server daemon (139.178.89.65:38258). 
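The kubelet csi_plugin lines above are the plugin-watcher handshake for csi.tigera.io: kubelet probes a registration socket, calls GetInfo to learn the driver name, endpoint, and supported versions, then reports the outcome via NotifyRegistrationStatus. A sketch of the registrar side under the kubelet pluginregistration v1 API; the registration socket path is an assumption, and the signatures are from memory rather than from this log:

```go
package main

import (
	"context"
	"net"
	"os"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

type registrar struct{}

// GetInfo answers kubelet's probe with the driver identity seen in the log.
func (registrar) GetInfo(ctx context.Context, req *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "csi.tigera.io",
		Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		SupportedVersions: []string{"1.0.0"},
	}, nil
}

// NotifyRegistrationStatus receives kubelet's verdict on the registration.
func (registrar) NotifyRegistrationStatus(ctx context.Context, st *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	if !st.PluginRegistered {
		os.Exit(1) // let the pod restart and retry
	}
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	// Hypothetical registration socket under kubelet's plugins_registry dir.
	l, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock")
	if err != nil {
		panic(err)
	}
	s := grpc.NewServer()
	registerapi.RegisterRegistrationServer(s, registrar{})
	_ = s.Serve(l)
}
```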
Dec 13 01:45:02.330531 sshd[6073]: Accepted publickey for core from 139.178.89.65 port 38258 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:45:02.331641 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:45:02.335013 systemd-logind[1526]: New session 17 of user core. Dec 13 01:45:02.339153 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:45:03.387148 sshd[6073]: pam_unix(sshd:session): session closed for user core Dec 13 01:45:03.389001 systemd[1]: sshd@14-139.178.70.108:22-139.178.89.65:38258.service: Deactivated successfully. Dec 13 01:45:03.390413 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:45:03.391658 systemd-logind[1526]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:45:03.392289 systemd-logind[1526]: Removed session 17. Dec 13 01:45:08.395549 systemd[1]: Started sshd@15-139.178.70.108:22-139.178.89.65:48024.service - OpenSSH per-connection server daemon (139.178.89.65:48024). Dec 13 01:45:08.452898 sshd[6089]: Accepted publickey for core from 139.178.89.65 port 48024 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:45:08.454111 sshd[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:45:08.457172 systemd-logind[1526]: New session 18 of user core. Dec 13 01:45:08.465075 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:45:08.570646 sshd[6089]: pam_unix(sshd:session): session closed for user core Dec 13 01:45:08.577323 systemd[1]: sshd@15-139.178.70.108:22-139.178.89.65:48024.service: Deactivated successfully. Dec 13 01:45:08.578496 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:45:08.579347 systemd-logind[1526]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:45:08.584125 systemd[1]: Started sshd@16-139.178.70.108:22-139.178.89.65:48028.service - OpenSSH per-connection server daemon (139.178.89.65:48028). Dec 13 01:45:08.584731 systemd-logind[1526]: Removed session 18. Dec 13 01:45:08.644354 sshd[6102]: Accepted publickey for core from 139.178.89.65 port 48028 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:45:08.645775 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:45:08.648872 systemd-logind[1526]: New session 19 of user core. Dec 13 01:45:08.656125 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:45:09.919400 systemd[1]: Started sshd@17-139.178.70.108:22-139.178.89.65:48036.service - OpenSSH per-connection server daemon (139.178.89.65:48036). Dec 13 01:45:09.924647 sshd[6102]: pam_unix(sshd:session): session closed for user core Dec 13 01:45:09.974781 systemd[1]: sshd@16-139.178.70.108:22-139.178.89.65:48028.service: Deactivated successfully. Dec 13 01:45:09.976871 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:45:09.978228 systemd-logind[1526]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:45:09.979295 systemd-logind[1526]: Removed session 19. Dec 13 01:45:10.037927 sshd[6111]: Accepted publickey for core from 139.178.89.65 port 48036 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:45:10.038814 sshd[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:45:10.043403 systemd-logind[1526]: New session 20 of user core. Dec 13 01:45:10.053178 systemd[1]: Started session-20.scope - Session 20 of User core. 
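The pod_startup_latency_tracker entries in this log relate their two headline numbers: podStartSLOduration is the end-to-end startup time minus the image-pull window, since pulls are excluded from the startup SLO. Re-running the arithmetic on the calico-kube-controllers entry's timestamps reproduces the logged 75.139677764s exactly; a small Go check:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the calico-kube-controllers
	// pod_startup_latency_tracker entry earlier in this log.
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2024-12-13 01:43:27 +0000 UTC")
	pullStart := parse("2024-12-13 01:44:41.328295604 +0000 UTC")
	pullEnd := parse("2024-12-13 01:44:45.946621378 +0000 UTC")
	observed := parse("2024-12-13 01:44:46.758003538 +0000 UTC")

	e2e := observed.Sub(created)        // podStartE2EDuration: 1m19.758003538s
	slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration: 75.139677764s
	fmt.Println(e2e, slo.Seconds())
}
```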
Dec 13 01:45:12.209631 sshd[6111]: pam_unix(sshd:session): session closed for user core Dec 13 01:45:12.232798 systemd[1]: sshd@17-139.178.70.108:22-139.178.89.65:48036.service: Deactivated successfully. Dec 13 01:45:12.234387 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:45:12.235217 systemd-logind[1526]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:45:12.237194 systemd[1]: Started sshd@18-139.178.70.108:22-139.178.89.65:48040.service - OpenSSH per-connection server daemon (139.178.89.65:48040). Dec 13 01:45:12.237836 systemd-logind[1526]: Removed session 20. Dec 13 01:45:12.372161 sshd[6128]: Accepted publickey for core from 139.178.89.65 port 48040 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:45:12.373337 sshd[6128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:45:12.376736 systemd-logind[1526]: New session 21 of user core. Dec 13 01:45:12.383160 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:45:13.433296 sshd[6128]: pam_unix(sshd:session): session closed for user core Dec 13 01:45:13.441647 systemd[1]: sshd@18-139.178.70.108:22-139.178.89.65:48040.service: Deactivated successfully. Dec 13 01:45:13.443053 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:45:13.443849 systemd-logind[1526]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:45:13.447186 systemd[1]: Started sshd@19-139.178.70.108:22-139.178.89.65:48042.service - OpenSSH per-connection server daemon (139.178.89.65:48042). Dec 13 01:45:13.447928 systemd-logind[1526]: Removed session 21. Dec 13 01:45:13.504185 sshd[6146]: Accepted publickey for core from 139.178.89.65 port 48042 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:45:13.505072 sshd[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:45:13.508091 systemd-logind[1526]: New session 22 of user core. Dec 13 01:45:13.513130 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:45:13.636304 sshd[6146]: pam_unix(sshd:session): session closed for user core Dec 13 01:45:13.638302 systemd-logind[1526]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:45:13.638511 systemd[1]: sshd@19-139.178.70.108:22-139.178.89.65:48042.service: Deactivated successfully. Dec 13 01:45:13.639934 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:45:13.641280 systemd-logind[1526]: Removed session 22. Dec 13 01:45:14.397200 containerd[1549]: time="2024-12-13T01:45:14.397168773Z" level=info msg="StopPodSandbox for \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\"" Dec 13 01:45:16.009297 containerd[1549]: 2024-12-13 01:45:15.804 [WARNING][6177] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0", GenerateName:"calico-apiserver-d5bfd545-", Namespace:"calico-apiserver", SelfLink:"", UID:"49679827-80b0-45f6-a6cd-64adcfa67f6f", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5bfd545", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb", Pod:"calico-apiserver-d5bfd545-r874s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2838fdd58b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:45:16.009297 containerd[1549]: 2024-12-13 01:45:15.807 [INFO][6177] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Dec 13 01:45:16.009297 containerd[1549]: 2024-12-13 01:45:15.807 [INFO][6177] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" iface="eth0" netns="" Dec 13 01:45:16.009297 containerd[1549]: 2024-12-13 01:45:15.807 [INFO][6177] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Dec 13 01:45:16.009297 containerd[1549]: 2024-12-13 01:45:15.807 [INFO][6177] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Dec 13 01:45:16.009297 containerd[1549]: 2024-12-13 01:45:15.990 [INFO][6184] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" HandleID="k8s-pod-network.1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Workload="localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0" Dec 13 01:45:16.009297 containerd[1549]: 2024-12-13 01:45:15.995 [INFO][6184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:45:16.009297 containerd[1549]: 2024-12-13 01:45:15.996 [INFO][6184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:45:16.009297 containerd[1549]: 2024-12-13 01:45:16.006 [WARNING][6184] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" HandleID="k8s-pod-network.1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Workload="localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0" Dec 13 01:45:16.009297 containerd[1549]: 2024-12-13 01:45:16.006 [INFO][6184] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" HandleID="k8s-pod-network.1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Workload="localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0" Dec 13 01:45:16.009297 containerd[1549]: 2024-12-13 01:45:16.007 [INFO][6184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:45:16.009297 containerd[1549]: 2024-12-13 01:45:16.008 [INFO][6177] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Dec 13 01:45:16.012685 containerd[1549]: time="2024-12-13T01:45:16.012658063Z" level=info msg="TearDown network for sandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\" successfully" Dec 13 01:45:16.012685 containerd[1549]: time="2024-12-13T01:45:16.012683708Z" level=info msg="StopPodSandbox for \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\" returns successfully" Dec 13 01:45:16.169822 containerd[1549]: time="2024-12-13T01:45:16.169663321Z" level=info msg="RemovePodSandbox for \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\"" Dec 13 01:45:16.184335 containerd[1549]: time="2024-12-13T01:45:16.184287909Z" level=info msg="Forcibly stopping sandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\"" Dec 13 01:45:16.318323 containerd[1549]: 2024-12-13 01:45:16.235 [WARNING][6202] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0", GenerateName:"calico-apiserver-d5bfd545-", Namespace:"calico-apiserver", SelfLink:"", UID:"49679827-80b0-45f6-a6cd-64adcfa67f6f", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5bfd545", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1c7f1a23307be20f29b5954a16fdbfd24e96bdfafbdffce52c69bdb1ae169ccb", Pod:"calico-apiserver-d5bfd545-r874s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2838fdd58b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:45:16.318323 containerd[1549]: 2024-12-13 01:45:16.235 [INFO][6202] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Dec 13 01:45:16.318323 containerd[1549]: 2024-12-13 01:45:16.235 [INFO][6202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" iface="eth0" netns="" Dec 13 01:45:16.318323 containerd[1549]: 2024-12-13 01:45:16.235 [INFO][6202] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Dec 13 01:45:16.318323 containerd[1549]: 2024-12-13 01:45:16.235 [INFO][6202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Dec 13 01:45:16.318323 containerd[1549]: 2024-12-13 01:45:16.309 [INFO][6208] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" HandleID="k8s-pod-network.1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Workload="localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0" Dec 13 01:45:16.318323 containerd[1549]: 2024-12-13 01:45:16.309 [INFO][6208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:45:16.318323 containerd[1549]: 2024-12-13 01:45:16.309 [INFO][6208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:45:16.318323 containerd[1549]: 2024-12-13 01:45:16.313 [WARNING][6208] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" HandleID="k8s-pod-network.1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Workload="localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0" Dec 13 01:45:16.318323 containerd[1549]: 2024-12-13 01:45:16.313 [INFO][6208] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" HandleID="k8s-pod-network.1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Workload="localhost-k8s-calico--apiserver--d5bfd545--r874s-eth0" Dec 13 01:45:16.318323 containerd[1549]: 2024-12-13 01:45:16.314 [INFO][6208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:45:16.318323 containerd[1549]: 2024-12-13 01:45:16.316 [INFO][6202] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29" Dec 13 01:45:16.323399 containerd[1549]: time="2024-12-13T01:45:16.318595833Z" level=info msg="TearDown network for sandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\" successfully" Dec 13 01:45:16.328104 containerd[1549]: time="2024-12-13T01:45:16.328069734Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:45:16.335306 containerd[1549]: time="2024-12-13T01:45:16.335153674Z" level=info msg="RemovePodSandbox \"1b51c939f6f260d5a5a12a14aeaacebbc0c75f9bc1996a08c14d5fef36190d29\" returns successfully" Dec 13 01:45:16.338342 containerd[1549]: time="2024-12-13T01:45:16.338151770Z" level=info msg="StopPodSandbox for \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\"" Dec 13 01:45:16.411869 containerd[1549]: 2024-12-13 01:45:16.362 [WARNING][6226] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0", GenerateName:"calico-apiserver-d5bfd545-", Namespace:"calico-apiserver", SelfLink:"", UID:"91cdf967-f24e-4903-a5a6-9102cde06b2f", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5bfd545", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3", Pod:"calico-apiserver-d5bfd545-426d5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45970deb1b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:45:16.411869 containerd[1549]: 2024-12-13 01:45:16.362 [INFO][6226] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Dec 13 01:45:16.411869 containerd[1549]: 2024-12-13 01:45:16.362 [INFO][6226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" iface="eth0" netns="" Dec 13 01:45:16.411869 containerd[1549]: 2024-12-13 01:45:16.362 [INFO][6226] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Dec 13 01:45:16.411869 containerd[1549]: 2024-12-13 01:45:16.362 [INFO][6226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Dec 13 01:45:16.411869 containerd[1549]: 2024-12-13 01:45:16.404 [INFO][6232] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" HandleID="k8s-pod-network.2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Workload="localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0" Dec 13 01:45:16.411869 containerd[1549]: 2024-12-13 01:45:16.404 [INFO][6232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:45:16.411869 containerd[1549]: 2024-12-13 01:45:16.404 [INFO][6232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:45:16.411869 containerd[1549]: 2024-12-13 01:45:16.408 [WARNING][6232] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" HandleID="k8s-pod-network.2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Workload="localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0" Dec 13 01:45:16.411869 containerd[1549]: 2024-12-13 01:45:16.408 [INFO][6232] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" HandleID="k8s-pod-network.2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Workload="localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0" Dec 13 01:45:16.411869 containerd[1549]: 2024-12-13 01:45:16.409 [INFO][6232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:45:16.411869 containerd[1549]: 2024-12-13 01:45:16.410 [INFO][6226] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Dec 13 01:45:16.417600 containerd[1549]: time="2024-12-13T01:45:16.412350130Z" level=info msg="TearDown network for sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\" successfully" Dec 13 01:45:16.417600 containerd[1549]: time="2024-12-13T01:45:16.412368397Z" level=info msg="StopPodSandbox for \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\" returns successfully" Dec 13 01:45:16.417600 containerd[1549]: time="2024-12-13T01:45:16.412661751Z" level=info msg="RemovePodSandbox for \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\"" Dec 13 01:45:16.417600 containerd[1549]: time="2024-12-13T01:45:16.412676795Z" level=info msg="Forcibly stopping sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\"" Dec 13 01:45:16.500683 containerd[1549]: 2024-12-13 01:45:16.435 [WARNING][6250] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0", GenerateName:"calico-apiserver-d5bfd545-", Namespace:"calico-apiserver", SelfLink:"", UID:"91cdf967-f24e-4903-a5a6-9102cde06b2f", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5bfd545", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1563280be7dc4909077c6dc07459b22c19e6fc778fcb1e0fc1f74eaa58adc9d3", Pod:"calico-apiserver-d5bfd545-426d5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45970deb1b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:45:16.500683 containerd[1549]: 2024-12-13 01:45:16.436 [INFO][6250] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Dec 13 01:45:16.500683 containerd[1549]: 2024-12-13 01:45:16.436 [INFO][6250] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" iface="eth0" netns="" Dec 13 01:45:16.500683 containerd[1549]: 2024-12-13 01:45:16.436 [INFO][6250] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Dec 13 01:45:16.500683 containerd[1549]: 2024-12-13 01:45:16.436 [INFO][6250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Dec 13 01:45:16.500683 containerd[1549]: 2024-12-13 01:45:16.484 [INFO][6256] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" HandleID="k8s-pod-network.2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Workload="localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0" Dec 13 01:45:16.500683 containerd[1549]: 2024-12-13 01:45:16.485 [INFO][6256] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:45:16.500683 containerd[1549]: 2024-12-13 01:45:16.485 [INFO][6256] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:45:16.500683 containerd[1549]: 2024-12-13 01:45:16.488 [WARNING][6256] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" HandleID="k8s-pod-network.2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Workload="localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0" Dec 13 01:45:16.500683 containerd[1549]: 2024-12-13 01:45:16.488 [INFO][6256] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" HandleID="k8s-pod-network.2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Workload="localhost-k8s-calico--apiserver--d5bfd545--426d5-eth0" Dec 13 01:45:16.500683 containerd[1549]: 2024-12-13 01:45:16.498 [INFO][6256] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:45:16.500683 containerd[1549]: 2024-12-13 01:45:16.499 [INFO][6250] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3" Dec 13 01:45:16.501041 containerd[1549]: time="2024-12-13T01:45:16.500701489Z" level=info msg="TearDown network for sandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\" successfully" Dec 13 01:45:16.545571 containerd[1549]: time="2024-12-13T01:45:16.544526964Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:45:16.545571 containerd[1549]: time="2024-12-13T01:45:16.544769935Z" level=info msg="RemovePodSandbox \"2c0c3719b5b3791606a57e013658c600ff1762fbfbedffd3353ad12f0717cad3\" returns successfully" Dec 13 01:45:16.547144 containerd[1549]: time="2024-12-13T01:45:16.546618479Z" level=info msg="StopPodSandbox for \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\"" Dec 13 01:45:16.623763 containerd[1549]: 2024-12-13 01:45:16.591 [WARNING][6274] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0", GenerateName:"calico-kube-controllers-74d5794888-", Namespace:"calico-system", SelfLink:"", UID:"fec74b51-4f13-4ee0-a490-81de9f872b4f", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74d5794888", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8", Pod:"calico-kube-controllers-74d5794888-n2p4c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali17706e7d6a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:45:16.623763 containerd[1549]: 2024-12-13 01:45:16.591 [INFO][6274] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Dec 13 01:45:16.623763 containerd[1549]: 2024-12-13 01:45:16.591 [INFO][6274] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" iface="eth0" netns="" Dec 13 01:45:16.623763 containerd[1549]: 2024-12-13 01:45:16.591 [INFO][6274] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Dec 13 01:45:16.623763 containerd[1549]: 2024-12-13 01:45:16.591 [INFO][6274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Dec 13 01:45:16.623763 containerd[1549]: 2024-12-13 01:45:16.613 [INFO][6280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" HandleID="k8s-pod-network.dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Workload="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0" Dec 13 01:45:16.623763 containerd[1549]: 2024-12-13 01:45:16.613 [INFO][6280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:45:16.623763 containerd[1549]: 2024-12-13 01:45:16.613 [INFO][6280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:45:16.623763 containerd[1549]: 2024-12-13 01:45:16.620 [WARNING][6280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" HandleID="k8s-pod-network.dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Workload="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0" Dec 13 01:45:16.623763 containerd[1549]: 2024-12-13 01:45:16.620 [INFO][6280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" HandleID="k8s-pod-network.dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Workload="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0" Dec 13 01:45:16.623763 containerd[1549]: 2024-12-13 01:45:16.621 [INFO][6280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:45:16.623763 containerd[1549]: 2024-12-13 01:45:16.622 [INFO][6274] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Dec 13 01:45:16.623763 containerd[1549]: time="2024-12-13T01:45:16.623741282Z" level=info msg="TearDown network for sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\" successfully" Dec 13 01:45:16.624574 containerd[1549]: time="2024-12-13T01:45:16.623766883Z" level=info msg="StopPodSandbox for \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\" returns successfully" Dec 13 01:45:16.624574 containerd[1549]: time="2024-12-13T01:45:16.624111221Z" level=info msg="RemovePodSandbox for \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\"" Dec 13 01:45:16.624574 containerd[1549]: time="2024-12-13T01:45:16.624128237Z" level=info msg="Forcibly stopping sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\"" Dec 13 01:45:16.673053 containerd[1549]: 2024-12-13 01:45:16.653 [WARNING][6298] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0", GenerateName:"calico-kube-controllers-74d5794888-", Namespace:"calico-system", SelfLink:"", UID:"fec74b51-4f13-4ee0-a490-81de9f872b4f", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74d5794888", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"182bf7d3fade0f3d53c744ba54adbad85841dabd505b251b2a9b423a6d8fcab8", Pod:"calico-kube-controllers-74d5794888-n2p4c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali17706e7d6a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:45:16.673053 containerd[1549]: 2024-12-13 01:45:16.653 [INFO][6298] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Dec 13 01:45:16.673053 containerd[1549]: 2024-12-13 01:45:16.653 [INFO][6298] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" iface="eth0" netns="" Dec 13 01:45:16.673053 containerd[1549]: 2024-12-13 01:45:16.653 [INFO][6298] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Dec 13 01:45:16.673053 containerd[1549]: 2024-12-13 01:45:16.653 [INFO][6298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Dec 13 01:45:16.673053 containerd[1549]: 2024-12-13 01:45:16.666 [INFO][6304] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" HandleID="k8s-pod-network.dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Workload="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0" Dec 13 01:45:16.673053 containerd[1549]: 2024-12-13 01:45:16.666 [INFO][6304] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:45:16.673053 containerd[1549]: 2024-12-13 01:45:16.666 [INFO][6304] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:45:16.673053 containerd[1549]: 2024-12-13 01:45:16.670 [WARNING][6304] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" HandleID="k8s-pod-network.dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Workload="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0" Dec 13 01:45:16.673053 containerd[1549]: 2024-12-13 01:45:16.670 [INFO][6304] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" HandleID="k8s-pod-network.dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Workload="localhost-k8s-calico--kube--controllers--74d5794888--n2p4c-eth0" Dec 13 01:45:16.673053 containerd[1549]: 2024-12-13 01:45:16.670 [INFO][6304] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:45:16.673053 containerd[1549]: 2024-12-13 01:45:16.671 [INFO][6298] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305" Dec 13 01:45:16.674112 containerd[1549]: time="2024-12-13T01:45:16.673040883Z" level=info msg="TearDown network for sandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\" successfully" Dec 13 01:45:16.675093 containerd[1549]: time="2024-12-13T01:45:16.675055077Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:45:16.675212 containerd[1549]: time="2024-12-13T01:45:16.675173885Z" level=info msg="RemovePodSandbox \"dd4d4fcb03eacc5c96d2305a5848b5581729b0328b6cb50db7cd3ee2fcdc1305\" returns successfully" Dec 13 01:45:16.675810 containerd[1549]: time="2024-12-13T01:45:16.675676267Z" level=info msg="StopPodSandbox for \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\"" Dec 13 01:45:16.725437 containerd[1549]: 2024-12-13 01:45:16.702 [WARNING][6322] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"dd95d309-c209-4cf1-a636-b98ab7a31667", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b", Pod:"coredns-6f6b679f8f-jp7n7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb99f553843", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:45:16.725437 containerd[1549]: 2024-12-13 01:45:16.702 [INFO][6322] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Dec 13 01:45:16.725437 containerd[1549]: 2024-12-13 01:45:16.702 [INFO][6322] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" iface="eth0" netns="" Dec 13 01:45:16.725437 containerd[1549]: 2024-12-13 01:45:16.702 [INFO][6322] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Dec 13 01:45:16.725437 containerd[1549]: 2024-12-13 01:45:16.702 [INFO][6322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Dec 13 01:45:16.725437 containerd[1549]: 2024-12-13 01:45:16.716 [INFO][6328] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" HandleID="k8s-pod-network.ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Workload="localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0" Dec 13 01:45:16.725437 containerd[1549]: 2024-12-13 01:45:16.716 [INFO][6328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:45:16.725437 containerd[1549]: 2024-12-13 01:45:16.716 [INFO][6328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:45:16.725437 containerd[1549]: 2024-12-13 01:45:16.721 [WARNING][6328] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" HandleID="k8s-pod-network.ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Workload="localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0" Dec 13 01:45:16.725437 containerd[1549]: 2024-12-13 01:45:16.721 [INFO][6328] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" HandleID="k8s-pod-network.ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Workload="localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0" Dec 13 01:45:16.725437 containerd[1549]: 2024-12-13 01:45:16.723 [INFO][6328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:45:16.725437 containerd[1549]: 2024-12-13 01:45:16.724 [INFO][6322] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Dec 13 01:45:16.726570 containerd[1549]: time="2024-12-13T01:45:16.725444779Z" level=info msg="TearDown network for sandbox \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\" successfully" Dec 13 01:45:16.726570 containerd[1549]: time="2024-12-13T01:45:16.725460659Z" level=info msg="StopPodSandbox for \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\" returns successfully" Dec 13 01:45:16.726570 containerd[1549]: time="2024-12-13T01:45:16.725773629Z" level=info msg="RemovePodSandbox for \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\"" Dec 13 01:45:16.726570 containerd[1549]: time="2024-12-13T01:45:16.725791147Z" level=info msg="Forcibly stopping sandbox \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\"" Dec 13 01:45:16.791509 containerd[1549]: 2024-12-13 01:45:16.751 [WARNING][6347] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"dd95d309-c209-4cf1-a636-b98ab7a31667", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1068176250e6df36c711871aad9468eec7ecdf4e8ea67d29ceb79368ac9a80b", Pod:"coredns-6f6b679f8f-jp7n7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb99f553843", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:45:16.791509 containerd[1549]: 2024-12-13 01:45:16.751 [INFO][6347] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Dec 13 01:45:16.791509 containerd[1549]: 2024-12-13 01:45:16.751 [INFO][6347] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" iface="eth0" netns="" Dec 13 01:45:16.791509 containerd[1549]: 2024-12-13 01:45:16.751 [INFO][6347] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Dec 13 01:45:16.791509 containerd[1549]: 2024-12-13 01:45:16.751 [INFO][6347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Dec 13 01:45:16.791509 containerd[1549]: 2024-12-13 01:45:16.782 [INFO][6354] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" HandleID="k8s-pod-network.ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Workload="localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0" Dec 13 01:45:16.791509 containerd[1549]: 2024-12-13 01:45:16.782 [INFO][6354] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:45:16.791509 containerd[1549]: 2024-12-13 01:45:16.782 [INFO][6354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:45:16.791509 containerd[1549]: 2024-12-13 01:45:16.788 [WARNING][6354] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" HandleID="k8s-pod-network.ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Workload="localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0" Dec 13 01:45:16.791509 containerd[1549]: 2024-12-13 01:45:16.788 [INFO][6354] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" HandleID="k8s-pod-network.ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Workload="localhost-k8s-coredns--6f6b679f8f--jp7n7-eth0" Dec 13 01:45:16.791509 containerd[1549]: 2024-12-13 01:45:16.789 [INFO][6354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:45:16.791509 containerd[1549]: 2024-12-13 01:45:16.790 [INFO][6347] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39" Dec 13 01:45:16.791509 containerd[1549]: time="2024-12-13T01:45:16.791492929Z" level=info msg="TearDown network for sandbox \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\" successfully" Dec 13 01:45:16.794958 containerd[1549]: time="2024-12-13T01:45:16.794939888Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:45:16.795271 containerd[1549]: time="2024-12-13T01:45:16.794987262Z" level=info msg="RemovePodSandbox \"ea2e20dea3b54c3e33d898bf946a7a36fd4d263fbadec2d689531cec6dcb5a39\" returns successfully" Dec 13 01:45:16.795724 containerd[1549]: time="2024-12-13T01:45:16.795671106Z" level=info msg="StopPodSandbox for \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\"" Dec 13 01:45:16.838736 containerd[1549]: 2024-12-13 01:45:16.818 [WARNING][6372] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sgckt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be0402b2-9394-4ec8-b88a-518cccbc701b", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0", Pod:"csi-node-driver-sgckt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib2c4a92082e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:45:16.838736 containerd[1549]: 2024-12-13 01:45:16.818 [INFO][6372] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Dec 13 01:45:16.838736 containerd[1549]: 2024-12-13 01:45:16.818 [INFO][6372] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" iface="eth0" netns="" Dec 13 01:45:16.838736 containerd[1549]: 2024-12-13 01:45:16.818 [INFO][6372] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Dec 13 01:45:16.838736 containerd[1549]: 2024-12-13 01:45:16.818 [INFO][6372] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Dec 13 01:45:16.838736 containerd[1549]: 2024-12-13 01:45:16.832 [INFO][6378] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" HandleID="k8s-pod-network.d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Workload="localhost-k8s-csi--node--driver--sgckt-eth0" Dec 13 01:45:16.838736 containerd[1549]: 2024-12-13 01:45:16.832 [INFO][6378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:45:16.838736 containerd[1549]: 2024-12-13 01:45:16.832 [INFO][6378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:45:16.838736 containerd[1549]: 2024-12-13 01:45:16.835 [WARNING][6378] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" HandleID="k8s-pod-network.d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Workload="localhost-k8s-csi--node--driver--sgckt-eth0" Dec 13 01:45:16.838736 containerd[1549]: 2024-12-13 01:45:16.835 [INFO][6378] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" HandleID="k8s-pod-network.d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Workload="localhost-k8s-csi--node--driver--sgckt-eth0" Dec 13 01:45:16.838736 containerd[1549]: 2024-12-13 01:45:16.836 [INFO][6378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:45:16.838736 containerd[1549]: 2024-12-13 01:45:16.837 [INFO][6372] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Dec 13 01:45:16.840450 containerd[1549]: time="2024-12-13T01:45:16.838768233Z" level=info msg="TearDown network for sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\" successfully" Dec 13 01:45:16.840450 containerd[1549]: time="2024-12-13T01:45:16.838784580Z" level=info msg="StopPodSandbox for \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\" returns successfully" Dec 13 01:45:16.840450 containerd[1549]: time="2024-12-13T01:45:16.839103943Z" level=info msg="RemovePodSandbox for \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\"" Dec 13 01:45:16.840450 containerd[1549]: time="2024-12-13T01:45:16.839119341Z" level=info msg="Forcibly stopping sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\"" Dec 13 01:45:16.882553 containerd[1549]: 2024-12-13 01:45:16.861 [WARNING][6396] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sgckt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be0402b2-9394-4ec8-b88a-518cccbc701b", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"52969008ad526ad72a9aa34db91530463db9fe7361f00f761089ebb01a9bb5c0", Pod:"csi-node-driver-sgckt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib2c4a92082e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:45:16.882553 containerd[1549]: 2024-12-13 01:45:16.861 [INFO][6396] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Dec 13 01:45:16.882553 containerd[1549]: 2024-12-13 01:45:16.861 [INFO][6396] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" iface="eth0" netns="" Dec 13 01:45:16.882553 containerd[1549]: 2024-12-13 01:45:16.861 [INFO][6396] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Dec 13 01:45:16.882553 containerd[1549]: 2024-12-13 01:45:16.861 [INFO][6396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Dec 13 01:45:16.882553 containerd[1549]: 2024-12-13 01:45:16.876 [INFO][6402] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" HandleID="k8s-pod-network.d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Workload="localhost-k8s-csi--node--driver--sgckt-eth0" Dec 13 01:45:16.882553 containerd[1549]: 2024-12-13 01:45:16.876 [INFO][6402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:45:16.882553 containerd[1549]: 2024-12-13 01:45:16.876 [INFO][6402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:45:16.882553 containerd[1549]: 2024-12-13 01:45:16.879 [WARNING][6402] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" HandleID="k8s-pod-network.d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Workload="localhost-k8s-csi--node--driver--sgckt-eth0" Dec 13 01:45:16.882553 containerd[1549]: 2024-12-13 01:45:16.879 [INFO][6402] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" HandleID="k8s-pod-network.d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Workload="localhost-k8s-csi--node--driver--sgckt-eth0" Dec 13 01:45:16.882553 containerd[1549]: 2024-12-13 01:45:16.880 [INFO][6402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:45:16.882553 containerd[1549]: 2024-12-13 01:45:16.881 [INFO][6396] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586" Dec 13 01:45:16.885086 containerd[1549]: time="2024-12-13T01:45:16.882557936Z" level=info msg="TearDown network for sandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\" successfully" Dec 13 01:45:16.885086 containerd[1549]: time="2024-12-13T01:45:16.884052872Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:45:16.885086 containerd[1549]: time="2024-12-13T01:45:16.884099235Z" level=info msg="RemovePodSandbox \"d0f3e55a8915c6caa3fc6e37f20963d00b1e8e561b32c7cbcf5ef490c47e0586\" returns successfully" Dec 13 01:45:16.885086 containerd[1549]: time="2024-12-13T01:45:16.884387393Z" level=info msg="StopPodSandbox for \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\"" Dec 13 01:45:16.927763 containerd[1549]: 2024-12-13 01:45:16.907 [WARNING][6420] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"00e916a8-4e37-4540-9352-5c9af61a76e0", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b", Pod:"coredns-6f6b679f8f-mt9t2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5aa172434c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:45:16.927763 containerd[1549]: 2024-12-13 01:45:16.907 [INFO][6420] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Dec 13 01:45:16.927763 containerd[1549]: 2024-12-13 01:45:16.907 [INFO][6420] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" iface="eth0" netns="" Dec 13 01:45:16.927763 containerd[1549]: 2024-12-13 01:45:16.907 [INFO][6420] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Dec 13 01:45:16.927763 containerd[1549]: 2024-12-13 01:45:16.907 [INFO][6420] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Dec 13 01:45:16.927763 containerd[1549]: 2024-12-13 01:45:16.920 [INFO][6427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" HandleID="k8s-pod-network.9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Workload="localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0" Dec 13 01:45:16.927763 containerd[1549]: 2024-12-13 01:45:16.921 [INFO][6427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:45:16.927763 containerd[1549]: 2024-12-13 01:45:16.921 [INFO][6427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:45:16.927763 containerd[1549]: 2024-12-13 01:45:16.924 [WARNING][6427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" HandleID="k8s-pod-network.9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Workload="localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0" Dec 13 01:45:16.927763 containerd[1549]: 2024-12-13 01:45:16.924 [INFO][6427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" HandleID="k8s-pod-network.9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Workload="localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0" Dec 13 01:45:16.927763 containerd[1549]: 2024-12-13 01:45:16.925 [INFO][6427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:45:16.927763 containerd[1549]: 2024-12-13 01:45:16.926 [INFO][6420] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Dec 13 01:45:16.928991 containerd[1549]: time="2024-12-13T01:45:16.927788531Z" level=info msg="TearDown network for sandbox \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\" successfully" Dec 13 01:45:16.928991 containerd[1549]: time="2024-12-13T01:45:16.927804409Z" level=info msg="StopPodSandbox for \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\" returns successfully" Dec 13 01:45:16.928991 containerd[1549]: time="2024-12-13T01:45:16.928108367Z" level=info msg="RemovePodSandbox for \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\"" Dec 13 01:45:16.928991 containerd[1549]: time="2024-12-13T01:45:16.928124490Z" level=info msg="Forcibly stopping sandbox \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\"" Dec 13 01:45:16.986609 containerd[1549]: 2024-12-13 01:45:16.959 [WARNING][6446] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"00e916a8-4e37-4540-9352-5c9af61a76e0", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d22496edb868a08cdb0599e11068606ce65ee42ec8e9b6a849452d487f84bf2b", Pod:"coredns-6f6b679f8f-mt9t2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5aa172434c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:45:16.986609 containerd[1549]: 2024-12-13 01:45:16.959 [INFO][6446] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Dec 13 01:45:16.986609 containerd[1549]: 2024-12-13 01:45:16.959 [INFO][6446] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" iface="eth0" netns="" Dec 13 01:45:16.986609 containerd[1549]: 2024-12-13 01:45:16.959 [INFO][6446] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Dec 13 01:45:16.986609 containerd[1549]: 2024-12-13 01:45:16.959 [INFO][6446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Dec 13 01:45:16.986609 containerd[1549]: 2024-12-13 01:45:16.979 [INFO][6452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" HandleID="k8s-pod-network.9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Workload="localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0" Dec 13 01:45:16.986609 containerd[1549]: 2024-12-13 01:45:16.980 [INFO][6452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:45:16.986609 containerd[1549]: 2024-12-13 01:45:16.980 [INFO][6452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:45:16.986609 containerd[1549]: 2024-12-13 01:45:16.983 [WARNING][6452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" HandleID="k8s-pod-network.9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Workload="localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0"
Dec 13 01:45:16.986609 containerd[1549]: 2024-12-13 01:45:16.983 [INFO][6452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" HandleID="k8s-pod-network.9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2" Workload="localhost-k8s-coredns--6f6b679f8f--mt9t2-eth0"
Dec 13 01:45:16.986609 containerd[1549]: 2024-12-13 01:45:16.984 [INFO][6452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:45:16.986609 containerd[1549]: 2024-12-13 01:45:16.985 [INFO][6446] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2"
Dec 13 01:45:16.987193 containerd[1549]: time="2024-12-13T01:45:16.986610507Z" level=info msg="TearDown network for sandbox \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\" successfully"
Dec 13 01:45:16.987972 containerd[1549]: time="2024-12-13T01:45:16.987954547Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:45:16.988016 containerd[1549]: time="2024-12-13T01:45:16.988007289Z" level=info msg="RemovePodSandbox \"9043f51f9cf12682e5ede1b85f22e94002d30ae7f497b0aa4f789db7f2fdf6d2\" returns successfully"
Dec 13 01:45:16.988385 containerd[1549]: time="2024-12-13T01:45:16.988369969Z" level=info msg="StopPodSandbox for \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\""
Dec 13 01:45:16.988441 containerd[1549]: time="2024-12-13T01:45:16.988427878Z" level=info msg="TearDown network for sandbox \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\" successfully"
Dec 13 01:45:16.988441 containerd[1549]: time="2024-12-13T01:45:16.988438672Z" level=info msg="StopPodSandbox for \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\" returns successfully"
Dec 13 01:45:16.988638 containerd[1549]: time="2024-12-13T01:45:16.988624683Z" level=info msg="RemovePodSandbox for \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\""
Dec 13 01:45:16.988668 containerd[1549]: time="2024-12-13T01:45:16.988639215Z" level=info msg="Forcibly stopping sandbox \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\""
Dec 13 01:45:16.988693 containerd[1549]: time="2024-12-13T01:45:16.988685539Z" level=info msg="TearDown network for sandbox \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\" successfully"
Dec 13 01:45:16.994215 containerd[1549]: time="2024-12-13T01:45:16.994173648Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:45:16.994288 containerd[1549]: time="2024-12-13T01:45:16.994234347Z" level=info msg="RemovePodSandbox \"7b16b20863325d422e1cf97f308b639d2ee595ff01860faaacc849be4300fb9d\" returns successfully"
Dec 13 01:45:18.646462 systemd[1]: Started sshd@20-139.178.70.108:22-139.178.89.65:50450.service - OpenSSH per-connection server daemon (139.178.89.65:50450).
Dec 13 01:45:18.816470 sshd[6462]: Accepted publickey for core from 139.178.89.65 port 50450 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY
Dec 13 01:45:18.817617 sshd[6462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:45:18.820897 systemd-logind[1526]: New session 23 of user core.
Dec 13 01:45:18.831148 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:45:19.162888 sshd[6462]: pam_unix(sshd:session): session closed for user core
Dec 13 01:45:19.164775 systemd-logind[1526]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:45:19.164883 systemd[1]: sshd@20-139.178.70.108:22-139.178.89.65:50450.service: Deactivated successfully.
Dec 13 01:45:19.166286 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:45:19.167522 systemd-logind[1526]: Removed session 23.
Dec 13 01:45:24.181178 systemd[1]: Started sshd@21-139.178.70.108:22-139.178.89.65:50456.service - OpenSSH per-connection server daemon (139.178.89.65:50456).
Dec 13 01:45:24.246416 sshd[6497]: Accepted publickey for core from 139.178.89.65 port 50456 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY
Dec 13 01:45:24.254428 sshd[6497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:45:24.265173 systemd-logind[1526]: New session 24 of user core.
Dec 13 01:45:24.270114 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 01:45:24.644175 sshd[6497]: pam_unix(sshd:session): session closed for user core
Dec 13 01:45:24.654853 systemd[1]: sshd@21-139.178.70.108:22-139.178.89.65:50456.service: Deactivated successfully.
Dec 13 01:45:24.656176 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 01:45:24.657635 systemd-logind[1526]: Session 24 logged out. Waiting for processes to exit.
Dec 13 01:45:24.659187 systemd-logind[1526]: Removed session 24.
Dec 13 01:45:29.655351 systemd[1]: Started sshd@22-139.178.70.108:22-139.178.89.65:54816.service - OpenSSH per-connection server daemon (139.178.89.65:54816).
Dec 13 01:45:29.735469 sshd[6536]: Accepted publickey for core from 139.178.89.65 port 54816 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY
Dec 13 01:45:29.736403 sshd[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:45:29.739233 systemd-logind[1526]: New session 25 of user core.
Dec 13 01:45:29.743083 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 01:45:30.863096 sshd[6536]: pam_unix(sshd:session): session closed for user core
Dec 13 01:45:30.865649 systemd-logind[1526]: Session 25 logged out. Waiting for processes to exit.
Dec 13 01:45:30.865806 systemd[1]: sshd@22-139.178.70.108:22-139.178.89.65:54816.service: Deactivated successfully.
Dec 13 01:45:30.867177 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 01:45:30.867782 systemd-logind[1526]: Removed session 25.