Nov 12 20:43:09.729118 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 12 20:43:09.729136 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:43:09.729143 kernel: Disabled fast string operations
Nov 12 20:43:09.729147 kernel: BIOS-provided physical RAM map:
Nov 12 20:43:09.729151 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Nov 12 20:43:09.729155 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Nov 12 20:43:09.729161 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Nov 12 20:43:09.729166 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Nov 12 20:43:09.729170 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Nov 12 20:43:09.729174 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Nov 12 20:43:09.729178 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Nov 12 20:43:09.729183 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Nov 12 20:43:09.729187 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Nov 12 20:43:09.729191 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Nov 12 20:43:09.729198 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Nov 12 20:43:09.729237 kernel: NX (Execute Disable) protection: active
Nov 12 20:43:09.729243 kernel: APIC: Static calls initialized
Nov 12 20:43:09.729248 kernel: SMBIOS 2.7 present.
Nov 12 20:43:09.729253 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Nov 12 20:43:09.729258 kernel: vmware: hypercall mode: 0x00
Nov 12 20:43:09.729263 kernel: Hypervisor detected: VMware
Nov 12 20:43:09.729267 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Nov 12 20:43:09.729274 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Nov 12 20:43:09.729278 kernel: vmware: using clock offset of 4463714483 ns
Nov 12 20:43:09.729283 kernel: tsc: Detected 3408.000 MHz processor
Nov 12 20:43:09.729289 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 20:43:09.729294 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 20:43:09.729299 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Nov 12 20:43:09.729304 kernel: total RAM covered: 3072M
Nov 12 20:43:09.729309 kernel: Found optimal setting for mtrr clean up
Nov 12 20:43:09.729316 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Nov 12 20:43:09.729323 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Nov 12 20:43:09.729328 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 20:43:09.729333 kernel: Using GB pages for direct mapping
Nov 12 20:43:09.729338 kernel: ACPI: Early table checksum verification disabled
Nov 12 20:43:09.729343 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Nov 12 20:43:09.729348 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Nov 12 20:43:09.729353 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Nov 12 20:43:09.729358 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Nov 12 20:43:09.729363 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Nov 12 20:43:09.729371 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Nov 12 20:43:09.729376 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Nov 12 20:43:09.729382 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Nov 12 20:43:09.729387 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Nov 12 20:43:09.729392 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Nov 12 20:43:09.729399 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Nov 12 20:43:09.729404 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Nov 12 20:43:09.729409 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Nov 12 20:43:09.729414 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Nov 12 20:43:09.729419 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Nov 12 20:43:09.729424 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Nov 12 20:43:09.729429 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Nov 12 20:43:09.729435 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Nov 12 20:43:09.729440 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Nov 12 20:43:09.729445 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Nov 12 20:43:09.729451 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Nov 12 20:43:09.729456 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Nov 12 20:43:09.729461 kernel: system APIC only can use physical flat
Nov 12 20:43:09.729466 kernel: APIC: Switched APIC routing to: physical flat
Nov 12 20:43:09.729471 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 12 20:43:09.729477 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Nov 12 20:43:09.729482 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Nov 12 20:43:09.729487 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Nov 12 20:43:09.729492 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Nov 12 20:43:09.729498 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Nov 12 20:43:09.729503 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Nov 12 20:43:09.729508 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Nov 12 20:43:09.729513 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Nov 12 20:43:09.729518 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Nov 12 20:43:09.729523 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Nov 12 20:43:09.729528 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Nov 12 20:43:09.729533 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Nov 12 20:43:09.729538 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Nov 12 20:43:09.729543 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Nov 12 20:43:09.729549 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Nov 12 20:43:09.729555 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Nov 12 20:43:09.729560 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Nov 12 20:43:09.729565 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Nov 12 20:43:09.729570 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Nov 12 20:43:09.729574 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Nov 12 20:43:09.729580 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Nov 12 20:43:09.729585 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Nov 12 20:43:09.729590 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Nov 12 20:43:09.729594 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Nov 12 20:43:09.729601 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Nov 12 20:43:09.729606 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Nov 12 20:43:09.729611 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Nov 12 20:43:09.729615 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Nov 12 20:43:09.729621 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Nov 12 20:43:09.729625 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Nov 12 20:43:09.729630 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Nov 12 20:43:09.729635 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Nov 12 20:43:09.729640 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Nov 12 20:43:09.729646 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Nov 12 20:43:09.729651 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Nov 12 20:43:09.729657 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Nov 12 20:43:09.729662 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Nov 12 20:43:09.729667 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Nov 12 20:43:09.729672 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Nov 12 20:43:09.729677 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Nov 12 20:43:09.729682 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Nov 12 20:43:09.729687 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Nov 12 20:43:09.729692 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Nov 12 20:43:09.729697 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Nov 12 20:43:09.729702 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Nov 12 20:43:09.729708 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Nov 12 20:43:09.729714 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Nov 12 20:43:09.729718 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Nov 12 20:43:09.729724 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Nov 12 20:43:09.729729 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Nov 12 20:43:09.729734 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Nov 12 20:43:09.729739 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Nov 12 20:43:09.729744 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Nov 12 20:43:09.729749 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Nov 12 20:43:09.729754 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Nov 12 20:43:09.729760 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Nov 12 20:43:09.729766 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Nov 12 20:43:09.729771 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Nov 12 20:43:09.729780 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Nov 12 20:43:09.729785 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Nov 12 20:43:09.729791 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Nov 12 20:43:09.729796 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Nov 12 20:43:09.729802 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Nov 12 20:43:09.729808 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Nov 12 20:43:09.729813 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Nov 12 20:43:09.729819 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Nov 12 20:43:09.729824 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Nov 12 20:43:09.729829 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Nov 12 20:43:09.729835 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Nov 12 20:43:09.729840 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Nov 12 20:43:09.729846 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Nov 12 20:43:09.729851 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Nov 12 20:43:09.729856 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Nov 12 20:43:09.729863 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Nov 12 20:43:09.729868 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Nov 12 20:43:09.729873 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Nov 12 20:43:09.729879 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Nov 12 20:43:09.729884 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Nov 12 20:43:09.729889 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Nov 12 20:43:09.729895 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Nov 12 20:43:09.729901 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Nov 12 20:43:09.729906 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Nov 12 20:43:09.729911 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Nov 12 20:43:09.729917 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Nov 12 20:43:09.729923 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Nov 12 20:43:09.729928 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Nov 12 20:43:09.729934 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Nov 12 20:43:09.729939 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Nov 12 20:43:09.729944 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Nov 12 20:43:09.729950 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Nov 12 20:43:09.729955 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Nov 12 20:43:09.729960 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Nov 12 20:43:09.729966 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Nov 12 20:43:09.729971 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Nov 12 20:43:09.729977 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Nov 12 20:43:09.729983 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Nov 12 20:43:09.729988 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Nov 12 20:43:09.729994 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Nov 12 20:43:09.729999 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Nov 12 20:43:09.730004 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Nov 12 20:43:09.730010 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Nov 12 20:43:09.730015 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Nov 12 20:43:09.730020 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Nov 12 20:43:09.730026 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Nov 12 20:43:09.730032 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Nov 12 20:43:09.730038 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Nov 12 20:43:09.730043 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Nov 12 20:43:09.730048 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Nov 12 20:43:09.730054 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Nov 12 20:43:09.730059 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Nov 12 20:43:09.730064 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Nov 12 20:43:09.730070 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Nov 12 20:43:09.730075 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Nov 12 20:43:09.730080 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Nov 12 20:43:09.730087 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Nov 12 20:43:09.730092 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Nov 12 20:43:09.730097 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Nov 12 20:43:09.730103 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Nov 12 20:43:09.730108 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Nov 12 20:43:09.730113 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Nov 12 20:43:09.730118 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Nov 12 20:43:09.730124 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Nov 12 20:43:09.730129 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Nov 12 20:43:09.730135 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Nov 12 20:43:09.730141 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Nov 12 20:43:09.730146 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Nov 12 20:43:09.730152 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Nov 12 20:43:09.730157 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 12 20:43:09.730163 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 12 20:43:09.730168 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Nov 12 20:43:09.730174 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Nov 12 20:43:09.730179 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Nov 12 20:43:09.730185 kernel: Zone ranges:
Nov 12 20:43:09.730191 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 20:43:09.730198 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Nov 12 20:43:09.730518 kernel: Normal empty
Nov 12 20:43:09.730525 kernel: Movable zone start for each node
Nov 12 20:43:09.730531 kernel: Early memory node ranges
Nov 12 20:43:09.730536 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Nov 12 20:43:09.730542 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Nov 12 20:43:09.730548 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Nov 12 20:43:09.730553 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Nov 12 20:43:09.730558 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:43:09.730566 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Nov 12 20:43:09.730572 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Nov 12 20:43:09.730577 kernel: ACPI: PM-Timer IO Port: 0x1008
Nov 12 20:43:09.730583 kernel: system APIC only can use physical flat
Nov 12 20:43:09.730588 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Nov 12 20:43:09.730593 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Nov 12 20:43:09.730599 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Nov 12 20:43:09.730604 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Nov 12 20:43:09.730610 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Nov 12 20:43:09.730615 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Nov 12 20:43:09.730622 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Nov 12 20:43:09.730627 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Nov 12 20:43:09.730633 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Nov 12 20:43:09.730638 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Nov 12 20:43:09.730644 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Nov 12 20:43:09.730649 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Nov 12 20:43:09.730655 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Nov 12 20:43:09.730660 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Nov 12 20:43:09.730666 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Nov 12 20:43:09.730672 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Nov 12 20:43:09.730678 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Nov 12 20:43:09.730683 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Nov 12 20:43:09.730688 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Nov 12 20:43:09.730694 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Nov 12 20:43:09.730699 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Nov 12 20:43:09.730705 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Nov 12 20:43:09.730711 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Nov 12 20:43:09.730716 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Nov 12 20:43:09.730721 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Nov 12 20:43:09.730728 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Nov 12 20:43:09.730734 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Nov 12 20:43:09.730739 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Nov 12 20:43:09.730744 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Nov 12 20:43:09.730750 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Nov 12 20:43:09.730755 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Nov 12 20:43:09.730761 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Nov 12 20:43:09.730766 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Nov 12 20:43:09.730772 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Nov 12 20:43:09.730777 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Nov 12 20:43:09.730784 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Nov 12 20:43:09.730789 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Nov 12 20:43:09.730795 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Nov 12 20:43:09.730800 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Nov 12 20:43:09.730805 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Nov 12 20:43:09.730811 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Nov 12 20:43:09.730816 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Nov 12 20:43:09.730822 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Nov 12 20:43:09.730827 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Nov 12 20:43:09.730834 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Nov 12 20:43:09.730839 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Nov 12 20:43:09.730845 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Nov 12 20:43:09.730850 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Nov 12 20:43:09.730856 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Nov 12 20:43:09.730861 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Nov 12 20:43:09.730867 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Nov 12 20:43:09.730872 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Nov 12 20:43:09.730877 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Nov 12 20:43:09.730883 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Nov 12 20:43:09.730890 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Nov 12 20:43:09.730895 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Nov 12 20:43:09.730900 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Nov 12 20:43:09.730906 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Nov 12 20:43:09.730911 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Nov 12 20:43:09.730917 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Nov 12 20:43:09.730922 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Nov 12 20:43:09.730928 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Nov 12 20:43:09.730933 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Nov 12 20:43:09.730939 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Nov 12 20:43:09.730946 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Nov 12 20:43:09.730951 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Nov 12 20:43:09.730956 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Nov 12 20:43:09.730962 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Nov 12 20:43:09.730967 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Nov 12 20:43:09.730973 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Nov 12 20:43:09.730978 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Nov 12 20:43:09.730984 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Nov 12 20:43:09.730989 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Nov 12 20:43:09.730994 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Nov 12 20:43:09.731001 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Nov 12 20:43:09.731006 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Nov 12 20:43:09.731012 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Nov 12 20:43:09.731017 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Nov 12 20:43:09.731023 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Nov 12 20:43:09.731028 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Nov 12 20:43:09.731034 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Nov 12 20:43:09.731039 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Nov 12 20:43:09.731045 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Nov 12 20:43:09.731051 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Nov 12 20:43:09.731056 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Nov 12 20:43:09.731062 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Nov 12 20:43:09.731067 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Nov 12 20:43:09.731073 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Nov 12 20:43:09.731078 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Nov 12 20:43:09.731084 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Nov 12 20:43:09.731089 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Nov 12 20:43:09.731095 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Nov 12 20:43:09.731100 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Nov 12 20:43:09.731107 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Nov 12 20:43:09.731112 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Nov 12 20:43:09.731117 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Nov 12 20:43:09.731123 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Nov 12 20:43:09.731128 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Nov 12 20:43:09.731134 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Nov 12 20:43:09.731139 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Nov 12 20:43:09.731144 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Nov 12 20:43:09.731150 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Nov 12 20:43:09.731156 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Nov 12 20:43:09.731162 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Nov 12 20:43:09.731168 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Nov 12 20:43:09.731173 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Nov 12 20:43:09.731179 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Nov 12 20:43:09.731184 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Nov 12 20:43:09.731190 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Nov 12 20:43:09.731195 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Nov 12 20:43:09.731200 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Nov 12 20:43:09.731212 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Nov 12 20:43:09.731219 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Nov 12 20:43:09.731224 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Nov 12 20:43:09.731230 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Nov 12 20:43:09.731235 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Nov 12 20:43:09.731241 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Nov 12 20:43:09.731246 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Nov 12 20:43:09.731251 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Nov 12 20:43:09.731257 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Nov 12 20:43:09.731262 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Nov 12 20:43:09.731268 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Nov 12 20:43:09.731274 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Nov 12 20:43:09.731280 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Nov 12 20:43:09.731285 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Nov 12 20:43:09.731291 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Nov 12 20:43:09.731296 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Nov 12 20:43:09.731302 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Nov 12 20:43:09.731307 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Nov 12 20:43:09.731312 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Nov 12 20:43:09.731318 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 20:43:09.731325 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Nov 12 20:43:09.731330 kernel: TSC deadline timer available
Nov 12 20:43:09.731335 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Nov 12 20:43:09.731341 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Nov 12 20:43:09.731346 kernel: Booting paravirtualized kernel on VMware hypervisor
Nov 12 20:43:09.731352 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 20:43:09.731358 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Nov 12 20:43:09.731363 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Nov 12 20:43:09.731369 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Nov 12 20:43:09.731376 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Nov 12 20:43:09.731381 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Nov 12 20:43:09.731386 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Nov 12 20:43:09.731399 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Nov 12 20:43:09.731405 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Nov 12 20:43:09.731433 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Nov 12 20:43:09.731441 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Nov 12 20:43:09.731455 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Nov 12 20:43:09.731461 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Nov 12 20:43:09.731468 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Nov 12 20:43:09.731474 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Nov 12 20:43:09.731480 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Nov 12 20:43:09.731485 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Nov 12 20:43:09.731491 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Nov 12 20:43:09.731497 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Nov 12 20:43:09.731502 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Nov 12 20:43:09.731509 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:43:09.731516 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 20:43:09.731522 kernel: random: crng init done
Nov 12 20:43:09.731528 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Nov 12 20:43:09.731534 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Nov 12 20:43:09.731540 kernel: printk: log_buf_len min size: 262144 bytes
Nov 12 20:43:09.731545 kernel: printk: log_buf_len: 1048576 bytes
Nov 12 20:43:09.731552 kernel: printk: early log buf free: 239648(91%)
Nov 12 20:43:09.731558 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 20:43:09.731564 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 12 20:43:09.731571 kernel: Fallback order for Node 0: 0
Nov 12 20:43:09.731577 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Nov 12 20:43:09.731583 kernel: Policy zone: DMA32
Nov 12 20:43:09.731588 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 20:43:09.731595 kernel: Memory: 1936324K/2096628K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 160044K reserved, 0K cma-reserved)
Nov 12 20:43:09.731603 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Nov 12 20:43:09.731609 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 12 20:43:09.731615 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 20:43:09.731620 kernel: Dynamic Preempt: voluntary
Nov 12 20:43:09.731626 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 20:43:09.731632 kernel: rcu: RCU event tracing is enabled.
Nov 12 20:43:09.731638 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Nov 12 20:43:09.731644 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 20:43:09.731650 kernel: Rude variant of Tasks RCU enabled.
Nov 12 20:43:09.731656 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 20:43:09.731663 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 20:43:09.731669 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Nov 12 20:43:09.731675 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Nov 12 20:43:09.731681 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Nov 12 20:43:09.731687 kernel: Console: colour VGA+ 80x25
Nov 12 20:43:09.731693 kernel: printk: console [tty0] enabled
Nov 12 20:43:09.731699 kernel: printk: console [ttyS0] enabled
Nov 12 20:43:09.731705 kernel: ACPI: Core revision 20230628
Nov 12 20:43:09.731711 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Nov 12 20:43:09.731718 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 20:43:09.731724 kernel: x2apic enabled
Nov 12 20:43:09.731730 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 12 20:43:09.731736 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 12 20:43:09.731741 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Nov 12 20:43:09.731747 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Nov 12 20:43:09.731753 kernel: Disabled fast string operations
Nov 12 20:43:09.731759 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 12 20:43:09.731765 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Nov 12 20:43:09.731772 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 20:43:09.731778 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Nov 12 20:43:09.731784 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Nov 12 20:43:09.731790 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Nov 12 20:43:09.731795 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 20:43:09.731801 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Nov 12 20:43:09.731807 kernel: RETBleed: Mitigation: Enhanced IBRS
Nov 12 20:43:09.731813 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 12 20:43:09.731819 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 12 20:43:09.731826 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 12 20:43:09.731832 kernel: SRBDS: Unknown: Dependent on hypervisor status
Nov 12 20:43:09.731838 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 12 20:43:09.731844 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 20:43:09.731849 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 20:43:09.731855 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 20:43:09.731861 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 20:43:09.731867 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 12 20:43:09.731873 kernel: Freeing SMP alternatives memory: 32K
Nov 12 20:43:09.731880 kernel: pid_max: default: 131072 minimum: 1024
Nov 12 20:43:09.731886 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 20:43:09.731892 kernel: landlock: Up and running.
Nov 12 20:43:09.731898 kernel: SELinux: Initializing.
Nov 12 20:43:09.731904 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 12 20:43:09.731911 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 12 20:43:09.731917 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Nov 12 20:43:09.731922 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 12 20:43:09.731928 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 12 20:43:09.731936 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 12 20:43:09.731942 kernel: Performance Events: Skylake events, core PMU driver.
Nov 12 20:43:09.731947 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Nov 12 20:43:09.731954 kernel: core: CPUID marked event: 'instructions' unavailable
Nov 12 20:43:09.731959 kernel: core: CPUID marked event: 'bus cycles' unavailable
Nov 12 20:43:09.731965 kernel: core: CPUID marked event: 'cache references' unavailable
Nov 12 20:43:09.731971 kernel: core: CPUID marked event: 'cache misses' unavailable
Nov 12 20:43:09.731976 kernel: core: CPUID marked event: 'branch instructions' unavailable
Nov 12 20:43:09.731983 kernel: core: CPUID marked event: 'branch misses' unavailable
Nov 12 20:43:09.731989 kernel: ... version: 1
Nov 12 20:43:09.731995 kernel: ... bit width: 48
Nov 12 20:43:09.732000 kernel: ... generic registers: 4
Nov 12 20:43:09.732006 kernel: ... value mask: 0000ffffffffffff
Nov 12 20:43:09.732012 kernel: ...
max period: 000000007fffffff Nov 12 20:43:09.732018 kernel: ... fixed-purpose events: 0 Nov 12 20:43:09.732023 kernel: ... event mask: 000000000000000f Nov 12 20:43:09.732029 kernel: signal: max sigframe size: 1776 Nov 12 20:43:09.732036 kernel: rcu: Hierarchical SRCU implementation. Nov 12 20:43:09.732042 kernel: rcu: Max phase no-delay instances is 400. Nov 12 20:43:09.732048 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 12 20:43:09.732054 kernel: smp: Bringing up secondary CPUs ... Nov 12 20:43:09.732060 kernel: smpboot: x86: Booting SMP configuration: Nov 12 20:43:09.732065 kernel: .... node #0, CPUs: #1 Nov 12 20:43:09.732071 kernel: Disabled fast string operations Nov 12 20:43:09.732077 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Nov 12 20:43:09.732083 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Nov 12 20:43:09.732089 kernel: smp: Brought up 1 node, 2 CPUs Nov 12 20:43:09.732096 kernel: smpboot: Max logical packages: 128 Nov 12 20:43:09.732102 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Nov 12 20:43:09.732107 kernel: devtmpfs: initialized Nov 12 20:43:09.732113 kernel: x86/mm: Memory block size: 128MB Nov 12 20:43:09.732119 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Nov 12 20:43:09.732125 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 20:43:09.732131 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Nov 12 20:43:09.732137 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 20:43:09.732143 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 20:43:09.732150 kernel: audit: initializing netlink subsys (disabled) Nov 12 20:43:09.732156 kernel: audit: type=2000 audit(1731444188.067:1): state=initialized audit_enabled=0 res=1 Nov 12 20:43:09.732162 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 20:43:09.732167 
kernel: thermal_sys: Registered thermal governor 'user_space' Nov 12 20:43:09.732173 kernel: cpuidle: using governor menu Nov 12 20:43:09.732179 kernel: Simple Boot Flag at 0x36 set to 0x80 Nov 12 20:43:09.732185 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 20:43:09.732190 kernel: dca service started, version 1.12.1 Nov 12 20:43:09.732196 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Nov 12 20:43:09.732243 kernel: PCI: Using configuration type 1 for base access Nov 12 20:43:09.732250 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 12 20:43:09.732256 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 12 20:43:09.732262 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 12 20:43:09.732268 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 20:43:09.732274 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 20:43:09.732280 kernel: ACPI: Added _OSI(Module Device) Nov 12 20:43:09.732285 kernel: ACPI: Added _OSI(Processor Device) Nov 12 20:43:09.732291 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 20:43:09.732299 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 20:43:09.732305 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 12 20:43:09.732311 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Nov 12 20:43:09.732317 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 12 20:43:09.732322 kernel: ACPI: Interpreter enabled Nov 12 20:43:09.732328 kernel: ACPI: PM: (supports S0 S1 S5) Nov 12 20:43:09.732334 kernel: ACPI: Using IOAPIC for interrupt routing Nov 12 20:43:09.732340 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 12 20:43:09.732346 kernel: PCI: Using E820 reservations for host bridge windows Nov 12 20:43:09.732354 kernel: ACPI: Enabled 4 
GPEs in block 00 to 0F Nov 12 20:43:09.732360 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Nov 12 20:43:09.732504 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 12 20:43:09.732581 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Nov 12 20:43:09.732636 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Nov 12 20:43:09.732645 kernel: PCI host bridge to bus 0000:00 Nov 12 20:43:09.732696 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 12 20:43:09.732744 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Nov 12 20:43:09.732788 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 12 20:43:09.732831 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 12 20:43:09.732874 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Nov 12 20:43:09.732917 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Nov 12 20:43:09.732974 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Nov 12 20:43:09.733032 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Nov 12 20:43:09.733085 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Nov 12 20:43:09.733142 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Nov 12 20:43:09.733191 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Nov 12 20:43:09.733281 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Nov 12 20:43:09.733357 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Nov 12 20:43:09.733409 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Nov 12 20:43:09.733460 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Nov 12 20:43:09.733513 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Nov 12 20:43:09.733562 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed 
by PIIX4 ACPI Nov 12 20:43:09.733611 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Nov 12 20:43:09.733662 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Nov 12 20:43:09.733710 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Nov 12 20:43:09.733761 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Nov 12 20:43:09.733812 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Nov 12 20:43:09.733889 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Nov 12 20:43:09.733945 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Nov 12 20:43:09.733993 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Nov 12 20:43:09.734043 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Nov 12 20:43:09.734091 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 12 20:43:09.734146 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Nov 12 20:43:09.734227 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.734279 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.734333 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.734383 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.734440 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.734494 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.734549 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.734599 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.734652 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.734702 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.734755 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.734807 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot 
D3cold Nov 12 20:43:09.734860 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.734910 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.734963 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735012 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.735065 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735117 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.735172 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735231 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.735285 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735334 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.735390 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735442 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.735495 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735544 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.735596 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735646 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.735698 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735751 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.735804 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735854 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.735906 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735956 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736008 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.736060 kernel: pci 0000:00:17.1: 
PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736113 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.736163 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736226 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.736279 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736334 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.736387 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736440 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.736489 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736542 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.736592 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736645 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.736694 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736749 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.736799 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736851 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.736900 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736953 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.737003 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.737057 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.737107 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.737162 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.737224 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.737280 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Nov 12 
20:43:09.737330 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.737386 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.737441 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.737494 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.737544 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.737595 kernel: pci_bus 0000:01: extended config space not accessible Nov 12 20:43:09.737645 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 12 20:43:09.737696 kernel: pci_bus 0000:02: extended config space not accessible Nov 12 20:43:09.737707 kernel: acpiphp: Slot [32] registered Nov 12 20:43:09.737714 kernel: acpiphp: Slot [33] registered Nov 12 20:43:09.737720 kernel: acpiphp: Slot [34] registered Nov 12 20:43:09.737726 kernel: acpiphp: Slot [35] registered Nov 12 20:43:09.737731 kernel: acpiphp: Slot [36] registered Nov 12 20:43:09.737737 kernel: acpiphp: Slot [37] registered Nov 12 20:43:09.737743 kernel: acpiphp: Slot [38] registered Nov 12 20:43:09.737749 kernel: acpiphp: Slot [39] registered Nov 12 20:43:09.737756 kernel: acpiphp: Slot [40] registered Nov 12 20:43:09.737762 kernel: acpiphp: Slot [41] registered Nov 12 20:43:09.737768 kernel: acpiphp: Slot [42] registered Nov 12 20:43:09.737773 kernel: acpiphp: Slot [43] registered Nov 12 20:43:09.737779 kernel: acpiphp: Slot [44] registered Nov 12 20:43:09.737785 kernel: acpiphp: Slot [45] registered Nov 12 20:43:09.737790 kernel: acpiphp: Slot [46] registered Nov 12 20:43:09.737796 kernel: acpiphp: Slot [47] registered Nov 12 20:43:09.737802 kernel: acpiphp: Slot [48] registered Nov 12 20:43:09.737807 kernel: acpiphp: Slot [49] registered Nov 12 20:43:09.737815 kernel: acpiphp: Slot [50] registered Nov 12 20:43:09.737820 kernel: acpiphp: Slot [51] registered Nov 12 20:43:09.737826 kernel: acpiphp: Slot [52] registered Nov 12 20:43:09.737832 kernel: acpiphp: Slot [53] registered 
Nov 12 20:43:09.737838 kernel: acpiphp: Slot [54] registered Nov 12 20:43:09.737843 kernel: acpiphp: Slot [55] registered Nov 12 20:43:09.737849 kernel: acpiphp: Slot [56] registered Nov 12 20:43:09.737855 kernel: acpiphp: Slot [57] registered Nov 12 20:43:09.737861 kernel: acpiphp: Slot [58] registered Nov 12 20:43:09.737868 kernel: acpiphp: Slot [59] registered Nov 12 20:43:09.737874 kernel: acpiphp: Slot [60] registered Nov 12 20:43:09.737880 kernel: acpiphp: Slot [61] registered Nov 12 20:43:09.737886 kernel: acpiphp: Slot [62] registered Nov 12 20:43:09.737891 kernel: acpiphp: Slot [63] registered Nov 12 20:43:09.737940 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Nov 12 20:43:09.737989 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 12 20:43:09.738038 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 12 20:43:09.738087 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 12 20:43:09.738138 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Nov 12 20:43:09.738188 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Nov 12 20:43:09.738569 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Nov 12 20:43:09.738624 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Nov 12 20:43:09.738675 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Nov 12 20:43:09.738730 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Nov 12 20:43:09.738782 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Nov 12 20:43:09.738836 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Nov 12 20:43:09.738887 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Nov 12 20:43:09.738938 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 12 
20:43:09.738987 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Nov 12 20:43:09.739037 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 12 20:43:09.739086 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 12 20:43:09.739135 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 12 20:43:09.739187 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 12 20:43:09.739244 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 12 20:43:09.739293 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 12 20:43:09.739342 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 12 20:43:09.739392 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 12 20:43:09.739441 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 12 20:43:09.739490 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 12 20:43:09.739538 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 12 20:43:09.739592 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 12 20:43:09.739641 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 12 20:43:09.739689 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 12 20:43:09.739739 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 12 20:43:09.739787 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 12 20:43:09.739836 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 12 20:43:09.739888 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 12 20:43:09.739937 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 12 20:43:09.739987 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 12 20:43:09.740037 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 12 20:43:09.740086 kernel: pci 0000:00:15.6: bridge window [mem 
0xfbd00000-0xfbdfffff] Nov 12 20:43:09.740135 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 12 20:43:09.740187 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 12 20:43:09.740265 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 12 20:43:09.740315 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 12 20:43:09.740370 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Nov 12 20:43:09.740422 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Nov 12 20:43:09.740472 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Nov 12 20:43:09.740521 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Nov 12 20:43:09.740570 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Nov 12 20:43:09.740624 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Nov 12 20:43:09.740675 kernel: pci 0000:0b:00.0: supports D1 D2 Nov 12 20:43:09.740725 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 12 20:43:09.740776 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Nov 12 20:43:09.740826 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 12 20:43:09.740879 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 12 20:43:09.740929 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 12 20:43:09.740979 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 12 20:43:09.741030 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 12 20:43:09.741080 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 12 20:43:09.741130 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 12 20:43:09.741180 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 12 20:43:09.741243 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 12 20:43:09.741294 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 12 20:43:09.741344 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 12 20:43:09.741400 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 12 20:43:09.741450 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 12 20:43:09.741499 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 12 20:43:09.741549 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 12 20:43:09.741599 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 12 20:43:09.741647 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 12 20:43:09.741697 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 12 20:43:09.741745 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 12 20:43:09.741796 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 12 20:43:09.741846 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 12 20:43:09.741895 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 12 20:43:09.741944 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 12 20:43:09.741994 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 12 20:43:09.742044 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 12 20:43:09.742093 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 12 20:43:09.742143 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 12 20:43:09.742194 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 12 20:43:09.742265 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 12 20:43:09.742315 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 12 20:43:09.742366 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 12 20:43:09.742415 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 12 20:43:09.742464 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 12 20:43:09.742513 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 12 20:43:09.742563 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 12 20:43:09.742615 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 12 20:43:09.742664 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 12 20:43:09.742712 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 12 20:43:09.742762 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 12 20:43:09.742811 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 12 20:43:09.742860 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 12 20:43:09.742910 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 12 20:43:09.742962 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 12 20:43:09.743011 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 12 20:43:09.743060 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 12 20:43:09.743109 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 12 20:43:09.743158 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 12 20:43:09.743245 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 12 20:43:09.743299 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 12 20:43:09.743347 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 12 20:43:09.743399 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 12 20:43:09.743448 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 12 20:43:09.743497 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 12 20:43:09.743546 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 12 20:43:09.743595 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 12 20:43:09.743652 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 12 20:43:09.743703 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 12 20:43:09.743999 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 12 20:43:09.744054 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 12 20:43:09.744103 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 12 20:43:09.744152 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 12 20:43:09.744267 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 12 20:43:09.744317 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 12 20:43:09.744364 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 12 20:43:09.744416 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 12 20:43:09.744463 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 12 20:43:09.744512 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 12 20:43:09.744560 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 12 
20:43:09.744607 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 12 20:43:09.744654 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 12 20:43:09.744703 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 12 20:43:09.744751 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 12 20:43:09.744798 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 12 20:43:09.744854 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 12 20:43:09.744904 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 12 20:43:09.744951 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 12 20:43:09.745000 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 12 20:43:09.745047 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 12 20:43:09.745093 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 12 20:43:09.745101 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Nov 12 20:43:09.745107 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Nov 12 20:43:09.745113 kernel: ACPI: PCI: Interrupt link LNKB disabled Nov 12 20:43:09.745121 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 12 20:43:09.745126 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Nov 12 20:43:09.745132 kernel: iommu: Default domain type: Translated Nov 12 20:43:09.745137 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 12 20:43:09.745143 kernel: PCI: Using ACPI for IRQ routing Nov 12 20:43:09.745149 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 12 20:43:09.745154 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Nov 12 20:43:09.745160 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Nov 12 20:43:09.745278 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Nov 12 20:43:09.745330 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Nov 12 20:43:09.745378 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 12 20:43:09.745386 kernel: vgaarb: loaded Nov 12 20:43:09.745392 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Nov 12 20:43:09.745398 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Nov 12 20:43:09.745403 kernel: clocksource: Switched to clocksource tsc-early Nov 12 20:43:09.745409 kernel: VFS: Disk quotas dquot_6.6.0 Nov 12 20:43:09.745415 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 12 20:43:09.745421 kernel: pnp: PnP ACPI init Nov 12 20:43:09.745475 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Nov 12 20:43:09.745521 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Nov 12 20:43:09.745563 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Nov 12 20:43:09.745610 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Nov 12 20:43:09.745655 kernel: pnp 00:06: [dma 2] Nov 12 20:43:09.745702 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Nov 12 20:43:09.745749 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Nov 12 20:43:09.745803 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Nov 12 20:43:09.745812 kernel: pnp: PnP ACPI: found 8 devices Nov 12 20:43:09.745818 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 12 20:43:09.745824 kernel: NET: Registered PF_INET protocol family Nov 12 20:43:09.745830 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 12 20:43:09.745836 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 12 20:43:09.745841 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 12 20:43:09.745847 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 12 20:43:09.745855 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 12 20:43:09.745864 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 12 20:43:09.745870 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 12 20:43:09.745876 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 12 20:43:09.749926 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 12 20:43:09.749938 kernel: NET: Registered PF_XDP protocol family Nov 12 20:43:09.750000 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Nov 12 20:43:09.750054 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 12 20:43:09.750107 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 12 20:43:09.750158 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 12 20:43:09.750215 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 12 20:43:09.750264 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Nov 12 20:43:09.750313 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Nov 12 20:43:09.750362 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Nov 12 20:43:09.750467 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Nov 12 20:43:09.750515 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Nov 12 20:43:09.750563 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Nov 12 20:43:09.750612 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Nov 12 20:43:09.750660 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Nov 12 
20:43:09.750711 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Nov 12 20:43:09.750759 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Nov 12 20:43:09.750807 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Nov 12 20:43:09.750856 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Nov 12 20:43:09.750905 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Nov 12 20:43:09.750969 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Nov 12 20:43:09.751285 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Nov 12 20:43:09.751360 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Nov 12 20:43:09.751616 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Nov 12 20:43:09.751685 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Nov 12 20:43:09.751735 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Nov 12 20:43:09.751784 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Nov 12 20:43:09.751836 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.751884 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.751932 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.751980 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752028 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752076 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752124 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752171 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Nov 
12 20:43:09.752232 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752284 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752332 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752380 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752428 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752476 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752523 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752570 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752617 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752667 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752715 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752762 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752809 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752856 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752904 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752951 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752998 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.753048 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.753095 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.753143 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.753191 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.753592 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.753646 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.753695 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.753743 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.753794 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.753859 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.753907 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.753955 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.754003 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.754051 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.754099 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.754147 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.754198 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.754308 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.754357 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.754408 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.754492 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.754539 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.754587 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.754633 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.754681 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.754732 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.754780 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.754827 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Nov 12 20:43:09.754874 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.754921 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.754968 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.757238 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.757299 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.757351 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.757423 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.757494 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.757568 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.759645 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.759701 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.759752 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.759801 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.759850 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.759898 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.759947 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.759999 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.760047 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.760095 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.760143 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.760191 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.760251 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.760301 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.760349 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.760397 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.760490 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.760537 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.760586 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.760635 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.760683 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.760731 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.760780 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 12 20:43:09.760829 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Nov 12 20:43:09.760877 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 12 20:43:09.760925 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 12 20:43:09.760975 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 12 20:43:09.761027 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Nov 12 20:43:09.761077 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 12 20:43:09.761126 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 12 20:43:09.761174 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 12 20:43:09.761229 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Nov 12 20:43:09.761279 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 12 20:43:09.761327 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 12 20:43:09.761378 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 12 20:43:09.761427 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 12 
20:43:09.761476 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 12 20:43:09.761525 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 12 20:43:09.761573 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 12 20:43:09.761620 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 12 20:43:09.761669 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 12 20:43:09.761717 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 12 20:43:09.761764 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 12 20:43:09.761814 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 12 20:43:09.761863 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 12 20:43:09.761911 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 12 20:43:09.761962 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 12 20:43:09.762009 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 12 20:43:09.762056 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 12 20:43:09.762106 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 12 20:43:09.762155 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 12 20:43:09.762209 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 12 20:43:09.762263 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 12 20:43:09.762312 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 12 20:43:09.762360 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 12 20:43:09.762416 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Nov 12 20:43:09.762466 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 12 20:43:09.762514 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 12 20:43:09.762566 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Nov 12 20:43:09.762614 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Nov 12 20:43:09.762663 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 12 20:43:09.762711 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 12 20:43:09.762759 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 12 20:43:09.762813 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 12 20:43:09.762865 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 12 20:43:09.762914 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 12 20:43:09.762963 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 12 20:43:09.763011 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 12 20:43:09.763062 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 12 20:43:09.763111 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 12 20:43:09.763159 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 12 20:43:09.763219 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 12 20:43:09.763272 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 12 20:43:09.763320 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 12 20:43:09.763369 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 12 20:43:09.763417 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 12 20:43:09.763465 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 12 20:43:09.763516 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 12 20:43:09.763564 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 12 20:43:09.763612 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 12 20:43:09.763661 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 12 20:43:09.763709 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Nov 12 20:43:09.763758 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 12 20:43:09.763807 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 12 20:43:09.763856 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 12 20:43:09.763904 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 12 20:43:09.763952 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 12 20:43:09.764004 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 12 20:43:09.764052 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 12 20:43:09.764100 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 12 20:43:09.764148 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 12 20:43:09.764197 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 12 20:43:09.764265 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 12 20:43:09.764314 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 12 20:43:09.764363 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 12 20:43:09.764411 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 12 20:43:09.764463 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 12 20:43:09.764512 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 12 20:43:09.764561 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 12 20:43:09.764610 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 12 20:43:09.764659 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 12 20:43:09.764708 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 12 20:43:09.764756 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 12 20:43:09.764804 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 12 
20:43:09.764852 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 12 20:43:09.764901 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 12 20:43:09.764952 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 12 20:43:09.765000 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 12 20:43:09.765048 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 12 20:43:09.765096 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 12 20:43:09.765146 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 12 20:43:09.765194 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 12 20:43:09.765256 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 12 20:43:09.765305 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 12 20:43:09.765355 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 12 20:43:09.765411 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 12 20:43:09.765459 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 12 20:43:09.765508 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 12 20:43:09.765570 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 12 20:43:09.765627 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 12 20:43:09.765676 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 12 20:43:09.765726 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 12 20:43:09.765774 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 12 20:43:09.765823 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 12 20:43:09.765871 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 12 20:43:09.765922 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 12 20:43:09.765970 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Nov 12 20:43:09.766019 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 12 20:43:09.766068 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 12 20:43:09.766116 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 12 20:43:09.766165 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 12 20:43:09.766298 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 12 20:43:09.766349 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 12 20:43:09.766397 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 12 20:43:09.766448 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 12 20:43:09.766496 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 12 20:43:09.766543 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Nov 12 20:43:09.766587 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 12 20:43:09.766629 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 12 20:43:09.766672 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Nov 12 20:43:09.766715 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Nov 12 20:43:09.766761 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Nov 12 20:43:09.766808 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Nov 12 20:43:09.766852 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 12 20:43:09.766896 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Nov 12 20:43:09.766940 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 12 20:43:09.766983 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 12 20:43:09.767027 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Nov 12 20:43:09.767070 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Nov 12 20:43:09.767119 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Nov 12 20:43:09.767166 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Nov 12 20:43:09.767220 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Nov 12 20:43:09.767271 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Nov 12 20:43:09.767316 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Nov 12 20:43:09.767361 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Nov 12 20:43:09.767411 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Nov 12 20:43:09.767459 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Nov 12 20:43:09.767503 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Nov 12 20:43:09.767550 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Nov 12 20:43:09.767595 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Nov 12 20:43:09.767643 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Nov 12 20:43:09.767688 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 12 20:43:09.767735 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Nov 12 20:43:09.767783 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Nov 12 20:43:09.767831 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Nov 12 20:43:09.767876 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Nov 12 20:43:09.767926 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Nov 12 20:43:09.767978 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Nov 12 20:43:09.768030 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Nov 12 20:43:09.768076 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Nov 12 20:43:09.768120 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Nov 12 20:43:09.768177 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Nov 12 20:43:09.768320 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Nov 12 20:43:09.768394 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Nov 12 20:43:09.768450 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Nov 12 20:43:09.768517 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Nov 12 20:43:09.768565 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Nov 12 20:43:09.768618 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Nov 12 20:43:09.768663 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 12 20:43:09.768711 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Nov 12 20:43:09.768756 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 12 20:43:09.768807 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Nov 12 20:43:09.768852 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Nov 12 20:43:09.768900 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Nov 12 20:43:09.768946 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Nov 12 20:43:09.768993 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Nov 12 20:43:09.769038 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 12 20:43:09.769087 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Nov 12 20:43:09.769152 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Nov 12 20:43:09.769226 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 12 20:43:09.769275 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Nov 12 20:43:09.769320 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Nov 12 20:43:09.769363 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Nov 12 20:43:09.769429 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Nov 12 20:43:09.769477 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Nov 12 20:43:09.769523 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Nov 12 20:43:09.769575 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Nov 12 20:43:09.769621 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 12 20:43:09.769670 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Nov 12 20:43:09.769716 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 12 20:43:09.769780 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Nov 12 20:43:09.769828 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Nov 12 20:43:09.769877 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Nov 12 20:43:09.769922 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Nov 12 20:43:09.769972 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Nov 12 20:43:09.770017 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 12 20:43:09.770068 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Nov 12 20:43:09.770113 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Nov 12 20:43:09.770157 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Nov 12 20:43:09.770232 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Nov 12 20:43:09.770279 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Nov 12 20:43:09.770340 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Nov 12 20:43:09.770388 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Nov 12 20:43:09.770463 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Nov 12 20:43:09.770516 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Nov 12 20:43:09.770562 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Nov 12 20:43:09.770612 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Nov 12 20:43:09.770658 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Nov 12 20:43:09.770707 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Nov 12 20:43:09.770757 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Nov 12 20:43:09.770807 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Nov 12 20:43:09.770853 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Nov 12 20:43:09.770902 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Nov 12 20:43:09.770948 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 12 20:43:09.771003 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 12 20:43:09.771014 kernel: PCI: CLS 32 bytes, default 64 Nov 12 20:43:09.771021 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 12 20:43:09.771028 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Nov 12 20:43:09.771035 kernel: clocksource: Switched to clocksource tsc Nov 12 20:43:09.771041 kernel: Initialise system trusted keyrings Nov 12 20:43:09.771047 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 12 20:43:09.771054 kernel: Key type asymmetric registered Nov 12 20:43:09.771060 kernel: Asymmetric key parser 'x509' registered Nov 12 20:43:09.771066 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 12 20:43:09.771073 kernel: io scheduler mq-deadline registered Nov 12 20:43:09.771080 kernel: io scheduler kyber registered Nov 12 20:43:09.771087 kernel: io scheduler bfq registered Nov 12 20:43:09.771138 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Nov 12 20:43:09.771191 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.771262 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Nov 12 20:43:09.771316 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.771367 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Nov 12 20:43:09.771423 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.771477 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Nov 12 20:43:09.771528 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.771579 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Nov 12 20:43:09.771630 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.771682 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Nov 12 20:43:09.771736 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.771787 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Nov 12 20:43:09.771837 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.771888 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Nov 12 20:43:09.771938 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.771990 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Nov 12 20:43:09.772041 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.772093 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Nov 12 20:43:09.772143 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.772194 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Nov 12 20:43:09.772285 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.772337 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Nov 12 20:43:09.772390 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.772441 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Nov 12 20:43:09.772492 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.772541 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Nov 12 20:43:09.772592 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.772643 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Nov 12 20:43:09.772695 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.772746 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Nov 12 20:43:09.772797 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.772848 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Nov 12 20:43:09.772898 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.772950 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Nov 12 20:43:09.773000 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.773051 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Nov 12 20:43:09.773100 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.773151 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Nov 12 20:43:09.773262 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.773322 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Nov 12 20:43:09.773375 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.773426 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Nov 12 20:43:09.773476 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.773526 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Nov 12 20:43:09.773576 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.773629 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Nov 12 20:43:09.773679 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.773731 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Nov 12 20:43:09.773781 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.773832 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Nov 12 20:43:09.773883 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.773934 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Nov 12 20:43:09.773987 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.774037 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Nov 12 20:43:09.774087 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.774139 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Nov 12 20:43:09.774189 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.774249 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Nov 12 20:43:09.774299 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.774349 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Nov 12 20:43:09.774403 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.774454 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Nov 12 20:43:09.774506 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.774515 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Nov 12 20:43:09.774522 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 12 20:43:09.774529 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 12 20:43:09.774535 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Nov 12 20:43:09.774542 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 12 20:43:09.774548 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 12 20:43:09.774599 kernel: rtc_cmos 00:01: registered as rtc0 Nov 12 20:43:09.774648 kernel: rtc_cmos 00:01: setting system clock to 2024-11-12T20:43:09 UTC (1731444189) Nov 12 20:43:09.774694 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Nov 12 20:43:09.774703 kernel: intel_pstate: CPU model not supported Nov 12 20:43:09.774709 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 12 20:43:09.774716 kernel: NET: Registered PF_INET6 protocol family Nov 12 20:43:09.774722 kernel: Segment Routing with IPv6 Nov 12 20:43:09.774729 kernel: In-situ OAM (IOAM) with IPv6 Nov 12 20:43:09.774735 kernel: NET: Registered PF_PACKET protocol family Nov 12 20:43:09.774743 kernel: Key type dns_resolver registered Nov 12 20:43:09.774749 kernel: IPI shorthand broadcast: enabled Nov 12 20:43:09.774756 kernel: sched_clock: Marking stable (888014553, 227205661)->(1175789926, -60569712) Nov 12 20:43:09.774762 kernel: registered taskstats version 1 Nov 12 20:43:09.774768 kernel: Loading compiled-in X.509 certificates Nov 12 20:43:09.774775 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a' Nov 12 20:43:09.774781 kernel: Key type .fscrypt registered Nov 12 20:43:09.774788 kernel: Key type fscrypt-provisioning registered Nov 12 20:43:09.774794 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 12 20:43:09.774801 kernel: ima: Allocated hash algorithm: sha1 Nov 12 20:43:09.774808 kernel: ima: No architecture policies found Nov 12 20:43:09.774814 kernel: clk: Disabling unused clocks Nov 12 20:43:09.774820 kernel: Freeing unused kernel image (initmem) memory: 42828K Nov 12 20:43:09.774826 kernel: Write protecting the kernel read-only data: 36864k Nov 12 20:43:09.774833 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Nov 12 20:43:09.774839 kernel: Run /init as init process Nov 12 20:43:09.774845 kernel: with arguments: Nov 12 20:43:09.774852 kernel: /init Nov 12 20:43:09.774859 kernel: with environment: Nov 12 20:43:09.774865 kernel: HOME=/ Nov 12 20:43:09.774871 kernel: TERM=linux Nov 12 20:43:09.774877 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 12 20:43:09.774885 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 20:43:09.774893 systemd[1]: Detected virtualization vmware. Nov 12 20:43:09.774900 systemd[1]: Detected architecture x86-64. Nov 12 20:43:09.774906 systemd[1]: Running in initrd. Nov 12 20:43:09.774914 systemd[1]: No hostname configured, using default hostname. Nov 12 20:43:09.774920 systemd[1]: Hostname set to . Nov 12 20:43:09.774926 systemd[1]: Initializing machine ID from random generator. Nov 12 20:43:09.774933 systemd[1]: Queued start job for default target initrd.target. Nov 12 20:43:09.774939 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:43:09.774946 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Nov 12 20:43:09.774953 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 12 20:43:09.774960 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 20:43:09.774968 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 12 20:43:09.774975 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 12 20:43:09.774982 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 12 20:43:09.774988 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 12 20:43:09.774995 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:43:09.775002 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:43:09.775009 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:43:09.775016 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:43:09.775023 systemd[1]: Reached target swap.target - Swaps. Nov 12 20:43:09.775029 systemd[1]: Reached target timers.target - Timer Units. Nov 12 20:43:09.775037 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:43:09.775043 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:43:09.775050 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 20:43:09.775056 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 20:43:09.775063 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:43:09.775070 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 20:43:09.775077 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Nov 12 20:43:09.775083 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 20:43:09.775090 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 12 20:43:09.775096 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 20:43:09.775103 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 12 20:43:09.775109 systemd[1]: Starting systemd-fsck-usr.service... Nov 12 20:43:09.775116 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 20:43:09.775122 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 20:43:09.775131 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:43:09.775148 systemd-journald[216]: Collecting audit messages is disabled. Nov 12 20:43:09.775165 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 12 20:43:09.775172 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:43:09.775180 systemd[1]: Finished systemd-fsck-usr.service. Nov 12 20:43:09.775187 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 12 20:43:09.775194 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 20:43:09.775200 kernel: Bridge firewalling registered Nov 12 20:43:09.777256 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 20:43:09.777265 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:43:09.777272 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:43:09.777279 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Nov 12 20:43:09.777291 systemd-journald[216]: Journal started Nov 12 20:43:09.777307 systemd-journald[216]: Runtime Journal (/run/log/journal/954c29c8f3dd49fb947330e1b1c0f7b1) is 4.8M, max 38.6M, 33.8M free. Nov 12 20:43:09.777337 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:43:09.732580 systemd-modules-load[217]: Inserted module 'overlay' Nov 12 20:43:09.759640 systemd-modules-load[217]: Inserted module 'br_netfilter' Nov 12 20:43:09.781228 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:43:09.784229 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 20:43:09.786386 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:43:09.787502 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:43:09.791290 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 12 20:43:09.793129 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 20:43:09.793532 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:43:09.797832 dracut-cmdline[244]: dracut-dracut-053 Nov 12 20:43:09.799451 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:43:09.799933 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:43:09.803285 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 12 20:43:09.819609 systemd-resolved[262]: Positive Trust Anchors: Nov 12 20:43:09.819618 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 20:43:09.819639 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 20:43:09.822112 systemd-resolved[262]: Defaulting to hostname 'linux'. Nov 12 20:43:09.822682 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 20:43:09.822818 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:43:09.843221 kernel: SCSI subsystem initialized Nov 12 20:43:09.849214 kernel: Loading iSCSI transport class v2.0-870. Nov 12 20:43:09.855214 kernel: iscsi: registered transport (tcp) Nov 12 20:43:09.867406 kernel: iscsi: registered transport (qla4xxx) Nov 12 20:43:09.867425 kernel: QLogic iSCSI HBA Driver Nov 12 20:43:09.886933 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 12 20:43:09.891291 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 12 20:43:09.905648 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 12 20:43:09.906236 kernel: device-mapper: uevent: version 1.0.3 Nov 12 20:43:09.906245 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 12 20:43:09.937218 kernel: raid6: avx2x4 gen() 53854 MB/s Nov 12 20:43:09.954248 kernel: raid6: avx2x2 gen() 54375 MB/s Nov 12 20:43:09.971415 kernel: raid6: avx2x1 gen() 45933 MB/s Nov 12 20:43:09.971435 kernel: raid6: using algorithm avx2x2 gen() 54375 MB/s Nov 12 20:43:09.989465 kernel: raid6: .... xor() 31916 MB/s, rmw enabled Nov 12 20:43:09.989486 kernel: raid6: using avx2x2 recovery algorithm Nov 12 20:43:10.002215 kernel: xor: automatically using best checksumming function avx Nov 12 20:43:10.101351 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 12 20:43:10.106601 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:43:10.111356 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:43:10.119020 systemd-udevd[433]: Using default interface naming scheme 'v255'. Nov 12 20:43:10.121556 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:43:10.127415 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 12 20:43:10.134425 dracut-pre-trigger[441]: rd.md=0: removing MD RAID activation Nov 12 20:43:10.150674 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 20:43:10.154322 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 20:43:10.222907 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:43:10.225642 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 12 20:43:10.240225 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 12 20:43:10.241008 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Nov 12 20:43:10.241368 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:43:10.241643 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 20:43:10.245288 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 12 20:43:10.252716 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:43:10.291223 kernel: VMware PVSCSI driver - version 1.0.7.0-k Nov 12 20:43:10.292234 kernel: vmw_pvscsi: using 64bit dma Nov 12 20:43:10.296217 kernel: vmw_pvscsi: max_id: 16 Nov 12 20:43:10.296234 kernel: vmw_pvscsi: setting ring_pages to 8 Nov 12 20:43:10.299021 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Nov 12 20:43:10.299038 kernel: vmw_pvscsi: enabling reqCallThreshold Nov 12 20:43:10.299046 kernel: vmw_pvscsi: driver-based request coalescing enabled Nov 12 20:43:10.299054 kernel: vmw_pvscsi: using MSI-X Nov 12 20:43:10.300681 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Nov 12 20:43:10.310198 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Nov 12 20:43:10.310288 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Nov 12 20:43:10.310381 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Nov 12 20:43:10.310449 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Nov 12 20:43:10.320213 kernel: cryptd: max_cpu_qlen set to 1000 Nov 12 20:43:10.327247 kernel: libata version 3.00 loaded. 
Nov 12 20:43:10.328215 kernel: ata_piix 0000:00:07.1: version 2.13 Nov 12 20:43:10.336825 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Nov 12 20:43:10.336908 kernel: scsi host1: ata_piix Nov 12 20:43:10.336980 kernel: scsi host2: ata_piix Nov 12 20:43:10.337039 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Nov 12 20:43:10.337048 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Nov 12 20:43:10.338020 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:43:10.338093 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:43:10.340290 kernel: AVX2 version of gcm_enc/dec engaged. Nov 12 20:43:10.340305 kernel: AES CTR mode by8 optimization enabled Nov 12 20:43:10.339534 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 20:43:10.339622 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:43:10.339703 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:43:10.340348 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:43:10.346440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:43:10.359487 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:43:10.364304 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 20:43:10.371385 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 12 20:43:10.507231 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Nov 12 20:43:10.513291 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Nov 12 20:43:10.529227 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Nov 12 20:43:10.580560 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 12 20:43:10.580661 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Nov 12 20:43:10.580745 kernel: sd 0:0:0:0: [sda] Cache data unavailable Nov 12 20:43:10.580825 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Nov 12 20:43:10.580911 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Nov 12 20:43:10.581003 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 12 20:43:10.581018 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 12 20:43:10.581098 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 12 20:43:10.581109 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 12 20:43:10.766330 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Nov 12 20:43:10.772318 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (484) Nov 12 20:43:10.772334 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (498) Nov 12 20:43:10.770177 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Nov 12 20:43:10.775300 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Nov 12 20:43:10.775547 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Nov 12 20:43:10.778241 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Nov 12 20:43:10.787306 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Nov 12 20:43:10.812216 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 12 20:43:10.816555 kernel: GPT:disk_guids don't match. Nov 12 20:43:10.816588 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 12 20:43:10.816597 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 12 20:43:10.822216 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 12 20:43:11.821230 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 12 20:43:11.821764 disk-uuid[597]: The operation has completed successfully. Nov 12 20:43:11.856530 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 12 20:43:11.856866 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 12 20:43:11.861302 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 12 20:43:11.863765 sh[614]: Success Nov 12 20:43:11.872220 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 12 20:43:11.905264 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 12 20:43:11.910163 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 12 20:43:11.910409 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 12 20:43:11.931241 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77 Nov 12 20:43:11.931285 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:43:11.931294 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 12 20:43:11.931301 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 12 20:43:11.931427 kernel: BTRFS info (device dm-0): using free space tree Nov 12 20:43:11.942220 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 12 20:43:11.944095 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 12 20:43:11.953360 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... 
Nov 12 20:43:11.954808 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 12 20:43:12.025007 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:43:12.025058 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:43:12.025067 kernel: BTRFS info (device sda6): using free space tree Nov 12 20:43:12.098228 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 12 20:43:12.113424 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 12 20:43:12.115217 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:43:12.121527 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 12 20:43:12.125325 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 12 20:43:12.270066 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 12 20:43:12.274343 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 12 20:43:12.328345 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:43:12.332275 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 20:43:12.344748 systemd-networkd[808]: lo: Link UP Nov 12 20:43:12.344752 systemd-networkd[808]: lo: Gained carrier Nov 12 20:43:12.345584 systemd-networkd[808]: Enumeration completed Nov 12 20:43:12.345746 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 20:43:12.345829 systemd-networkd[808]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Nov 12 20:43:12.345903 systemd[1]: Reached target network.target - Network. 
Nov 12 20:43:12.349539 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Nov 12 20:43:12.349669 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Nov 12 20:43:12.349790 systemd-networkd[808]: ens192: Link UP Nov 12 20:43:12.349797 systemd-networkd[808]: ens192: Gained carrier Nov 12 20:43:12.369645 ignition[675]: Ignition 2.19.0 Nov 12 20:43:12.369657 ignition[675]: Stage: fetch-offline Nov 12 20:43:12.369694 ignition[675]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:43:12.369700 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 12 20:43:12.369764 ignition[675]: parsed url from cmdline: "" Nov 12 20:43:12.369766 ignition[675]: no config URL provided Nov 12 20:43:12.369769 ignition[675]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 20:43:12.369774 ignition[675]: no config at "/usr/lib/ignition/user.ign" Nov 12 20:43:12.370173 ignition[675]: config successfully fetched Nov 12 20:43:12.370190 ignition[675]: parsing config with SHA512: 86a4cefd826eff6e91e1177f5a5b2fe2185dca41fc815ef5c59dd11affc74e62a92299dcb637c5281b35cb56996743e8296b15bb3e2dffdc058dded915daa7c3 Nov 12 20:43:12.372446 unknown[675]: fetched base config from "system" Nov 12 20:43:12.372454 unknown[675]: fetched user config from "vmware" Nov 12 20:43:12.372684 ignition[675]: fetch-offline: fetch-offline passed Nov 12 20:43:12.372720 ignition[675]: Ignition finished successfully Nov 12 20:43:12.373524 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:43:12.373898 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 12 20:43:12.377314 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 12 20:43:12.385059 ignition[813]: Ignition 2.19.0 Nov 12 20:43:12.385066 ignition[813]: Stage: kargs Nov 12 20:43:12.385158 ignition[813]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:43:12.385164 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 12 20:43:12.385709 ignition[813]: kargs: kargs passed Nov 12 20:43:12.385738 ignition[813]: Ignition finished successfully Nov 12 20:43:12.386951 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 12 20:43:12.390351 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 12 20:43:12.397546 ignition[819]: Ignition 2.19.0 Nov 12 20:43:12.397553 ignition[819]: Stage: disks Nov 12 20:43:12.397658 ignition[819]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:43:12.397664 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 12 20:43:12.398179 ignition[819]: disks: disks passed Nov 12 20:43:12.398214 ignition[819]: Ignition finished successfully Nov 12 20:43:12.399033 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 12 20:43:12.399515 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 12 20:43:12.399795 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 20:43:12.400052 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 20:43:12.400298 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 20:43:12.400505 systemd[1]: Reached target basic.target - Basic System. Nov 12 20:43:12.405338 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 12 20:43:12.514269 systemd-fsck[827]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Nov 12 20:43:12.520132 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 12 20:43:12.523276 systemd[1]: Mounting sysroot.mount - /sysroot... 
Nov 12 20:43:12.664222 kernel: EXT4-fs (sda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none. Nov 12 20:43:12.664447 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 12 20:43:12.664868 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 12 20:43:12.674301 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 20:43:12.676629 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 12 20:43:12.677069 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 12 20:43:12.677116 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 12 20:43:12.677141 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:43:12.681694 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 12 20:43:12.682578 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 12 20:43:12.688171 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (835) Nov 12 20:43:12.688865 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:43:12.688889 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:43:12.689326 kernel: BTRFS info (device sda6): using free space tree Nov 12 20:43:12.695226 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 12 20:43:12.696413 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 12 20:43:12.727031 initrd-setup-root[859]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 20:43:12.730755 initrd-setup-root[866]: cut: /sysroot/etc/group: No such file or directory Nov 12 20:43:12.734484 initrd-setup-root[873]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 20:43:12.738394 initrd-setup-root[880]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 20:43:12.809992 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 20:43:12.814326 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 12 20:43:12.816766 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 12 20:43:12.822257 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:43:12.834695 ignition[947]: INFO : Ignition 2.19.0 Nov 12 20:43:12.835027 ignition[947]: INFO : Stage: mount Nov 12 20:43:12.835027 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:43:12.835027 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 12 20:43:12.835454 ignition[947]: INFO : mount: mount passed Nov 12 20:43:12.835932 ignition[947]: INFO : Ignition finished successfully Nov 12 20:43:12.836199 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 20:43:12.840416 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 20:43:12.841063 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 12 20:43:12.926943 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 12 20:43:12.932332 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 12 20:43:12.967227 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (959) Nov 12 20:43:12.970864 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:43:12.970905 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:43:12.970922 kernel: BTRFS info (device sda6): using free space tree Nov 12 20:43:12.980219 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 12 20:43:12.980900 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 20:43:12.995291 ignition[976]: INFO : Ignition 2.19.0 Nov 12 20:43:12.995291 ignition[976]: INFO : Stage: files Nov 12 20:43:12.995661 ignition[976]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:43:12.995661 ignition[976]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 12 20:43:12.995953 ignition[976]: DEBUG : files: compiled without relabeling support, skipping Nov 12 20:43:12.997909 ignition[976]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 20:43:12.997909 ignition[976]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 20:43:13.003577 ignition[976]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 20:43:13.003749 ignition[976]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 20:43:13.003878 ignition[976]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 20:43:13.003814 unknown[976]: wrote ssh authorized keys file for user: core Nov 12 20:43:13.005551 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:43:13.005804 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 20:43:13.039766 
ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 12 20:43:13.108257 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:43:13.109084 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 12 20:43:13.109084 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 20:43:13.109084 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:43:13.109084 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:43:13.109084 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:43:13.109084 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:43:13.109084 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:43:13.109084 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:43:13.110626 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:43:13.110626 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:43:13.110626 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:43:13.110626 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:43:13.110626 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:43:13.110626 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Nov 12 20:43:13.441420 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 12 20:43:13.668616 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:43:13.668913 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 12 20:43:13.668913 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 12 20:43:13.669639 ignition[976]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 12 20:43:13.669639 ignition[976]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:43:13.669639 ignition[976]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:43:13.669639 ignition[976]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 12 20:43:13.669639 ignition[976]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 12 20:43:13.669639 
ignition[976]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 20:43:13.669639 ignition[976]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 20:43:13.669639 ignition[976]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 12 20:43:13.669639 ignition[976]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 12 20:43:13.728622 ignition[976]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 20:43:13.731517 ignition[976]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 20:43:13.731517 ignition[976]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 12 20:43:13.731517 ignition[976]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 12 20:43:13.731517 ignition[976]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 20:43:13.732175 ignition[976]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:43:13.732175 ignition[976]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:43:13.732175 ignition[976]: INFO : files: files passed Nov 12 20:43:13.732175 ignition[976]: INFO : Ignition finished successfully Nov 12 20:43:13.733094 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 20:43:13.737362 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 20:43:13.738333 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Nov 12 20:43:13.750389 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 20:43:13.750457 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 20:43:13.755032 initrd-setup-root-after-ignition[1006]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:43:13.755032 initrd-setup-root-after-ignition[1006]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:43:13.756001 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:43:13.756950 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:43:13.757349 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 20:43:13.761419 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 20:43:13.775968 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 20:43:13.776038 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 20:43:13.776383 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 20:43:13.776496 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 20:43:13.776696 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 20:43:13.777195 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 20:43:13.788078 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:43:13.792340 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 20:43:13.798679 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:43:13.798908 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Nov 12 20:43:13.799158 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 20:43:13.799354 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 20:43:13.799435 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:43:13.799793 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 20:43:13.799954 systemd[1]: Stopped target basic.target - Basic System. Nov 12 20:43:13.800134 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 20:43:13.800347 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:43:13.800546 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 20:43:13.800959 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 20:43:13.801139 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 20:43:13.801356 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 20:43:13.801561 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 20:43:13.801749 systemd[1]: Stopped target swap.target - Swaps. Nov 12 20:43:13.801924 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 20:43:13.801995 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:43:13.802353 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:43:13.802518 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:43:13.802703 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 20:43:13.802753 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:43:13.802918 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 20:43:13.802979 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Nov 12 20:43:13.803275 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 20:43:13.803340 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:43:13.803550 systemd[1]: Stopped target paths.target - Path Units. Nov 12 20:43:13.803680 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 20:43:13.804277 systemd-networkd[808]: ens192: Gained IPv6LL Nov 12 20:43:13.808262 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:43:13.808647 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 20:43:13.808817 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 20:43:13.808996 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 20:43:13.809059 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:43:13.809294 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 20:43:13.809343 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:43:13.809500 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 20:43:13.809569 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:43:13.809820 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 20:43:13.809879 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 20:43:13.819386 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 20:43:13.822363 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 20:43:13.822673 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 20:43:13.822866 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:43:13.823193 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 20:43:13.823274 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 12 20:43:13.825919 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 20:43:13.826125 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 20:43:13.828065 ignition[1030]: INFO : Ignition 2.19.0 Nov 12 20:43:13.828065 ignition[1030]: INFO : Stage: umount Nov 12 20:43:13.828357 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:43:13.828357 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 12 20:43:13.828826 ignition[1030]: INFO : umount: umount passed Nov 12 20:43:13.828826 ignition[1030]: INFO : Ignition finished successfully Nov 12 20:43:13.833564 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 20:43:13.833867 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 20:43:13.834287 systemd[1]: Stopped target network.target - Network. Nov 12 20:43:13.834383 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 20:43:13.834426 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 20:43:13.834542 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 20:43:13.834566 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 20:43:13.834667 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 20:43:13.834687 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 20:43:13.834783 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 20:43:13.834804 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 20:43:13.834992 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 20:43:13.835133 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 20:43:13.837804 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 20:43:13.837869 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Nov 12 20:43:13.839068 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 20:43:13.839111 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:43:13.841911 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 20:43:13.845248 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 20:43:13.845478 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 20:43:13.845869 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 20:43:13.845889 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:43:13.850320 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 20:43:13.850580 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 20:43:13.850621 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:43:13.850769 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Nov 12 20:43:13.850800 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 12 20:43:13.850931 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 20:43:13.850953 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:43:13.851051 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 20:43:13.851072 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 20:43:13.851255 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:43:13.857478 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 20:43:13.857554 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 20:43:13.861669 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Nov 12 20:43:13.861755 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:43:13.862063 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 20:43:13.862088 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 20:43:13.862302 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 20:43:13.862320 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:43:13.862490 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 20:43:13.862513 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:43:13.862790 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 20:43:13.862811 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 20:43:13.863091 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:43:13.863112 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:43:13.869423 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 20:43:13.869533 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 20:43:13.869565 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:43:13.869692 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 12 20:43:13.869715 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:43:13.869827 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 20:43:13.869848 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:43:13.869958 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:43:13.869979 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 12 20:43:13.872583 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 20:43:13.872649 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 20:43:13.900574 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 20:43:13.900649 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 20:43:13.900914 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 20:43:13.901023 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 20:43:13.901048 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 20:43:13.905367 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 20:43:13.918533 systemd[1]: Switching root. Nov 12 20:43:13.949539 systemd-journald[216]: Journal stopped
Nov 12 20:43:09.729376 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Nov 12 20:43:09.729382 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Nov 12 20:43:09.729387 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Nov 12 20:43:09.729392 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Nov 12 20:43:09.729399 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Nov 12 20:43:09.729404 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Nov 12 20:43:09.729409 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Nov 12 20:43:09.729414 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Nov 12 20:43:09.729419 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Nov 12 20:43:09.729424 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Nov 12 20:43:09.729429 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Nov 12 20:43:09.729435 kernel: ACPI: Reserving APIC table memory at [mem
0x7feea5eb-0x7feead2c] Nov 12 20:43:09.729440 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Nov 12 20:43:09.729445 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Nov 12 20:43:09.729451 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Nov 12 20:43:09.729456 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Nov 12 20:43:09.729461 kernel: system APIC only can use physical flat Nov 12 20:43:09.729466 kernel: APIC: Switched APIC routing to: physical flat Nov 12 20:43:09.729471 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 12 20:43:09.729477 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Nov 12 20:43:09.729482 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Nov 12 20:43:09.729487 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Nov 12 20:43:09.729492 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Nov 12 20:43:09.729498 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Nov 12 20:43:09.729503 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Nov 12 20:43:09.729508 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Nov 12 20:43:09.729513 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Nov 12 20:43:09.729518 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Nov 12 20:43:09.729523 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Nov 12 20:43:09.729528 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Nov 12 20:43:09.729533 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Nov 12 20:43:09.729538 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Nov 12 20:43:09.729543 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Nov 12 20:43:09.729549 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Nov 12 20:43:09.729555 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Nov 12 20:43:09.729560 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Nov 12 20:43:09.729565 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Nov 12 20:43:09.729570 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Nov 12 20:43:09.729574 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Nov 12 20:43:09.729580 kernel: SRAT: PXM 0 -> APIC 0x2a -> 
Node 0 Nov 12 20:43:09.729585 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Nov 12 20:43:09.729590 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Nov 12 20:43:09.729594 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Nov 12 20:43:09.729601 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Nov 12 20:43:09.729606 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Nov 12 20:43:09.729611 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Nov 12 20:43:09.729615 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Nov 12 20:43:09.729621 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Nov 12 20:43:09.729625 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Nov 12 20:43:09.729630 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Nov 12 20:43:09.729635 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Nov 12 20:43:09.729640 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Nov 12 20:43:09.729646 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Nov 12 20:43:09.729651 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Nov 12 20:43:09.729657 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Nov 12 20:43:09.729662 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Nov 12 20:43:09.729667 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Nov 12 20:43:09.729672 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Nov 12 20:43:09.729677 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Nov 12 20:43:09.729682 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Nov 12 20:43:09.729687 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Nov 12 20:43:09.729692 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Nov 12 20:43:09.729697 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Nov 12 20:43:09.729702 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Nov 12 20:43:09.729708 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Nov 12 20:43:09.729714 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Nov 12 20:43:09.729718 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Nov 12 20:43:09.729724 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Nov 12 20:43:09.729729 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Nov 12 20:43:09.729734 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Nov 12 
20:43:09.729739 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Nov 12 20:43:09.729744 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Nov 12 20:43:09.729749 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Nov 12 20:43:09.729754 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Nov 12 20:43:09.729760 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Nov 12 20:43:09.729766 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Nov 12 20:43:09.729771 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Nov 12 20:43:09.729780 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Nov 12 20:43:09.729785 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Nov 12 20:43:09.729791 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Nov 12 20:43:09.729796 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Nov 12 20:43:09.729802 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Nov 12 20:43:09.729808 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Nov 12 20:43:09.729813 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Nov 12 20:43:09.729819 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Nov 12 20:43:09.729824 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Nov 12 20:43:09.729829 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Nov 12 20:43:09.729835 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Nov 12 20:43:09.729840 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Nov 12 20:43:09.729846 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Nov 12 20:43:09.729851 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Nov 12 20:43:09.729856 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Nov 12 20:43:09.729863 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Nov 12 20:43:09.729868 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Nov 12 20:43:09.729873 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Nov 12 20:43:09.729879 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Nov 12 20:43:09.729884 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Nov 12 20:43:09.729889 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Nov 12 20:43:09.729895 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Nov 12 20:43:09.729901 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Nov 12 20:43:09.729906 
kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Nov 12 20:43:09.729911 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Nov 12 20:43:09.729917 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Nov 12 20:43:09.729923 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Nov 12 20:43:09.729928 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Nov 12 20:43:09.729934 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Nov 12 20:43:09.729939 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Nov 12 20:43:09.729944 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Nov 12 20:43:09.729950 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Nov 12 20:43:09.729955 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Nov 12 20:43:09.729960 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Nov 12 20:43:09.729966 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Nov 12 20:43:09.729971 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Nov 12 20:43:09.729977 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Nov 12 20:43:09.729983 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Nov 12 20:43:09.729988 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Nov 12 20:43:09.729994 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Nov 12 20:43:09.729999 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Nov 12 20:43:09.730004 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Nov 12 20:43:09.730010 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Nov 12 20:43:09.730015 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Nov 12 20:43:09.730020 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Nov 12 20:43:09.730026 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Nov 12 20:43:09.730032 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Nov 12 20:43:09.730038 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Nov 12 20:43:09.730043 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Nov 12 20:43:09.730048 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Nov 12 20:43:09.730054 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Nov 12 20:43:09.730059 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Nov 12 20:43:09.730064 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Nov 12 20:43:09.730070 kernel: SRAT: PXM 0 
-> APIC 0xe0 -> Node 0 Nov 12 20:43:09.730075 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Nov 12 20:43:09.730080 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Nov 12 20:43:09.730087 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Nov 12 20:43:09.730092 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Nov 12 20:43:09.730097 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Nov 12 20:43:09.730103 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Nov 12 20:43:09.730108 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Nov 12 20:43:09.730113 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Nov 12 20:43:09.730118 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Nov 12 20:43:09.730124 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Nov 12 20:43:09.730129 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Nov 12 20:43:09.730135 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Nov 12 20:43:09.730141 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Nov 12 20:43:09.730146 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Nov 12 20:43:09.730152 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Nov 12 20:43:09.730157 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 12 20:43:09.730163 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Nov 12 20:43:09.730168 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Nov 12 20:43:09.730174 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Nov 12 20:43:09.730179 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Nov 12 20:43:09.730185 kernel: Zone ranges: Nov 12 20:43:09.730191 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 12 20:43:09.730198 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Nov 12 20:43:09.730518 kernel: Normal empty Nov 12 20:43:09.730525 kernel: Movable zone start for each node Nov 12 20:43:09.730531 kernel: Early memory node ranges Nov 12 20:43:09.730536 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Nov 12 20:43:09.730542 kernel: node 0: [mem 
0x0000000000100000-0x000000007fedffff] Nov 12 20:43:09.730548 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Nov 12 20:43:09.730553 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Nov 12 20:43:09.730558 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 12 20:43:09.730566 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Nov 12 20:43:09.730572 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Nov 12 20:43:09.730577 kernel: ACPI: PM-Timer IO Port: 0x1008 Nov 12 20:43:09.730583 kernel: system APIC only can use physical flat Nov 12 20:43:09.730588 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Nov 12 20:43:09.730593 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Nov 12 20:43:09.730599 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Nov 12 20:43:09.730604 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Nov 12 20:43:09.730610 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Nov 12 20:43:09.730615 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Nov 12 20:43:09.730622 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Nov 12 20:43:09.730627 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Nov 12 20:43:09.730633 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Nov 12 20:43:09.730638 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Nov 12 20:43:09.730644 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Nov 12 20:43:09.730649 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Nov 12 20:43:09.730655 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Nov 12 20:43:09.730660 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Nov 12 20:43:09.730666 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Nov 12 20:43:09.730672 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Nov 12 20:43:09.730678 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge 
lint[0x1]) Nov 12 20:43:09.730683 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Nov 12 20:43:09.730688 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Nov 12 20:43:09.730694 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Nov 12 20:43:09.730699 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Nov 12 20:43:09.730705 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Nov 12 20:43:09.730711 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Nov 12 20:43:09.730716 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Nov 12 20:43:09.730721 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Nov 12 20:43:09.730728 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Nov 12 20:43:09.730734 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Nov 12 20:43:09.730739 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Nov 12 20:43:09.730744 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Nov 12 20:43:09.730750 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Nov 12 20:43:09.730755 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Nov 12 20:43:09.730761 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Nov 12 20:43:09.730766 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Nov 12 20:43:09.730772 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Nov 12 20:43:09.730777 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Nov 12 20:43:09.730784 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Nov 12 20:43:09.730789 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Nov 12 20:43:09.730795 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Nov 12 20:43:09.730800 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Nov 12 20:43:09.730805 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Nov 12 20:43:09.730811 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge 
lint[0x1]) Nov 12 20:43:09.730816 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Nov 12 20:43:09.730822 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Nov 12 20:43:09.730827 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Nov 12 20:43:09.730834 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Nov 12 20:43:09.730839 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Nov 12 20:43:09.730845 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Nov 12 20:43:09.730850 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Nov 12 20:43:09.730856 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Nov 12 20:43:09.730861 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Nov 12 20:43:09.730867 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Nov 12 20:43:09.730872 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Nov 12 20:43:09.730877 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Nov 12 20:43:09.730883 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Nov 12 20:43:09.730890 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Nov 12 20:43:09.730895 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Nov 12 20:43:09.730900 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Nov 12 20:43:09.730906 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Nov 12 20:43:09.730911 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Nov 12 20:43:09.730917 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Nov 12 20:43:09.730922 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Nov 12 20:43:09.730928 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Nov 12 20:43:09.730933 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Nov 12 20:43:09.730939 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Nov 12 20:43:09.730946 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge 
lint[0x1]) Nov 12 20:43:09.730951 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Nov 12 20:43:09.730956 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Nov 12 20:43:09.730962 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Nov 12 20:43:09.730967 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Nov 12 20:43:09.730973 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Nov 12 20:43:09.730978 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Nov 12 20:43:09.730984 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Nov 12 20:43:09.730989 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Nov 12 20:43:09.730994 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Nov 12 20:43:09.731001 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Nov 12 20:43:09.731006 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Nov 12 20:43:09.731012 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Nov 12 20:43:09.731017 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Nov 12 20:43:09.731023 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Nov 12 20:43:09.731028 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Nov 12 20:43:09.731034 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Nov 12 20:43:09.731039 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Nov 12 20:43:09.731045 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Nov 12 20:43:09.731051 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Nov 12 20:43:09.731056 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Nov 12 20:43:09.731062 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Nov 12 20:43:09.731067 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Nov 12 20:43:09.731073 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Nov 12 20:43:09.731078 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge 
lint[0x1]) Nov 12 20:43:09.731084 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Nov 12 20:43:09.731089 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Nov 12 20:43:09.731095 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Nov 12 20:43:09.731100 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Nov 12 20:43:09.731107 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Nov 12 20:43:09.731112 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Nov 12 20:43:09.731117 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Nov 12 20:43:09.731123 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Nov 12 20:43:09.731128 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Nov 12 20:43:09.731134 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Nov 12 20:43:09.731139 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Nov 12 20:43:09.731144 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Nov 12 20:43:09.731150 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Nov 12 20:43:09.731156 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Nov 12 20:43:09.731162 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Nov 12 20:43:09.731168 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Nov 12 20:43:09.731173 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Nov 12 20:43:09.731179 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Nov 12 20:43:09.731184 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Nov 12 20:43:09.731190 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Nov 12 20:43:09.731195 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Nov 12 20:43:09.731200 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Nov 12 20:43:09.731212 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Nov 12 20:43:09.731219 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge 
lint[0x1]) Nov 12 20:43:09.731224 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Nov 12 20:43:09.731230 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Nov 12 20:43:09.731235 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Nov 12 20:43:09.731241 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Nov 12 20:43:09.731246 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Nov 12 20:43:09.731251 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Nov 12 20:43:09.731257 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Nov 12 20:43:09.731262 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Nov 12 20:43:09.731268 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Nov 12 20:43:09.731274 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Nov 12 20:43:09.731280 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Nov 12 20:43:09.731285 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Nov 12 20:43:09.731291 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Nov 12 20:43:09.731296 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Nov 12 20:43:09.731302 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Nov 12 20:43:09.731307 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Nov 12 20:43:09.731312 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Nov 12 20:43:09.731318 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 12 20:43:09.731325 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Nov 12 20:43:09.731330 kernel: TSC deadline timer available Nov 12 20:43:09.731335 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Nov 12 20:43:09.731341 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Nov 12 20:43:09.731346 kernel: Booting paravirtualized kernel on VMware hypervisor Nov 12 20:43:09.731352 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 
0xffffffff, max_idle_ns: 1910969940391419 ns Nov 12 20:43:09.731358 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Nov 12 20:43:09.731363 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Nov 12 20:43:09.731369 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Nov 12 20:43:09.731376 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Nov 12 20:43:09.731381 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Nov 12 20:43:09.731386 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Nov 12 20:43:09.731399 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Nov 12 20:43:09.731405 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Nov 12 20:43:09.731433 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Nov 12 20:43:09.731441 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Nov 12 20:43:09.731455 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Nov 12 20:43:09.731461 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Nov 12 20:43:09.731468 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Nov 12 20:43:09.731474 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Nov 12 20:43:09.731480 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Nov 12 20:43:09.731485 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Nov 12 20:43:09.731491 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Nov 12 20:43:09.731497 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Nov 12 20:43:09.731502 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Nov 12 20:43:09.731509 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin 
verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:43:09.731516 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 12 20:43:09.731522 kernel: random: crng init done Nov 12 20:43:09.731528 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Nov 12 20:43:09.731534 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Nov 12 20:43:09.731540 kernel: printk: log_buf_len min size: 262144 bytes Nov 12 20:43:09.731545 kernel: printk: log_buf_len: 1048576 bytes Nov 12 20:43:09.731552 kernel: printk: early log buf free: 239648(91%) Nov 12 20:43:09.731558 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 12 20:43:09.731564 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 12 20:43:09.731571 kernel: Fallback order for Node 0: 0 Nov 12 20:43:09.731577 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Nov 12 20:43:09.731583 kernel: Policy zone: DMA32 Nov 12 20:43:09.731588 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 20:43:09.731595 kernel: Memory: 1936324K/2096628K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 160044K reserved, 0K cma-reserved) Nov 12 20:43:09.731603 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Nov 12 20:43:09.731609 kernel: ftrace: allocating 37799 entries in 148 pages Nov 12 20:43:09.731615 kernel: ftrace: allocated 148 pages with 3 groups Nov 12 20:43:09.731620 kernel: Dynamic Preempt: voluntary Nov 12 20:43:09.731626 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 20:43:09.731632 kernel: rcu: RCU event tracing is enabled. Nov 12 20:43:09.731638 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Nov 12 20:43:09.731644 kernel: Trampoline variant of Tasks RCU enabled. 
Nov 12 20:43:09.731650 kernel: Rude variant of Tasks RCU enabled. Nov 12 20:43:09.731656 kernel: Tracing variant of Tasks RCU enabled. Nov 12 20:43:09.731663 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 12 20:43:09.731669 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Nov 12 20:43:09.731675 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Nov 12 20:43:09.731681 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Nov 12 20:43:09.731687 kernel: Console: colour VGA+ 80x25 Nov 12 20:43:09.731693 kernel: printk: console [tty0] enabled Nov 12 20:43:09.731699 kernel: printk: console [ttyS0] enabled Nov 12 20:43:09.731705 kernel: ACPI: Core revision 20230628 Nov 12 20:43:09.731711 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Nov 12 20:43:09.731718 kernel: APIC: Switch to symmetric I/O mode setup Nov 12 20:43:09.731724 kernel: x2apic enabled Nov 12 20:43:09.731730 kernel: APIC: Switched APIC routing to: physical x2apic Nov 12 20:43:09.731736 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 12 20:43:09.731741 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Nov 12 20:43:09.731747 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Nov 12 20:43:09.731753 kernel: Disabled fast string operations Nov 12 20:43:09.731759 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 12 20:43:09.731765 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Nov 12 20:43:09.731772 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 12 20:43:09.731778 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 12 20:43:09.731784 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 12 20:43:09.731790 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Nov 12 20:43:09.731795 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Nov 12 20:43:09.731801 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Nov 12 20:43:09.731807 kernel: RETBleed: Mitigation: Enhanced IBRS Nov 12 20:43:09.731813 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 12 20:43:09.731819 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 12 20:43:09.731826 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 12 20:43:09.731832 kernel: SRBDS: Unknown: Dependent on hypervisor status Nov 12 20:43:09.731838 kernel: GDS: Unknown: Dependent on hypervisor status Nov 12 20:43:09.731844 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 12 20:43:09.731849 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 12 20:43:09.731855 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 12 20:43:09.731861 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 12 20:43:09.731867 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Nov 12 20:43:09.731873 kernel: Freeing SMP alternatives memory: 32K Nov 12 20:43:09.731880 kernel: pid_max: default: 131072 minimum: 1024 Nov 12 20:43:09.731886 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 20:43:09.731892 kernel: landlock: Up and running. Nov 12 20:43:09.731898 kernel: SELinux: Initializing. Nov 12 20:43:09.731904 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 12 20:43:09.731911 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 12 20:43:09.731917 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Nov 12 20:43:09.731922 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 12 20:43:09.731928 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 12 20:43:09.731936 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 12 20:43:09.731942 kernel: Performance Events: Skylake events, core PMU driver. Nov 12 20:43:09.731947 kernel: core: CPUID marked event: 'cpu cycles' unavailable Nov 12 20:43:09.731954 kernel: core: CPUID marked event: 'instructions' unavailable Nov 12 20:43:09.731959 kernel: core: CPUID marked event: 'bus cycles' unavailable Nov 12 20:43:09.731965 kernel: core: CPUID marked event: 'cache references' unavailable Nov 12 20:43:09.731971 kernel: core: CPUID marked event: 'cache misses' unavailable Nov 12 20:43:09.731976 kernel: core: CPUID marked event: 'branch instructions' unavailable Nov 12 20:43:09.731983 kernel: core: CPUID marked event: 'branch misses' unavailable Nov 12 20:43:09.731989 kernel: ... version: 1 Nov 12 20:43:09.731995 kernel: ... bit width: 48 Nov 12 20:43:09.732000 kernel: ... generic registers: 4 Nov 12 20:43:09.732006 kernel: ... value mask: 0000ffffffffffff Nov 12 20:43:09.732012 kernel: ... 
max period: 000000007fffffff Nov 12 20:43:09.732018 kernel: ... fixed-purpose events: 0 Nov 12 20:43:09.732023 kernel: ... event mask: 000000000000000f Nov 12 20:43:09.732029 kernel: signal: max sigframe size: 1776 Nov 12 20:43:09.732036 kernel: rcu: Hierarchical SRCU implementation. Nov 12 20:43:09.732042 kernel: rcu: Max phase no-delay instances is 400. Nov 12 20:43:09.732048 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 12 20:43:09.732054 kernel: smp: Bringing up secondary CPUs ... Nov 12 20:43:09.732060 kernel: smpboot: x86: Booting SMP configuration: Nov 12 20:43:09.732065 kernel: .... node #0, CPUs: #1 Nov 12 20:43:09.732071 kernel: Disabled fast string operations Nov 12 20:43:09.732077 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Nov 12 20:43:09.732083 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Nov 12 20:43:09.732089 kernel: smp: Brought up 1 node, 2 CPUs Nov 12 20:43:09.732096 kernel: smpboot: Max logical packages: 128 Nov 12 20:43:09.732102 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Nov 12 20:43:09.732107 kernel: devtmpfs: initialized Nov 12 20:43:09.732113 kernel: x86/mm: Memory block size: 128MB Nov 12 20:43:09.732119 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Nov 12 20:43:09.732125 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 20:43:09.732131 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Nov 12 20:43:09.732137 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 20:43:09.732143 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 20:43:09.732150 kernel: audit: initializing netlink subsys (disabled) Nov 12 20:43:09.732156 kernel: audit: type=2000 audit(1731444188.067:1): state=initialized audit_enabled=0 res=1 Nov 12 20:43:09.732162 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 20:43:09.732167 
kernel: thermal_sys: Registered thermal governor 'user_space' Nov 12 20:43:09.732173 kernel: cpuidle: using governor menu Nov 12 20:43:09.732179 kernel: Simple Boot Flag at 0x36 set to 0x80 Nov 12 20:43:09.732185 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 20:43:09.732190 kernel: dca service started, version 1.12.1 Nov 12 20:43:09.732196 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Nov 12 20:43:09.732243 kernel: PCI: Using configuration type 1 for base access Nov 12 20:43:09.732250 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 12 20:43:09.732256 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 12 20:43:09.732262 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 12 20:43:09.732268 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 20:43:09.732274 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 20:43:09.732280 kernel: ACPI: Added _OSI(Module Device) Nov 12 20:43:09.732285 kernel: ACPI: Added _OSI(Processor Device) Nov 12 20:43:09.732291 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 20:43:09.732299 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 20:43:09.732305 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 12 20:43:09.732311 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Nov 12 20:43:09.732317 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 12 20:43:09.732322 kernel: ACPI: Interpreter enabled Nov 12 20:43:09.732328 kernel: ACPI: PM: (supports S0 S1 S5) Nov 12 20:43:09.732334 kernel: ACPI: Using IOAPIC for interrupt routing Nov 12 20:43:09.732340 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 12 20:43:09.732346 kernel: PCI: Using E820 reservations for host bridge windows Nov 12 20:43:09.732354 kernel: ACPI: Enabled 4 
GPEs in block 00 to 0F Nov 12 20:43:09.732360 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Nov 12 20:43:09.732504 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 12 20:43:09.732581 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Nov 12 20:43:09.732636 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Nov 12 20:43:09.732645 kernel: PCI host bridge to bus 0000:00 Nov 12 20:43:09.732696 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 12 20:43:09.732744 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Nov 12 20:43:09.732788 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 12 20:43:09.732831 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 12 20:43:09.732874 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Nov 12 20:43:09.732917 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Nov 12 20:43:09.732974 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Nov 12 20:43:09.733032 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Nov 12 20:43:09.733085 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Nov 12 20:43:09.733142 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Nov 12 20:43:09.733191 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Nov 12 20:43:09.733281 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Nov 12 20:43:09.733357 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Nov 12 20:43:09.733409 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Nov 12 20:43:09.733460 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Nov 12 20:43:09.733513 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Nov 12 20:43:09.733562 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed 
by PIIX4 ACPI Nov 12 20:43:09.733611 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Nov 12 20:43:09.733662 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Nov 12 20:43:09.733710 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Nov 12 20:43:09.733761 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Nov 12 20:43:09.733812 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Nov 12 20:43:09.733889 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Nov 12 20:43:09.733945 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Nov 12 20:43:09.733993 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Nov 12 20:43:09.734043 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Nov 12 20:43:09.734091 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 12 20:43:09.734146 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Nov 12 20:43:09.734227 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.734279 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.734333 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.734383 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.734440 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.734494 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.734549 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.734599 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.734652 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.734702 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.734755 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.734807 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot 
D3cold Nov 12 20:43:09.734860 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.734910 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.734963 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735012 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.735065 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735117 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.735172 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735231 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.735285 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735334 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.735390 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735442 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.735495 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735544 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.735596 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735646 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.735698 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735751 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.735804 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735854 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.735906 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.735956 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736008 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.736060 kernel: pci 0000:00:17.1: 
PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736113 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.736163 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736226 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.736279 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736334 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.736387 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736440 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.736489 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736542 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.736592 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736645 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.736694 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736749 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.736799 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736851 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.736900 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.736953 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.737003 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.737057 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.737107 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.737162 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.737224 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.737280 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Nov 12 
20:43:09.737330 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.737386 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.737441 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.737494 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Nov 12 20:43:09.737544 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Nov 12 20:43:09.737595 kernel: pci_bus 0000:01: extended config space not accessible Nov 12 20:43:09.737645 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 12 20:43:09.737696 kernel: pci_bus 0000:02: extended config space not accessible Nov 12 20:43:09.737707 kernel: acpiphp: Slot [32] registered Nov 12 20:43:09.737714 kernel: acpiphp: Slot [33] registered Nov 12 20:43:09.737720 kernel: acpiphp: Slot [34] registered Nov 12 20:43:09.737726 kernel: acpiphp: Slot [35] registered Nov 12 20:43:09.737731 kernel: acpiphp: Slot [36] registered Nov 12 20:43:09.737737 kernel: acpiphp: Slot [37] registered Nov 12 20:43:09.737743 kernel: acpiphp: Slot [38] registered Nov 12 20:43:09.737749 kernel: acpiphp: Slot [39] registered Nov 12 20:43:09.737756 kernel: acpiphp: Slot [40] registered Nov 12 20:43:09.737762 kernel: acpiphp: Slot [41] registered Nov 12 20:43:09.737768 kernel: acpiphp: Slot [42] registered Nov 12 20:43:09.737773 kernel: acpiphp: Slot [43] registered Nov 12 20:43:09.737779 kernel: acpiphp: Slot [44] registered Nov 12 20:43:09.737785 kernel: acpiphp: Slot [45] registered Nov 12 20:43:09.737790 kernel: acpiphp: Slot [46] registered Nov 12 20:43:09.737796 kernel: acpiphp: Slot [47] registered Nov 12 20:43:09.737802 kernel: acpiphp: Slot [48] registered Nov 12 20:43:09.737807 kernel: acpiphp: Slot [49] registered Nov 12 20:43:09.737815 kernel: acpiphp: Slot [50] registered Nov 12 20:43:09.737820 kernel: acpiphp: Slot [51] registered Nov 12 20:43:09.737826 kernel: acpiphp: Slot [52] registered Nov 12 20:43:09.737832 kernel: acpiphp: Slot [53] registered 
Nov 12 20:43:09.737838 kernel: acpiphp: Slot [54] registered Nov 12 20:43:09.737843 kernel: acpiphp: Slot [55] registered Nov 12 20:43:09.737849 kernel: acpiphp: Slot [56] registered Nov 12 20:43:09.737855 kernel: acpiphp: Slot [57] registered Nov 12 20:43:09.737861 kernel: acpiphp: Slot [58] registered Nov 12 20:43:09.737868 kernel: acpiphp: Slot [59] registered Nov 12 20:43:09.737874 kernel: acpiphp: Slot [60] registered Nov 12 20:43:09.737880 kernel: acpiphp: Slot [61] registered Nov 12 20:43:09.737886 kernel: acpiphp: Slot [62] registered Nov 12 20:43:09.737891 kernel: acpiphp: Slot [63] registered Nov 12 20:43:09.737940 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Nov 12 20:43:09.737989 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 12 20:43:09.738038 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 12 20:43:09.738087 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 12 20:43:09.738138 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Nov 12 20:43:09.738188 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Nov 12 20:43:09.738569 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Nov 12 20:43:09.738624 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Nov 12 20:43:09.738675 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Nov 12 20:43:09.738730 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Nov 12 20:43:09.738782 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Nov 12 20:43:09.738836 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Nov 12 20:43:09.738887 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Nov 12 20:43:09.738938 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 12 
20:43:09.738987 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Nov 12 20:43:09.739037 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 12 20:43:09.739086 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 12 20:43:09.739135 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 12 20:43:09.739187 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 12 20:43:09.739244 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 12 20:43:09.739293 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 12 20:43:09.739342 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 12 20:43:09.739392 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 12 20:43:09.739441 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 12 20:43:09.739490 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 12 20:43:09.739538 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 12 20:43:09.739592 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 12 20:43:09.739641 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 12 20:43:09.739689 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 12 20:43:09.739739 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 12 20:43:09.739787 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 12 20:43:09.739836 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 12 20:43:09.739888 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 12 20:43:09.739937 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 12 20:43:09.739987 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 12 20:43:09.740037 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 12 20:43:09.740086 kernel: pci 0000:00:15.6: bridge window [mem 
0xfbd00000-0xfbdfffff] Nov 12 20:43:09.740135 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 12 20:43:09.740187 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 12 20:43:09.740265 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 12 20:43:09.740315 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 12 20:43:09.740370 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Nov 12 20:43:09.740422 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Nov 12 20:43:09.740472 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Nov 12 20:43:09.740521 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Nov 12 20:43:09.740570 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Nov 12 20:43:09.740624 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Nov 12 20:43:09.740675 kernel: pci 0000:0b:00.0: supports D1 D2 Nov 12 20:43:09.740725 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 12 20:43:09.740776 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Nov 12 20:43:09.740826 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 12 20:43:09.740879 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 12 20:43:09.740929 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 12 20:43:09.740979 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 12 20:43:09.741030 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 12 20:43:09.741080 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 12 20:43:09.741130 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 12 20:43:09.741180 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 12 20:43:09.741243 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 12 20:43:09.741294 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 12 20:43:09.741344 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 12 20:43:09.741400 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 12 20:43:09.741450 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 12 20:43:09.741499 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 12 20:43:09.741549 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 12 20:43:09.741599 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 12 20:43:09.741647 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 12 20:43:09.741697 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 12 20:43:09.741745 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 12 20:43:09.741796 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 12 20:43:09.741846 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 12 20:43:09.741895 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 12 20:43:09.741944 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 12 20:43:09.741994 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 12 20:43:09.742044 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 12 20:43:09.742093 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 12 20:43:09.742143 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 12 20:43:09.742194 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 12 20:43:09.742265 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 12 20:43:09.742315 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 12 20:43:09.742366 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 12 20:43:09.742415 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 12 20:43:09.742464 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 12 20:43:09.742513 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 12 20:43:09.742563 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 12 20:43:09.742615 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 12 20:43:09.742664 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 12 20:43:09.742712 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 12 20:43:09.742762 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 12 20:43:09.742811 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 12 20:43:09.742860 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 12 20:43:09.742910 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 12 20:43:09.742962 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 12 20:43:09.743011 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 12 20:43:09.743060 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 12 20:43:09.743109 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 12 20:43:09.743158 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 12 20:43:09.743245 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 12 20:43:09.743299 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 12 20:43:09.743347 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 12 20:43:09.743399 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 12 20:43:09.743448 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 12 20:43:09.743497 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 12 20:43:09.743546 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 12 20:43:09.743595 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 12 20:43:09.743652 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 12 20:43:09.743703 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 12 20:43:09.743999 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 12 20:43:09.744054 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 12 20:43:09.744103 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 12 20:43:09.744152 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 12 20:43:09.744267 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 12 20:43:09.744317 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 12 20:43:09.744364 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 12 20:43:09.744416 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 12 20:43:09.744463 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 12 20:43:09.744512 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 12 20:43:09.744560 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 12 
20:43:09.744607 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 12 20:43:09.744654 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 12 20:43:09.744703 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 12 20:43:09.744751 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 12 20:43:09.744798 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 12 20:43:09.744854 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 12 20:43:09.744904 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 12 20:43:09.744951 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 12 20:43:09.745000 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 12 20:43:09.745047 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 12 20:43:09.745093 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 12 20:43:09.745101 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Nov 12 20:43:09.745107 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Nov 12 20:43:09.745113 kernel: ACPI: PCI: Interrupt link LNKB disabled Nov 12 20:43:09.745121 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 12 20:43:09.745126 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Nov 12 20:43:09.745132 kernel: iommu: Default domain type: Translated Nov 12 20:43:09.745137 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 12 20:43:09.745143 kernel: PCI: Using ACPI for IRQ routing Nov 12 20:43:09.745149 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 12 20:43:09.745154 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Nov 12 20:43:09.745160 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Nov 12 20:43:09.745278 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Nov 12 20:43:09.745330 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Nov 12 20:43:09.745378 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 12 20:43:09.745386 kernel: vgaarb: loaded Nov 12 20:43:09.745392 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Nov 12 20:43:09.745398 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Nov 12 20:43:09.745403 kernel: clocksource: Switched to clocksource tsc-early Nov 12 20:43:09.745409 kernel: VFS: Disk quotas dquot_6.6.0 Nov 12 20:43:09.745415 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 12 20:43:09.745421 kernel: pnp: PnP ACPI init Nov 12 20:43:09.745475 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Nov 12 20:43:09.745521 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Nov 12 20:43:09.745563 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Nov 12 20:43:09.745610 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Nov 12 20:43:09.745655 kernel: pnp 00:06: [dma 2] Nov 12 20:43:09.745702 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Nov 12 20:43:09.745749 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Nov 12 20:43:09.745803 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Nov 12 20:43:09.745812 kernel: pnp: PnP ACPI: found 8 devices Nov 12 20:43:09.745818 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 12 20:43:09.745824 kernel: NET: Registered PF_INET protocol family Nov 12 20:43:09.745830 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 12 20:43:09.745836 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 12 20:43:09.745841 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 12 20:43:09.745847 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 12 20:43:09.745855 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 12 20:43:09.745864 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 12 20:43:09.745870 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 12 20:43:09.745876 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 12 20:43:09.749926 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 12 20:43:09.749938 kernel: NET: Registered PF_XDP protocol family Nov 12 20:43:09.750000 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Nov 12 20:43:09.750054 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 12 20:43:09.750107 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 12 20:43:09.750158 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 12 20:43:09.750215 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 12 20:43:09.750264 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Nov 12 20:43:09.750313 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Nov 12 20:43:09.750362 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Nov 12 20:43:09.750467 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Nov 12 20:43:09.750515 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Nov 12 20:43:09.750563 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Nov 12 20:43:09.750612 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Nov 12 20:43:09.750660 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Nov 12 
20:43:09.750711 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Nov 12 20:43:09.750759 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Nov 12 20:43:09.750807 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Nov 12 20:43:09.750856 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Nov 12 20:43:09.750905 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Nov 12 20:43:09.750969 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Nov 12 20:43:09.751285 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Nov 12 20:43:09.751360 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Nov 12 20:43:09.751616 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Nov 12 20:43:09.751685 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Nov 12 20:43:09.751735 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Nov 12 20:43:09.751784 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Nov 12 20:43:09.751836 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.751884 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.751932 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.751980 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752028 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752076 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752124 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752171 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Nov 
12 20:43:09.752232 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752284 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752332 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752380 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752428 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752476 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752523 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752570 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752617 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752667 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752715 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752762 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752809 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752856 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752904 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.752951 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.752998 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.753048 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.753095 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.753143 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.753191 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.753592 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.753646 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.753695 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.753743 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.753794 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.753859 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.753907 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.753955 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.754003 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.754051 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.754099 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.754147 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.754198 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.754308 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.754357 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.754408 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.754492 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.754539 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.754587 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.754633 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.754681 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.754732 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.754780 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.754827 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Nov 12 20:43:09.754874 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.754921 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.754968 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.757238 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.757299 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.757351 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.757423 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.757494 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.757568 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.759645 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.759701 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.759752 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.759801 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.759850 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.759898 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.759947 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.759999 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.760047 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.760095 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.760143 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.760191 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.760251 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.760301 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.760349 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.760397 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.760490 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.760537 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.760586 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.760635 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.760683 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Nov 12 20:43:09.760731 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Nov 12 20:43:09.760780 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 12 20:43:09.760829 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Nov 12 20:43:09.760877 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 12 20:43:09.760925 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 12 20:43:09.760975 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 12 20:43:09.761027 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Nov 12 20:43:09.761077 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 12 20:43:09.761126 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 12 20:43:09.761174 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 12 20:43:09.761229 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Nov 12 20:43:09.761279 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 12 20:43:09.761327 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 12 20:43:09.761378 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 12 20:43:09.761427 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 12 
20:43:09.761476 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 12 20:43:09.761525 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 12 20:43:09.761573 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 12 20:43:09.761620 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 12 20:43:09.761669 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 12 20:43:09.761717 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 12 20:43:09.761764 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 12 20:43:09.761814 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 12 20:43:09.761863 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 12 20:43:09.761911 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 12 20:43:09.761962 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 12 20:43:09.762009 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 12 20:43:09.762056 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 12 20:43:09.762106 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 12 20:43:09.762155 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 12 20:43:09.762209 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 12 20:43:09.762263 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 12 20:43:09.762312 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 12 20:43:09.762360 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 12 20:43:09.762416 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Nov 12 20:43:09.762466 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 12 20:43:09.762514 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 12 20:43:09.762566 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Nov 12 20:43:09.762614 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Nov 12 20:43:09.762663 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 12 20:43:09.762711 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 12 20:43:09.762759 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 12 20:43:09.762813 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 12 20:43:09.762865 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 12 20:43:09.762914 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 12 20:43:09.762963 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 12 20:43:09.763011 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 12 20:43:09.763062 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 12 20:43:09.763111 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 12 20:43:09.763159 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 12 20:43:09.763219 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 12 20:43:09.763272 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 12 20:43:09.763320 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 12 20:43:09.763369 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 12 20:43:09.763417 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 12 20:43:09.763465 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 12 20:43:09.763516 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 12 20:43:09.763564 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 12 20:43:09.763612 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 12 20:43:09.763661 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 12 20:43:09.763709 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Nov 12 20:43:09.763758 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 12 20:43:09.763807 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 12 20:43:09.763856 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 12 20:43:09.763904 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 12 20:43:09.763952 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 12 20:43:09.764004 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 12 20:43:09.764052 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 12 20:43:09.764100 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 12 20:43:09.764148 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 12 20:43:09.764197 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 12 20:43:09.764265 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 12 20:43:09.764314 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 12 20:43:09.764363 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 12 20:43:09.764411 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 12 20:43:09.764463 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 12 20:43:09.764512 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 12 20:43:09.764561 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 12 20:43:09.764610 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 12 20:43:09.764659 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 12 20:43:09.764708 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 12 20:43:09.764756 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 12 20:43:09.764804 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 12 
20:43:09.764852 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 12 20:43:09.764901 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 12 20:43:09.764952 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 12 20:43:09.765000 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 12 20:43:09.765048 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 12 20:43:09.765096 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 12 20:43:09.765146 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 12 20:43:09.765194 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 12 20:43:09.765256 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 12 20:43:09.765305 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 12 20:43:09.765355 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 12 20:43:09.765411 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 12 20:43:09.765459 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 12 20:43:09.765508 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 12 20:43:09.765570 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 12 20:43:09.765627 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 12 20:43:09.765676 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 12 20:43:09.765726 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 12 20:43:09.765774 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 12 20:43:09.765823 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 12 20:43:09.765871 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 12 20:43:09.765922 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 12 20:43:09.765970 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Nov 12 20:43:09.766019 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 12 20:43:09.766068 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 12 20:43:09.766116 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 12 20:43:09.766165 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 12 20:43:09.766298 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 12 20:43:09.766349 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 12 20:43:09.766397 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 12 20:43:09.766448 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 12 20:43:09.766496 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 12 20:43:09.766543 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Nov 12 20:43:09.766587 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 12 20:43:09.766629 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 12 20:43:09.766672 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Nov 12 20:43:09.766715 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Nov 12 20:43:09.766761 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Nov 12 20:43:09.766808 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Nov 12 20:43:09.766852 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 12 20:43:09.766896 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Nov 12 20:43:09.766940 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 12 20:43:09.766983 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 12 20:43:09.767027 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Nov 12 20:43:09.767070 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Nov 12 20:43:09.767119 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Nov 12 20:43:09.767166 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Nov 12 20:43:09.767220 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Nov 12 20:43:09.767271 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Nov 12 20:43:09.767316 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Nov 12 20:43:09.767361 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Nov 12 20:43:09.767411 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Nov 12 20:43:09.767459 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Nov 12 20:43:09.767503 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Nov 12 20:43:09.767550 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Nov 12 20:43:09.767595 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Nov 12 20:43:09.767643 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Nov 12 20:43:09.767688 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 12 20:43:09.767735 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Nov 12 20:43:09.767783 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Nov 12 20:43:09.767831 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Nov 12 20:43:09.767876 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Nov 12 20:43:09.767926 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Nov 12 20:43:09.767978 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Nov 12 20:43:09.768030 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Nov 12 20:43:09.768076 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Nov 12 20:43:09.768120 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Nov 12 20:43:09.768177 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Nov 12 20:43:09.768320 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Nov 12 20:43:09.768394 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Nov 12 20:43:09.768450 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Nov 12 20:43:09.768517 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Nov 12 20:43:09.768565 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Nov 12 20:43:09.768618 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Nov 12 20:43:09.768663 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 12 20:43:09.768711 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Nov 12 20:43:09.768756 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 12 20:43:09.768807 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Nov 12 20:43:09.768852 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Nov 12 20:43:09.768900 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Nov 12 20:43:09.768946 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Nov 12 20:43:09.768993 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Nov 12 20:43:09.769038 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 12 20:43:09.769087 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Nov 12 20:43:09.769152 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Nov 12 20:43:09.769226 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 12 20:43:09.769275 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Nov 12 20:43:09.769320 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Nov 12 20:43:09.769363 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Nov 12 20:43:09.769429 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Nov 12 20:43:09.769477 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Nov 12 20:43:09.769523 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Nov 12 20:43:09.769575 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Nov 12 20:43:09.769621 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 12 20:43:09.769670 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Nov 12 20:43:09.769716 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 12 20:43:09.769780 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Nov 12 20:43:09.769828 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Nov 12 20:43:09.769877 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Nov 12 20:43:09.769922 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Nov 12 20:43:09.769972 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Nov 12 20:43:09.770017 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 12 20:43:09.770068 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Nov 12 20:43:09.770113 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Nov 12 20:43:09.770157 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Nov 12 20:43:09.770232 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Nov 12 20:43:09.770279 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Nov 12 20:43:09.770340 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Nov 12 20:43:09.770388 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Nov 12 20:43:09.770463 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Nov 12 20:43:09.770516 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Nov 12 20:43:09.770562 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Nov 12 20:43:09.770612 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Nov 12 20:43:09.770658 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Nov 12 20:43:09.770707 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Nov 12 20:43:09.770757 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Nov 12 20:43:09.770807 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Nov 12 20:43:09.770853 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Nov 12 20:43:09.770902 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Nov 12 20:43:09.770948 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 12 20:43:09.771003 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 12 20:43:09.771014 kernel: PCI: CLS 32 bytes, default 64 Nov 12 20:43:09.771021 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 12 20:43:09.771028 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Nov 12 20:43:09.771035 kernel: clocksource: Switched to clocksource tsc Nov 12 20:43:09.771041 kernel: Initialise system trusted keyrings Nov 12 20:43:09.771047 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 12 20:43:09.771054 kernel: Key type asymmetric registered Nov 12 20:43:09.771060 kernel: Asymmetric key parser 'x509' registered Nov 12 20:43:09.771066 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 12 20:43:09.771073 kernel: io scheduler mq-deadline registered Nov 12 20:43:09.771080 kernel: io scheduler kyber registered Nov 12 20:43:09.771087 kernel: io scheduler bfq registered Nov 12 20:43:09.771138 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Nov 12 20:43:09.771191 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.771262 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Nov 12 20:43:09.771316 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.771367 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Nov 12 20:43:09.771423 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.771477 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Nov 12 20:43:09.771528 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.771579 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Nov 12 20:43:09.771630 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.771682 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Nov 12 20:43:09.771736 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.771787 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Nov 12 20:43:09.771837 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.771888 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Nov 12 20:43:09.771938 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.771990 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Nov 12 20:43:09.772041 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.772093 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Nov 12 20:43:09.772143 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.772194 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Nov 12 20:43:09.772285 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.772337 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Nov 12 20:43:09.772390 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.772441 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Nov 12 20:43:09.772492 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.772541 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Nov 12 20:43:09.772592 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.772643 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Nov 12 20:43:09.772695 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.772746 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Nov 12 20:43:09.772797 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.772848 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Nov 12 20:43:09.772898 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.772950 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Nov 12 20:43:09.773000 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.773051 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Nov 12 20:43:09.773100 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.773151 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Nov 12 20:43:09.773262 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.773322 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Nov 12 20:43:09.773375 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.773426 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Nov 12 20:43:09.773476 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.773526 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Nov 12 20:43:09.773576 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.773629 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Nov 12 20:43:09.773679 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.773731 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Nov 12 20:43:09.773781 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.773832 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Nov 12 20:43:09.773883 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.773934 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Nov 12 20:43:09.773987 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.774037 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Nov 12 20:43:09.774087 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.774139 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Nov 12 20:43:09.774189 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.774249 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Nov 12 20:43:09.774299 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.774349 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Nov 12 20:43:09.774403 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.774454 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Nov 12 20:43:09.774506 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 12 20:43:09.774515 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Nov 12 20:43:09.774522 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 12 20:43:09.774529 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 12 20:43:09.774535 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Nov 12 20:43:09.774542 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 12 20:43:09.774548 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 12 20:43:09.774599 kernel: rtc_cmos 00:01: registered as rtc0 Nov 12 20:43:09.774648 kernel: rtc_cmos 00:01: setting system clock to 2024-11-12T20:43:09 UTC (1731444189) Nov 12 20:43:09.774694 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Nov 12 20:43:09.774703 kernel: intel_pstate: CPU model not supported Nov 12 20:43:09.774709 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 12 20:43:09.774716 kernel: NET: Registered PF_INET6 protocol family Nov 12 20:43:09.774722 kernel: Segment Routing with IPv6 Nov 12 20:43:09.774729 kernel: In-situ OAM (IOAM) with IPv6 Nov 12 20:43:09.774735 kernel: NET: Registered PF_PACKET protocol family Nov 12 20:43:09.774743 kernel: Key type dns_resolver registered Nov 12 20:43:09.774749 kernel: IPI shorthand broadcast: enabled Nov 12 20:43:09.774756 kernel: sched_clock: Marking stable (888014553, 227205661)->(1175789926, -60569712) Nov 12 20:43:09.774762 kernel: registered taskstats version 1 Nov 12 20:43:09.774768 kernel: Loading compiled-in X.509 certificates Nov 12 20:43:09.774775 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a' Nov 12 20:43:09.774781 kernel: Key type .fscrypt registered Nov 12 20:43:09.774788 kernel: Key type fscrypt-provisioning registered Nov 12 20:43:09.774794 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 12 20:43:09.774801 kernel: ima: Allocated hash algorithm: sha1 Nov 12 20:43:09.774808 kernel: ima: No architecture policies found Nov 12 20:43:09.774814 kernel: clk: Disabling unused clocks Nov 12 20:43:09.774820 kernel: Freeing unused kernel image (initmem) memory: 42828K Nov 12 20:43:09.774826 kernel: Write protecting the kernel read-only data: 36864k Nov 12 20:43:09.774833 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Nov 12 20:43:09.774839 kernel: Run /init as init process Nov 12 20:43:09.774845 kernel: with arguments: Nov 12 20:43:09.774852 kernel: /init Nov 12 20:43:09.774859 kernel: with environment: Nov 12 20:43:09.774865 kernel: HOME=/ Nov 12 20:43:09.774871 kernel: TERM=linux Nov 12 20:43:09.774877 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 12 20:43:09.774885 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 20:43:09.774893 systemd[1]: Detected virtualization vmware. Nov 12 20:43:09.774900 systemd[1]: Detected architecture x86-64. Nov 12 20:43:09.774906 systemd[1]: Running in initrd. Nov 12 20:43:09.774914 systemd[1]: No hostname configured, using default hostname. Nov 12 20:43:09.774920 systemd[1]: Hostname set to . Nov 12 20:43:09.774926 systemd[1]: Initializing machine ID from random generator. Nov 12 20:43:09.774933 systemd[1]: Queued start job for default target initrd.target. Nov 12 20:43:09.774939 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:43:09.774946 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Nov 12 20:43:09.774953 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 12 20:43:09.774960 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 20:43:09.774968 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 12 20:43:09.774975 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 12 20:43:09.774982 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 12 20:43:09.774988 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 12 20:43:09.774995 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:43:09.775002 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:43:09.775009 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:43:09.775016 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:43:09.775023 systemd[1]: Reached target swap.target - Swaps. Nov 12 20:43:09.775029 systemd[1]: Reached target timers.target - Timer Units. Nov 12 20:43:09.775037 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:43:09.775043 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:43:09.775050 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 20:43:09.775056 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 20:43:09.775063 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:43:09.775070 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 20:43:09.775077 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Nov 12 20:43:09.775083 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 20:43:09.775090 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 12 20:43:09.775096 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 20:43:09.775103 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 12 20:43:09.775109 systemd[1]: Starting systemd-fsck-usr.service... Nov 12 20:43:09.775116 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 20:43:09.775122 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 20:43:09.775131 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:43:09.775148 systemd-journald[216]: Collecting audit messages is disabled. Nov 12 20:43:09.775165 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 12 20:43:09.775172 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:43:09.775180 systemd[1]: Finished systemd-fsck-usr.service. Nov 12 20:43:09.775187 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 12 20:43:09.775194 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 20:43:09.775200 kernel: Bridge firewalling registered Nov 12 20:43:09.777256 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 20:43:09.777265 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:43:09.777272 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:43:09.777279 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Nov 12 20:43:09.777291 systemd-journald[216]: Journal started Nov 12 20:43:09.777307 systemd-journald[216]: Runtime Journal (/run/log/journal/954c29c8f3dd49fb947330e1b1c0f7b1) is 4.8M, max 38.6M, 33.8M free. Nov 12 20:43:09.777337 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:43:09.732580 systemd-modules-load[217]: Inserted module 'overlay' Nov 12 20:43:09.759640 systemd-modules-load[217]: Inserted module 'br_netfilter' Nov 12 20:43:09.781228 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:43:09.784229 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 20:43:09.786386 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:43:09.787502 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:43:09.791290 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 12 20:43:09.793129 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 20:43:09.793532 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:43:09.797832 dracut-cmdline[244]: dracut-dracut-053 Nov 12 20:43:09.799451 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:43:09.799933 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:43:09.803285 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 12 20:43:09.819609 systemd-resolved[262]: Positive Trust Anchors: Nov 12 20:43:09.819618 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 20:43:09.819639 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 20:43:09.822112 systemd-resolved[262]: Defaulting to hostname 'linux'. Nov 12 20:43:09.822682 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 20:43:09.822818 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:43:09.843221 kernel: SCSI subsystem initialized Nov 12 20:43:09.849214 kernel: Loading iSCSI transport class v2.0-870. Nov 12 20:43:09.855214 kernel: iscsi: registered transport (tcp) Nov 12 20:43:09.867406 kernel: iscsi: registered transport (qla4xxx) Nov 12 20:43:09.867425 kernel: QLogic iSCSI HBA Driver Nov 12 20:43:09.886933 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 12 20:43:09.891291 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 12 20:43:09.905648 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 12 20:43:09.906236 kernel: device-mapper: uevent: version 1.0.3 Nov 12 20:43:09.906245 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 12 20:43:09.937218 kernel: raid6: avx2x4 gen() 53854 MB/s Nov 12 20:43:09.954248 kernel: raid6: avx2x2 gen() 54375 MB/s Nov 12 20:43:09.971415 kernel: raid6: avx2x1 gen() 45933 MB/s Nov 12 20:43:09.971435 kernel: raid6: using algorithm avx2x2 gen() 54375 MB/s Nov 12 20:43:09.989465 kernel: raid6: .... xor() 31916 MB/s, rmw enabled Nov 12 20:43:09.989486 kernel: raid6: using avx2x2 recovery algorithm Nov 12 20:43:10.002215 kernel: xor: automatically using best checksumming function avx Nov 12 20:43:10.101351 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 12 20:43:10.106601 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:43:10.111356 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:43:10.119020 systemd-udevd[433]: Using default interface naming scheme 'v255'. Nov 12 20:43:10.121556 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:43:10.127415 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 12 20:43:10.134425 dracut-pre-trigger[441]: rd.md=0: removing MD RAID activation Nov 12 20:43:10.150674 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 20:43:10.154322 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 20:43:10.222907 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:43:10.225642 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 12 20:43:10.240225 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 12 20:43:10.241008 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Nov 12 20:43:10.241368 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:43:10.241643 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 20:43:10.245288 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 12 20:43:10.252716 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:43:10.291223 kernel: VMware PVSCSI driver - version 1.0.7.0-k Nov 12 20:43:10.292234 kernel: vmw_pvscsi: using 64bit dma Nov 12 20:43:10.296217 kernel: vmw_pvscsi: max_id: 16 Nov 12 20:43:10.296234 kernel: vmw_pvscsi: setting ring_pages to 8 Nov 12 20:43:10.299021 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Nov 12 20:43:10.299038 kernel: vmw_pvscsi: enabling reqCallThreshold Nov 12 20:43:10.299046 kernel: vmw_pvscsi: driver-based request coalescing enabled Nov 12 20:43:10.299054 kernel: vmw_pvscsi: using MSI-X Nov 12 20:43:10.300681 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Nov 12 20:43:10.310198 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Nov 12 20:43:10.310288 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Nov 12 20:43:10.310381 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Nov 12 20:43:10.310449 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Nov 12 20:43:10.320213 kernel: cryptd: max_cpu_qlen set to 1000 Nov 12 20:43:10.327247 kernel: libata version 3.00 loaded. 
Nov 12 20:43:10.328215 kernel: ata_piix 0000:00:07.1: version 2.13 Nov 12 20:43:10.336825 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Nov 12 20:43:10.336908 kernel: scsi host1: ata_piix Nov 12 20:43:10.336980 kernel: scsi host2: ata_piix Nov 12 20:43:10.337039 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Nov 12 20:43:10.337048 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Nov 12 20:43:10.338020 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:43:10.338093 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:43:10.340290 kernel: AVX2 version of gcm_enc/dec engaged. Nov 12 20:43:10.340305 kernel: AES CTR mode by8 optimization enabled Nov 12 20:43:10.339534 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 20:43:10.339622 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:43:10.339703 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:43:10.340348 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:43:10.346440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:43:10.359487 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:43:10.364304 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 20:43:10.371385 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 12 20:43:10.507231 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Nov 12 20:43:10.513291 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Nov 12 20:43:10.529227 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Nov 12 20:43:10.580560 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 12 20:43:10.580661 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Nov 12 20:43:10.580745 kernel: sd 0:0:0:0: [sda] Cache data unavailable Nov 12 20:43:10.580825 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Nov 12 20:43:10.580911 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Nov 12 20:43:10.581003 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 12 20:43:10.581018 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 12 20:43:10.581098 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 12 20:43:10.581109 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 12 20:43:10.766330 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Nov 12 20:43:10.772318 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (484) Nov 12 20:43:10.772334 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (498) Nov 12 20:43:10.770177 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Nov 12 20:43:10.775300 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Nov 12 20:43:10.775547 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Nov 12 20:43:10.778241 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Nov 12 20:43:10.787306 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Nov 12 20:43:10.812216 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 12 20:43:10.816555 kernel: GPT:disk_guids don't match. Nov 12 20:43:10.816588 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 12 20:43:10.816597 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 12 20:43:10.822216 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 12 20:43:11.821230 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 12 20:43:11.821764 disk-uuid[597]: The operation has completed successfully. Nov 12 20:43:11.856530 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 12 20:43:11.856866 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 12 20:43:11.861302 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 12 20:43:11.863765 sh[614]: Success Nov 12 20:43:11.872220 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 12 20:43:11.905264 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 12 20:43:11.910163 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 12 20:43:11.910409 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 12 20:43:11.931241 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77 Nov 12 20:43:11.931285 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:43:11.931294 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 12 20:43:11.931301 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 12 20:43:11.931427 kernel: BTRFS info (device dm-0): using free space tree Nov 12 20:43:11.942220 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 12 20:43:11.944095 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 12 20:43:11.953360 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... 
Nov 12 20:43:11.954808 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 12 20:43:12.025007 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:43:12.025058 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:43:12.025067 kernel: BTRFS info (device sda6): using free space tree Nov 12 20:43:12.098228 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 12 20:43:12.113424 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 12 20:43:12.115217 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:43:12.121527 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 12 20:43:12.125325 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 12 20:43:12.270066 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 12 20:43:12.274343 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 12 20:43:12.328345 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:43:12.332275 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 20:43:12.344748 systemd-networkd[808]: lo: Link UP Nov 12 20:43:12.344752 systemd-networkd[808]: lo: Gained carrier Nov 12 20:43:12.345584 systemd-networkd[808]: Enumeration completed Nov 12 20:43:12.345746 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 20:43:12.345829 systemd-networkd[808]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Nov 12 20:43:12.345903 systemd[1]: Reached target network.target - Network. 
Nov 12 20:43:12.349539 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Nov 12 20:43:12.349669 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Nov 12 20:43:12.349790 systemd-networkd[808]: ens192: Link UP Nov 12 20:43:12.349797 systemd-networkd[808]: ens192: Gained carrier Nov 12 20:43:12.369645 ignition[675]: Ignition 2.19.0 Nov 12 20:43:12.369657 ignition[675]: Stage: fetch-offline Nov 12 20:43:12.369694 ignition[675]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:43:12.369700 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 12 20:43:12.369764 ignition[675]: parsed url from cmdline: "" Nov 12 20:43:12.369766 ignition[675]: no config URL provided Nov 12 20:43:12.369769 ignition[675]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 20:43:12.369774 ignition[675]: no config at "/usr/lib/ignition/user.ign" Nov 12 20:43:12.370173 ignition[675]: config successfully fetched Nov 12 20:43:12.370190 ignition[675]: parsing config with SHA512: 86a4cefd826eff6e91e1177f5a5b2fe2185dca41fc815ef5c59dd11affc74e62a92299dcb637c5281b35cb56996743e8296b15bb3e2dffdc058dded915daa7c3 Nov 12 20:43:12.372446 unknown[675]: fetched base config from "system" Nov 12 20:43:12.372454 unknown[675]: fetched user config from "vmware" Nov 12 20:43:12.372684 ignition[675]: fetch-offline: fetch-offline passed Nov 12 20:43:12.372720 ignition[675]: Ignition finished successfully Nov 12 20:43:12.373524 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:43:12.373898 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 12 20:43:12.377314 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 12 20:43:12.385059 ignition[813]: Ignition 2.19.0 Nov 12 20:43:12.385066 ignition[813]: Stage: kargs Nov 12 20:43:12.385158 ignition[813]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:43:12.385164 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 12 20:43:12.385709 ignition[813]: kargs: kargs passed Nov 12 20:43:12.385738 ignition[813]: Ignition finished successfully Nov 12 20:43:12.386951 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 12 20:43:12.390351 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 12 20:43:12.397546 ignition[819]: Ignition 2.19.0 Nov 12 20:43:12.397553 ignition[819]: Stage: disks Nov 12 20:43:12.397658 ignition[819]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:43:12.397664 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 12 20:43:12.398179 ignition[819]: disks: disks passed Nov 12 20:43:12.398214 ignition[819]: Ignition finished successfully Nov 12 20:43:12.399033 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 12 20:43:12.399515 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 12 20:43:12.399795 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 20:43:12.400052 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 20:43:12.400298 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 20:43:12.400505 systemd[1]: Reached target basic.target - Basic System. Nov 12 20:43:12.405338 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 12 20:43:12.514269 systemd-fsck[827]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Nov 12 20:43:12.520132 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 12 20:43:12.523276 systemd[1]: Mounting sysroot.mount - /sysroot... 
Nov 12 20:43:12.664222 kernel: EXT4-fs (sda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none. Nov 12 20:43:12.664447 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 12 20:43:12.664868 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 12 20:43:12.674301 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 20:43:12.676629 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 12 20:43:12.677069 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 12 20:43:12.677116 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 12 20:43:12.677141 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:43:12.681694 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 12 20:43:12.682578 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 12 20:43:12.688171 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (835) Nov 12 20:43:12.688865 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:43:12.688889 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:43:12.689326 kernel: BTRFS info (device sda6): using free space tree Nov 12 20:43:12.695226 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 12 20:43:12.696413 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 12 20:43:12.727031 initrd-setup-root[859]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 20:43:12.730755 initrd-setup-root[866]: cut: /sysroot/etc/group: No such file or directory Nov 12 20:43:12.734484 initrd-setup-root[873]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 20:43:12.738394 initrd-setup-root[880]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 20:43:12.809992 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 20:43:12.814326 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 12 20:43:12.816766 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 12 20:43:12.822257 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:43:12.834695 ignition[947]: INFO : Ignition 2.19.0 Nov 12 20:43:12.835027 ignition[947]: INFO : Stage: mount Nov 12 20:43:12.835027 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:43:12.835027 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 12 20:43:12.835454 ignition[947]: INFO : mount: mount passed Nov 12 20:43:12.835932 ignition[947]: INFO : Ignition finished successfully Nov 12 20:43:12.836199 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 20:43:12.840416 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 20:43:12.841063 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 12 20:43:12.926943 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 12 20:43:12.932332 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 12 20:43:12.967227 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (959) Nov 12 20:43:12.970864 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:43:12.970905 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:43:12.970922 kernel: BTRFS info (device sda6): using free space tree Nov 12 20:43:12.980219 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 12 20:43:12.980900 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 20:43:12.995291 ignition[976]: INFO : Ignition 2.19.0 Nov 12 20:43:12.995291 ignition[976]: INFO : Stage: files Nov 12 20:43:12.995661 ignition[976]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:43:12.995661 ignition[976]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 12 20:43:12.995953 ignition[976]: DEBUG : files: compiled without relabeling support, skipping Nov 12 20:43:12.997909 ignition[976]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 20:43:12.997909 ignition[976]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 20:43:13.003577 ignition[976]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 20:43:13.003749 ignition[976]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 20:43:13.003878 ignition[976]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 20:43:13.003814 unknown[976]: wrote ssh authorized keys file for user: core Nov 12 20:43:13.005551 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:43:13.005804 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 20:43:13.039766 
ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 12 20:43:13.108257 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:43:13.109084 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 12 20:43:13.109084 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 20:43:13.109084 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:43:13.109084 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:43:13.109084 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:43:13.109084 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:43:13.109084 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:43:13.109084 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:43:13.110626 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:43:13.110626 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:43:13.110626 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:43:13.110626 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:43:13.110626 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:43:13.110626 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Nov 12 20:43:13.441420 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 12 20:43:13.668616 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:43:13.668913 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 12 20:43:13.668913 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 12 20:43:13.669639 ignition[976]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 12 20:43:13.669639 ignition[976]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:43:13.669639 ignition[976]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:43:13.669639 ignition[976]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 12 20:43:13.669639 ignition[976]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 12 20:43:13.669639 
ignition[976]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 20:43:13.669639 ignition[976]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 20:43:13.669639 ignition[976]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 12 20:43:13.669639 ignition[976]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 12 20:43:13.728622 ignition[976]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 20:43:13.731517 ignition[976]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 20:43:13.731517 ignition[976]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 12 20:43:13.731517 ignition[976]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 12 20:43:13.731517 ignition[976]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 20:43:13.732175 ignition[976]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:43:13.732175 ignition[976]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:43:13.732175 ignition[976]: INFO : files: files passed Nov 12 20:43:13.732175 ignition[976]: INFO : Ignition finished successfully Nov 12 20:43:13.733094 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 20:43:13.737362 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 20:43:13.738333 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Nov 12 20:43:13.750389 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 20:43:13.750457 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 20:43:13.755032 initrd-setup-root-after-ignition[1006]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:43:13.755032 initrd-setup-root-after-ignition[1006]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:43:13.756001 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:43:13.756950 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:43:13.757349 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 20:43:13.761419 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 20:43:13.775968 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 20:43:13.776038 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 20:43:13.776383 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 20:43:13.776496 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 20:43:13.776696 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 20:43:13.777195 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 20:43:13.788078 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:43:13.792340 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 20:43:13.798679 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:43:13.798908 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Nov 12 20:43:13.799158 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 20:43:13.799354 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 20:43:13.799435 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:43:13.799793 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 20:43:13.799954 systemd[1]: Stopped target basic.target - Basic System. Nov 12 20:43:13.800134 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 20:43:13.800347 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:43:13.800546 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 20:43:13.800959 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 20:43:13.801139 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 20:43:13.801356 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 20:43:13.801561 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 20:43:13.801749 systemd[1]: Stopped target swap.target - Swaps. Nov 12 20:43:13.801924 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 20:43:13.801995 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:43:13.802353 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:43:13.802518 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:43:13.802703 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 20:43:13.802753 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:43:13.802918 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 20:43:13.802979 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Nov 12 20:43:13.803275 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 20:43:13.803340 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:43:13.803550 systemd[1]: Stopped target paths.target - Path Units. Nov 12 20:43:13.803680 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 20:43:13.804277 systemd-networkd[808]: ens192: Gained IPv6LL Nov 12 20:43:13.808262 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:43:13.808647 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 20:43:13.808817 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 20:43:13.808996 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 20:43:13.809059 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:43:13.809294 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 20:43:13.809343 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:43:13.809500 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 20:43:13.809569 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:43:13.809820 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 20:43:13.809879 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 20:43:13.819386 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 20:43:13.822363 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 20:43:13.822673 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 20:43:13.822866 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:43:13.823193 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 20:43:13.823274 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 12 20:43:13.825919 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 20:43:13.826125 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 20:43:13.828065 ignition[1030]: INFO : Ignition 2.19.0 Nov 12 20:43:13.828065 ignition[1030]: INFO : Stage: umount Nov 12 20:43:13.828357 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:43:13.828357 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 12 20:43:13.828826 ignition[1030]: INFO : umount: umount passed Nov 12 20:43:13.828826 ignition[1030]: INFO : Ignition finished successfully Nov 12 20:43:13.833564 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 20:43:13.833867 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 20:43:13.834287 systemd[1]: Stopped target network.target - Network. Nov 12 20:43:13.834383 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 20:43:13.834426 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 20:43:13.834542 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 20:43:13.834566 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 20:43:13.834667 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 20:43:13.834687 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 20:43:13.834783 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 20:43:13.834804 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 20:43:13.834992 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 20:43:13.835133 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 20:43:13.837804 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 20:43:13.837869 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Nov 12 20:43:13.839068 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 20:43:13.839111 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:43:13.841911 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 20:43:13.845248 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 20:43:13.845478 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 20:43:13.845869 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 20:43:13.845889 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:43:13.850320 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 20:43:13.850580 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 20:43:13.850621 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:43:13.850769 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Nov 12 20:43:13.850800 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 12 20:43:13.850931 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 20:43:13.850953 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:43:13.851051 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 20:43:13.851072 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 20:43:13.851255 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:43:13.857478 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 20:43:13.857554 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 20:43:13.861669 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Nov 12 20:43:13.861755 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:43:13.862063 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 20:43:13.862088 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 20:43:13.862302 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 20:43:13.862320 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:43:13.862490 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 20:43:13.862513 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:43:13.862790 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 20:43:13.862811 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 20:43:13.863091 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:43:13.863112 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:43:13.869423 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 20:43:13.869533 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 20:43:13.869565 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:43:13.869692 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 12 20:43:13.869715 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:43:13.869827 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 20:43:13.869848 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:43:13.869958 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:43:13.869979 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 12 20:43:13.872583 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 20:43:13.872649 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 20:43:13.900574 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 20:43:13.900649 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 20:43:13.900914 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 20:43:13.901023 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 20:43:13.901048 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 20:43:13.905367 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 20:43:13.918533 systemd[1]: Switching root. Nov 12 20:43:13.949539 systemd-journald[216]: Journal stopped Nov 12 20:43:15.025804 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). Nov 12 20:43:15.025825 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 20:43:15.025832 kernel: SELinux: policy capability open_perms=1 Nov 12 20:43:15.025838 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 20:43:15.025843 kernel: SELinux: policy capability always_check_network=0 Nov 12 20:43:15.025848 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 20:43:15.025856 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 20:43:15.025861 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 20:43:15.025867 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 20:43:15.025873 kernel: audit: type=1403 audit(1731444194.522:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 20:43:15.025880 systemd[1]: Successfully loaded SELinux policy in 33.127ms. Nov 12 20:43:15.025886 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.913ms. 
Nov 12 20:43:15.025893 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 20:43:15.025900 systemd[1]: Detected virtualization vmware. Nov 12 20:43:15.025907 systemd[1]: Detected architecture x86-64. Nov 12 20:43:15.025913 systemd[1]: Detected first boot. Nov 12 20:43:15.025920 systemd[1]: Initializing machine ID from random generator. Nov 12 20:43:15.025928 zram_generator::config[1072]: No configuration found. Nov 12 20:43:15.025935 systemd[1]: Populated /etc with preset unit settings. Nov 12 20:43:15.025942 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 12 20:43:15.025949 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Nov 12 20:43:15.025955 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 12 20:43:15.025961 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 12 20:43:15.025967 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 12 20:43:15.025975 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 20:43:15.025982 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 20:43:15.025988 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 20:43:15.025994 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 20:43:15.026001 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Nov 12 20:43:15.026007 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 20:43:15.026014 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 20:43:15.026022 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 20:43:15.026028 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:43:15.026035 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:43:15.026041 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 20:43:15.026048 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 20:43:15.026054 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 12 20:43:15.026060 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 20:43:15.026067 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 12 20:43:15.026075 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:43:15.026082 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 12 20:43:15.026089 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 12 20:43:15.026096 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 12 20:43:15.026103 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 20:43:15.026110 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:43:15.026116 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 20:43:15.026123 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:43:15.026131 systemd[1]: Reached target swap.target - Swaps. 
Nov 12 20:43:15.026137 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 20:43:15.026144 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 20:43:15.026151 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:43:15.026158 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 20:43:15.026167 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:43:15.026174 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 20:43:15.026181 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 20:43:15.026187 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 20:43:15.026194 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 20:43:15.026201 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:43:15.026236 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 20:43:15.026249 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 20:43:15.026263 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 20:43:15.026271 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 20:43:15.026277 systemd[1]: Reached target machines.target - Containers. Nov 12 20:43:15.026284 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 20:43:15.026291 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Nov 12 20:43:15.026298 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Nov 12 20:43:15.026305 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 20:43:15.026311 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:43:15.026319 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:43:15.026326 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:43:15.026333 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 20:43:15.026340 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:43:15.026348 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 20:43:15.026355 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 12 20:43:15.026361 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 12 20:43:15.026368 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 12 20:43:15.026375 systemd[1]: Stopped systemd-fsck-usr.service. Nov 12 20:43:15.026383 kernel: fuse: init (API version 7.39) Nov 12 20:43:15.026389 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 20:43:15.026399 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 20:43:15.026406 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 20:43:15.026413 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 20:43:15.026420 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 20:43:15.026426 systemd[1]: verity-setup.service: Deactivated successfully. Nov 12 20:43:15.026433 systemd[1]: Stopped verity-setup.service. 
Nov 12 20:43:15.026442 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:43:15.026449 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 20:43:15.026455 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 20:43:15.026462 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 20:43:15.026469 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 20:43:15.026476 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 20:43:15.026483 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 20:43:15.026490 kernel: loop: module loaded Nov 12 20:43:15.026507 systemd-journald[1162]: Collecting audit messages is disabled. Nov 12 20:43:15.026525 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 20:43:15.026533 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:43:15.026540 systemd-journald[1162]: Journal started Nov 12 20:43:15.026555 systemd-journald[1162]: Runtime Journal (/run/log/journal/ed7077c331c24148b5a4338301a6fa14) is 4.8M, max 38.6M, 33.8M free. Nov 12 20:43:14.853944 systemd[1]: Queued start job for default target multi-user.target. Nov 12 20:43:14.870156 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 12 20:43:14.870360 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 12 20:43:15.027015 jq[1139]: true Nov 12 20:43:15.028643 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 20:43:15.028660 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 20:43:15.041271 kernel: ACPI: bus type drm_connector registered Nov 12 20:43:15.042224 systemd[1]: Started systemd-journald.service - Journal Service. 
Nov 12 20:43:15.043780 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:43:15.043872 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:43:15.044108 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:43:15.044180 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:43:15.044416 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:43:15.044485 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:43:15.044714 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 20:43:15.044782 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 20:43:15.045002 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:43:15.045068 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:43:15.045321 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 20:43:15.045547 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 20:43:15.045778 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 20:43:15.047884 jq[1181]: true Nov 12 20:43:15.058829 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 20:43:15.066232 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 20:43:15.068295 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 20:43:15.068441 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 20:43:15.068464 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 20:43:15.075943 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Nov 12 20:43:15.094346 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 20:43:15.100874 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 20:43:15.101095 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:43:15.124568 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 20:43:15.129858 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 20:43:15.130107 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:43:15.133307 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 20:43:15.133479 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:43:15.140698 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:43:15.142335 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 20:43:15.151374 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 20:43:15.152669 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 20:43:15.152874 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 20:43:15.153308 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 20:43:15.166614 systemd-journald[1162]: Time spent on flushing to /var/log/journal/ed7077c331c24148b5a4338301a6fa14 is 83.180ms for 1838 entries. Nov 12 20:43:15.166614 systemd-journald[1162]: System Journal (/var/log/journal/ed7077c331c24148b5a4338301a6fa14) is 8.0M, max 584.8M, 576.8M free. 
Nov 12 20:43:15.269026 systemd-journald[1162]: Received client request to flush runtime journal. Nov 12 20:43:15.269063 kernel: loop0: detected capacity change from 0 to 140768 Nov 12 20:43:15.269075 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 20:43:15.269084 kernel: loop1: detected capacity change from 0 to 211296 Nov 12 20:43:15.170516 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 20:43:15.175293 ignition[1185]: Ignition 2.19.0 Nov 12 20:43:15.170819 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 20:43:15.175490 ignition[1185]: deleting config from guestinfo properties Nov 12 20:43:15.181427 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 20:43:15.190224 ignition[1185]: Successfully deleted config Nov 12 20:43:15.190861 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Nov 12 20:43:15.217954 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:43:15.236855 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 20:43:15.238303 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 12 20:43:15.240505 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Nov 12 20:43:15.240514 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Nov 12 20:43:15.246476 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:43:15.253596 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 20:43:15.255483 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:43:15.264981 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 20:43:15.275551 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Nov 12 20:43:15.282855 udevadm[1231]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 12 20:43:15.299886 kernel: loop2: detected capacity change from 0 to 142488 Nov 12 20:43:15.302214 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 20:43:15.313334 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:43:15.337744 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Nov 12 20:43:15.337757 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Nov 12 20:43:15.343690 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:43:15.367226 kernel: loop3: detected capacity change from 0 to 2976 Nov 12 20:43:15.406259 kernel: loop4: detected capacity change from 0 to 140768 Nov 12 20:43:15.424226 kernel: loop5: detected capacity change from 0 to 211296 Nov 12 20:43:15.488214 kernel: loop6: detected capacity change from 0 to 142488 Nov 12 20:43:15.518218 kernel: loop7: detected capacity change from 0 to 2976 Nov 12 20:43:15.536902 (sd-merge)[1245]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Nov 12 20:43:15.537167 (sd-merge)[1245]: Merged extensions into '/usr'. Nov 12 20:43:15.544005 systemd[1]: Reloading requested from client PID 1213 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 20:43:15.544015 systemd[1]: Reloading... Nov 12 20:43:15.587222 zram_generator::config[1271]: No configuration found. Nov 12 20:43:15.657065 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." 
| grep -Po "inet \K[\d.]+") Nov 12 20:43:15.672634 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:43:15.700233 systemd[1]: Reloading finished in 155 ms. Nov 12 20:43:15.727224 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 20:43:15.733407 systemd[1]: Starting ensure-sysext.service... Nov 12 20:43:15.734315 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 20:43:15.744832 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 20:43:15.745228 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 20:43:15.745722 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 20:43:15.745878 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. Nov 12 20:43:15.745911 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. Nov 12 20:43:15.827091 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:43:15.828224 systemd-tmpfiles[1327]: Skipping /boot Nov 12 20:43:15.829295 systemd[1]: Reloading requested from client PID 1326 ('systemctl') (unit ensure-sysext.service)... Nov 12 20:43:15.829305 systemd[1]: Reloading... Nov 12 20:43:15.842391 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:43:15.842473 systemd-tmpfiles[1327]: Skipping /boot Nov 12 20:43:15.880219 zram_generator::config[1351]: No configuration found. Nov 12 20:43:15.957930 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." 
| grep -Po "inet \K[\d.]+") Nov 12 20:43:15.973856 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:43:16.003012 systemd[1]: Reloading finished in 173 ms. Nov 12 20:43:16.018451 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:43:16.021987 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:43:16.036340 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 20:43:16.039313 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 20:43:16.049304 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 20:43:16.051807 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 20:43:16.065415 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 20:43:16.067720 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:43:16.069849 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:43:16.071498 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:43:16.072389 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:43:16.072554 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:43:16.072625 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:43:16.074696 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 12 20:43:16.074788 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:43:16.074846 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:43:16.079410 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:43:16.086442 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:43:16.086639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:43:16.086772 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:43:16.089248 systemd[1]: Finished ensure-sysext.service. Nov 12 20:43:16.089655 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 20:43:16.100435 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 12 20:43:16.112720 ldconfig[1205]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 20:43:16.113471 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:43:16.113570 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:43:16.120772 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 20:43:16.123412 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 20:43:16.125958 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:43:16.126062 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Nov 12 20:43:16.133356 augenrules[1444]: No rules Nov 12 20:43:16.134487 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 20:43:16.134878 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 20:43:16.135123 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:43:16.135420 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:43:16.135499 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:43:16.136881 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:43:16.143593 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:43:16.145493 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 12 20:43:16.145810 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:43:16.146254 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:43:16.147738 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:43:16.168262 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 20:43:16.168604 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 20:43:16.169058 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 20:43:16.178887 systemd-udevd[1456]: Using default interface naming scheme 'v255'. Nov 12 20:43:16.188139 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 12 20:43:16.188363 systemd[1]: Reached target time-set.target - System Time Set. 
Nov 12 20:43:16.204012 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:43:16.210156 systemd-resolved[1417]: Positive Trust Anchors: Nov 12 20:43:16.210633 systemd-resolved[1417]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 20:43:16.210656 systemd-resolved[1417]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 20:43:16.212330 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 20:43:16.214721 systemd-resolved[1417]: Defaulting to hostname 'linux'. Nov 12 20:43:16.217433 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 20:43:16.220566 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:43:16.243447 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 12 20:43:16.251224 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1467) Nov 12 20:43:16.254224 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1467) Nov 12 20:43:16.257349 systemd-networkd[1465]: lo: Link UP Nov 12 20:43:16.257353 systemd-networkd[1465]: lo: Gained carrier Nov 12 20:43:16.257713 systemd-networkd[1465]: Enumeration completed Nov 12 20:43:16.257766 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Nov 12 20:43:16.257919 systemd[1]: Reached target network.target - Network. Nov 12 20:43:16.264387 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 12 20:43:16.276110 systemd-networkd[1465]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Nov 12 20:43:16.278224 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Nov 12 20:43:16.278352 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Nov 12 20:43:16.279383 systemd-networkd[1465]: ens192: Link UP Nov 12 20:43:16.279699 systemd-networkd[1465]: ens192: Gained carrier Nov 12 20:43:16.284294 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection. Nov 12 20:43:16.297242 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 12 20:43:16.306226 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1473) Nov 12 20:43:16.306268 kernel: ACPI: button: Power Button [PWRF] Nov 12 20:43:16.319626 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Nov 12 20:43:16.323322 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 20:43:16.332495 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 20:43:16.384224 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! 
Nov 12 20:43:16.393312 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Nov 12 20:43:16.393999 kernel: Guest personality initialized and is active Nov 12 20:43:16.395267 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 12 20:43:16.395294 kernel: Initialized host personality Nov 12 20:43:16.399215 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Nov 12 20:43:16.404911 (udev-worker)[1476]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Nov 12 20:43:16.415621 kernel: mousedev: PS/2 mouse device common for all mice Nov 12 20:43:16.415503 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:43:16.436447 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 20:43:16.442352 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 20:43:16.481215 lvm[1505]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:43:16.510177 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 20:43:16.510441 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:43:16.514300 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 20:43:16.516826 lvm[1507]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:43:16.567482 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 12 20:43:16.935456 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:43:16.935716 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 20:43:16.935892 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Nov 12 20:43:16.936033 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 20:43:16.936258 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 20:43:16.936426 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 20:43:16.936550 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 20:43:16.936696 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 20:43:16.936715 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:43:16.936811 systemd[1]: Reached target timers.target - Timer Units. Nov 12 20:43:16.937347 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 20:43:16.938470 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 20:43:16.942483 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 20:43:16.943086 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 20:43:16.943256 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 20:43:16.943343 systemd[1]: Reached target basic.target - Basic System. Nov 12 20:43:16.943459 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:43:16.943478 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:43:16.945321 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 20:43:16.948057 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 20:43:16.950300 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 20:43:16.952316 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Nov 12 20:43:16.952448 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 20:43:16.954660 jq[1516]: false Nov 12 20:43:16.955095 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 20:43:16.959020 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 20:43:16.960963 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 20:43:16.962526 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 20:43:16.974113 extend-filesystems[1517]: Found loop4 Nov 12 20:43:16.974436 extend-filesystems[1517]: Found loop5 Nov 12 20:43:16.974436 extend-filesystems[1517]: Found loop6 Nov 12 20:43:16.974436 extend-filesystems[1517]: Found loop7 Nov 12 20:43:16.974436 extend-filesystems[1517]: Found sda Nov 12 20:43:16.974436 extend-filesystems[1517]: Found sda1 Nov 12 20:43:16.974436 extend-filesystems[1517]: Found sda2 Nov 12 20:43:16.974436 extend-filesystems[1517]: Found sda3 Nov 12 20:43:16.974436 extend-filesystems[1517]: Found usr Nov 12 20:43:16.974436 extend-filesystems[1517]: Found sda4 Nov 12 20:43:16.974436 extend-filesystems[1517]: Found sda6 Nov 12 20:43:16.974436 extend-filesystems[1517]: Found sda7 Nov 12 20:43:16.974436 extend-filesystems[1517]: Found sda9 Nov 12 20:43:16.974436 extend-filesystems[1517]: Checking size of /dev/sda9 Nov 12 20:43:16.975316 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 20:43:16.975639 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 20:43:16.976076 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Nov 12 20:43:16.978346 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 20:43:16.981270 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 20:43:16.985293 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Nov 12 20:43:16.987091 jq[1532]: true Nov 12 20:43:16.987257 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 20:43:16.987376 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 20:43:16.990418 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 20:43:16.990523 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 12 20:43:16.990816 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 20:43:16.990910 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 20:43:17.005611 jq[1537]: true Nov 12 20:43:17.012911 (ntainerd)[1545]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 20:43:17.014911 extend-filesystems[1517]: Old size kept for /dev/sda9 Nov 12 20:43:17.014911 extend-filesystems[1517]: Found sr0 Nov 12 20:43:17.014843 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 20:43:17.014979 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 20:43:17.026483 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Nov 12 20:43:17.034132 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Nov 12 20:43:17.042114 tar[1536]: linux-amd64/helm Nov 12 20:43:17.044368 dbus-daemon[1515]: [system] SELinux support is enabled Nov 12 20:43:17.045153 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Nov 12 20:43:17.053610 update_engine[1528]: I20241112 20:43:17.052354 1528 main.cc:92] Flatcar Update Engine starting Nov 12 20:43:17.054387 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Nov 12 20:43:17.054793 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 20:43:17.054812 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 20:43:17.054949 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 20:43:17.054959 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 20:43:17.063888 systemd[1]: Started update-engine.service - Update Engine. Nov 12 20:43:17.064292 update_engine[1528]: I20241112 20:43:17.064018 1528 update_check_scheduler.cc:74] Next update check in 11m35s Nov 12 20:43:17.066904 unknown[1557]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Nov 12 20:43:17.069368 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 20:43:17.070706 unknown[1557]: Core dump limit set to -1 Nov 12 20:43:17.084502 kernel: NET: Registered PF_VSOCK protocol family Nov 12 20:43:17.097065 systemd-logind[1522]: Watching system buttons on /dev/input/event1 (Power Button) Nov 12 20:43:17.097079 systemd-logind[1522]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 12 20:43:17.098516 systemd-logind[1522]: New seat seat0. Nov 12 20:43:17.099249 bash[1577]: Updated "/home/core/.ssh/authorized_keys" Nov 12 20:43:17.100020 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Nov 12 20:43:17.100962 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 12 20:43:17.103828 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 20:43:17.118874 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1480) Nov 12 20:43:17.210277 locksmithd[1572]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 20:43:17.373037 containerd[1545]: time="2024-11-12T20:43:17.372991923Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 12 20:43:17.406100 containerd[1545]: time="2024-11-12T20:43:17.406070124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:43:17.407578 containerd[1545]: time="2024-11-12T20:43:17.407346230Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:43:17.407578 containerd[1545]: time="2024-11-12T20:43:17.407371753Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 20:43:17.407578 containerd[1545]: time="2024-11-12T20:43:17.407383469Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 20:43:17.407578 containerd[1545]: time="2024-11-12T20:43:17.407482931Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 20:43:17.407578 containerd[1545]: time="2024-11-12T20:43:17.407493874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Nov 12 20:43:17.407578 containerd[1545]: time="2024-11-12T20:43:17.407532574Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:43:17.407578 containerd[1545]: time="2024-11-12T20:43:17.407542200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:43:17.407796 containerd[1545]: time="2024-11-12T20:43:17.407785236Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:43:17.407835 containerd[1545]: time="2024-11-12T20:43:17.407827520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 20:43:17.407867 containerd[1545]: time="2024-11-12T20:43:17.407859555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:43:17.407899 containerd[1545]: time="2024-11-12T20:43:17.407892280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 20:43:17.407973 containerd[1545]: time="2024-11-12T20:43:17.407964588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:43:17.408382 containerd[1545]: time="2024-11-12T20:43:17.408139766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:43:17.408382 containerd[1545]: time="2024-11-12T20:43:17.408213985Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:43:17.408382 containerd[1545]: time="2024-11-12T20:43:17.408223935Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 20:43:17.408382 containerd[1545]: time="2024-11-12T20:43:17.408274442Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 20:43:17.408382 containerd[1545]: time="2024-11-12T20:43:17.408303323Z" level=info msg="metadata content store policy set" policy=shared Nov 12 20:43:17.412401 containerd[1545]: time="2024-11-12T20:43:17.412082241Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 20:43:17.412401 containerd[1545]: time="2024-11-12T20:43:17.412119341Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 20:43:17.412401 containerd[1545]: time="2024-11-12T20:43:17.412130138Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 20:43:17.412401 containerd[1545]: time="2024-11-12T20:43:17.412139328Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 20:43:17.412401 containerd[1545]: time="2024-11-12T20:43:17.412148397Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 20:43:17.412401 containerd[1545]: time="2024-11-12T20:43:17.412261379Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 20:43:17.412531 containerd[1545]: time="2024-11-12T20:43:17.412425182Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Nov 12 20:43:17.412531 containerd[1545]: time="2024-11-12T20:43:17.412512722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 20:43:17.412531 containerd[1545]: time="2024-11-12T20:43:17.412523510Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 20:43:17.412569 containerd[1545]: time="2024-11-12T20:43:17.412531929Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 20:43:17.412569 containerd[1545]: time="2024-11-12T20:43:17.412539767Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 20:43:17.412569 containerd[1545]: time="2024-11-12T20:43:17.412547050Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 20:43:17.412569 containerd[1545]: time="2024-11-12T20:43:17.412555432Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 20:43:17.412569 containerd[1545]: time="2024-11-12T20:43:17.412563297Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 20:43:17.412632 containerd[1545]: time="2024-11-12T20:43:17.412571813Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 20:43:17.412632 containerd[1545]: time="2024-11-12T20:43:17.412579369Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 20:43:17.412632 containerd[1545]: time="2024-11-12T20:43:17.412586100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Nov 12 20:43:17.412632 containerd[1545]: time="2024-11-12T20:43:17.412592553Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 20:43:17.412632 containerd[1545]: time="2024-11-12T20:43:17.412604110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 20:43:17.412632 containerd[1545]: time="2024-11-12T20:43:17.412612351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 20:43:17.412632 containerd[1545]: time="2024-11-12T20:43:17.412618935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 20:43:17.412632 containerd[1545]: time="2024-11-12T20:43:17.412626088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 20:43:17.412733 containerd[1545]: time="2024-11-12T20:43:17.412632864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 20:43:17.412733 containerd[1545]: time="2024-11-12T20:43:17.412640360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 20:43:17.412733 containerd[1545]: time="2024-11-12T20:43:17.412646448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 20:43:17.412733 containerd[1545]: time="2024-11-12T20:43:17.412653749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 20:43:17.412733 containerd[1545]: time="2024-11-12T20:43:17.412660716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 20:43:17.412733 containerd[1545]: time="2024-11-12T20:43:17.412669003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Nov 12 20:43:17.412733 containerd[1545]: time="2024-11-12T20:43:17.412676036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 20:43:17.412733 containerd[1545]: time="2024-11-12T20:43:17.412682337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 20:43:17.412733 containerd[1545]: time="2024-11-12T20:43:17.412693108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 20:43:17.412733 containerd[1545]: time="2024-11-12T20:43:17.412702324Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 20:43:17.412733 containerd[1545]: time="2024-11-12T20:43:17.412714855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 20:43:17.412733 containerd[1545]: time="2024-11-12T20:43:17.412726559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 20:43:17.412885 containerd[1545]: time="2024-11-12T20:43:17.412736685Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 20:43:17.412885 containerd[1545]: time="2024-11-12T20:43:17.412772193Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 20:43:17.412885 containerd[1545]: time="2024-11-12T20:43:17.412786314Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 20:43:17.412885 containerd[1545]: time="2024-11-12T20:43:17.412793740Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Nov 12 20:43:17.412885 containerd[1545]: time="2024-11-12T20:43:17.412800765Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 20:43:17.412885 containerd[1545]: time="2024-11-12T20:43:17.412806379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 20:43:17.412885 containerd[1545]: time="2024-11-12T20:43:17.412813813Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 20:43:17.412885 containerd[1545]: time="2024-11-12T20:43:17.412819598Z" level=info msg="NRI interface is disabled by configuration." Nov 12 20:43:17.412885 containerd[1545]: time="2024-11-12T20:43:17.412825104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 12 20:43:17.413070 containerd[1545]: time="2024-11-12T20:43:17.413014611Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 20:43:17.413070 containerd[1545]: time="2024-11-12T20:43:17.413052364Z" level=info msg="Connect containerd service" Nov 12 20:43:17.413070 containerd[1545]: time="2024-11-12T20:43:17.413071655Z" level=info msg="using legacy CRI server" Nov 12 20:43:17.413184 containerd[1545]: time="2024-11-12T20:43:17.413076513Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 20:43:17.413184 containerd[1545]: 
time="2024-11-12T20:43:17.413143844Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 20:43:17.416410 containerd[1545]: time="2024-11-12T20:43:17.416376772Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 20:43:17.416799 containerd[1545]: time="2024-11-12T20:43:17.416566293Z" level=info msg="Start subscribing containerd event" Nov 12 20:43:17.416799 containerd[1545]: time="2024-11-12T20:43:17.416599317Z" level=info msg="Start recovering state" Nov 12 20:43:17.416799 containerd[1545]: time="2024-11-12T20:43:17.416611813Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 20:43:17.416799 containerd[1545]: time="2024-11-12T20:43:17.416637990Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 20:43:17.416799 containerd[1545]: time="2024-11-12T20:43:17.416646370Z" level=info msg="Start event monitor" Nov 12 20:43:17.416799 containerd[1545]: time="2024-11-12T20:43:17.416664772Z" level=info msg="Start snapshots syncer" Nov 12 20:43:17.416799 containerd[1545]: time="2024-11-12T20:43:17.416671747Z" level=info msg="Start cni network conf syncer for default" Nov 12 20:43:17.416799 containerd[1545]: time="2024-11-12T20:43:17.416676967Z" level=info msg="Start streaming server" Nov 12 20:43:17.416799 containerd[1545]: time="2024-11-12T20:43:17.416718338Z" level=info msg="containerd successfully booted in 0.044638s" Nov 12 20:43:17.416786 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 20:43:17.552164 tar[1536]: linux-amd64/LICENSE Nov 12 20:43:17.552164 tar[1536]: linux-amd64/README.md Nov 12 20:43:17.559545 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 12 20:43:17.574916 sshd_keygen[1553]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 20:43:17.588414 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 20:43:17.595429 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 20:43:17.598920 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 20:43:17.599048 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 20:43:17.600755 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 20:43:17.608614 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 20:43:17.610388 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 20:43:17.612422 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 12 20:43:17.613508 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 20:43:18.284377 systemd-networkd[1465]: ens192: Gained IPv6LL Nov 12 20:43:18.284673 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection. Nov 12 20:43:18.286021 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 20:43:18.286716 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 20:43:18.290380 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Nov 12 20:43:18.298562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:43:18.301401 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 20:43:18.333690 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 20:43:18.335075 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 12 20:43:18.335183 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Nov 12 20:43:18.335736 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Nov 12 20:43:19.060927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:43:19.062022 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 20:43:19.063034 systemd[1]: Startup finished in 970ms (kernel) + 4.894s (initrd) + 4.572s (userspace) = 10.437s. Nov 12 20:43:19.068879 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:43:19.094822 login[1660]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 12 20:43:19.094974 login[1661]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 12 20:43:19.102114 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 20:43:19.108366 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 20:43:19.110740 systemd-logind[1522]: New session 1 of user core. Nov 12 20:43:19.114296 systemd-logind[1522]: New session 2 of user core. Nov 12 20:43:19.118255 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 20:43:19.124659 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 20:43:19.126511 (systemd)[1699]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 20:43:19.193533 systemd[1699]: Queued start job for default target default.target. Nov 12 20:43:19.198467 systemd[1699]: Created slice app.slice - User Application Slice. Nov 12 20:43:19.198549 systemd[1699]: Reached target paths.target - Paths. Nov 12 20:43:19.198643 systemd[1699]: Reached target timers.target - Timers. Nov 12 20:43:19.200123 systemd[1699]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 20:43:19.206981 systemd[1699]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 20:43:19.207020 systemd[1699]: Reached target sockets.target - Sockets. 
Nov 12 20:43:19.207029 systemd[1699]: Reached target basic.target - Basic System. Nov 12 20:43:19.207052 systemd[1699]: Reached target default.target - Main User Target. Nov 12 20:43:19.207071 systemd[1699]: Startup finished in 76ms. Nov 12 20:43:19.207124 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 20:43:19.208641 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 20:43:19.209322 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 20:43:19.890324 kubelet[1691]: E1112 20:43:19.890268 1691 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:43:19.891936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:43:19.892024 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:43:30.142299 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 20:43:30.150313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:43:30.495290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:43:30.499403 (kubelet)[1742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:43:30.610150 kubelet[1742]: E1112 20:43:30.610107 1742 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:43:30.612819 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:43:30.612909 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:43:40.757560 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 20:43:40.767468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:43:41.051628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:43:41.054474 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:43:41.111141 kubelet[1757]: E1112 20:43:41.111032 1757 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:43:41.112952 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:43:41.113091 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:44:56.329616 systemd-resolved[1417]: Clock change detected. Flushing caches. Nov 12 20:44:56.329749 systemd-timesyncd[1436]: Contacted time server 74.208.117.38:123 (2.flatcar.pool.ntp.org). 
Nov 12 20:44:56.329781 systemd-timesyncd[1436]: Initial clock synchronization to Tue 2024-11-12 20:44:56.329574 UTC. Nov 12 20:44:58.892853 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 12 20:44:58.903670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:44:59.153845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:44:59.157356 (kubelet)[1772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:44:59.195281 kubelet[1772]: E1112 20:44:59.195246 1772 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:44:59.196808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:44:59.196891 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:45:04.791995 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 20:45:04.799763 systemd[1]: Started sshd@0-139.178.70.104:22-139.178.68.195:53508.service - OpenSSH per-connection server daemon (139.178.68.195:53508). Nov 12 20:45:04.846783 sshd[1783]: Accepted publickey for core from 139.178.68.195 port 53508 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:45:04.847691 sshd[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:04.850520 systemd-logind[1522]: New session 3 of user core. Nov 12 20:45:04.859674 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 20:45:04.920760 systemd[1]: Started sshd@1-139.178.70.104:22-139.178.68.195:53512.service - OpenSSH per-connection server daemon (139.178.68.195:53512). 
Nov 12 20:45:04.949217 sshd[1788]: Accepted publickey for core from 139.178.68.195 port 53512 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:45:04.949898 sshd[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:04.952292 systemd-logind[1522]: New session 4 of user core. Nov 12 20:45:04.958650 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 20:45:05.008713 sshd[1788]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:05.013978 systemd[1]: sshd@1-139.178.70.104:22-139.178.68.195:53512.service: Deactivated successfully. Nov 12 20:45:05.014885 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 20:45:05.016055 systemd-logind[1522]: Session 4 logged out. Waiting for processes to exit. Nov 12 20:45:05.026163 systemd[1]: Started sshd@2-139.178.70.104:22-139.178.68.195:53518.service - OpenSSH per-connection server daemon (139.178.68.195:53518). Nov 12 20:45:05.027073 systemd-logind[1522]: Removed session 4. Nov 12 20:45:05.051946 sshd[1795]: Accepted publickey for core from 139.178.68.195 port 53518 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:45:05.053026 sshd[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:05.055968 systemd-logind[1522]: New session 5 of user core. Nov 12 20:45:05.072663 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 20:45:05.118623 sshd[1795]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:05.126390 systemd[1]: sshd@2-139.178.70.104:22-139.178.68.195:53518.service: Deactivated successfully. Nov 12 20:45:05.127221 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 20:45:05.127956 systemd-logind[1522]: Session 5 logged out. Waiting for processes to exit. Nov 12 20:45:05.131587 systemd[1]: Started sshd@3-139.178.70.104:22-139.178.68.195:53534.service - OpenSSH per-connection server daemon (139.178.68.195:53534). 
Nov 12 20:45:05.132597 systemd-logind[1522]: Removed session 5. Nov 12 20:45:05.157821 sshd[1802]: Accepted publickey for core from 139.178.68.195 port 53534 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:45:05.158794 sshd[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:05.161772 systemd-logind[1522]: New session 6 of user core. Nov 12 20:45:05.173676 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 20:45:05.222785 sshd[1802]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:05.235269 systemd[1]: sshd@3-139.178.70.104:22-139.178.68.195:53534.service: Deactivated successfully. Nov 12 20:45:05.236249 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 20:45:05.237255 systemd-logind[1522]: Session 6 logged out. Waiting for processes to exit. Nov 12 20:45:05.238159 systemd[1]: Started sshd@4-139.178.70.104:22-139.178.68.195:53546.service - OpenSSH per-connection server daemon (139.178.68.195:53546). Nov 12 20:45:05.239838 systemd-logind[1522]: Removed session 6. Nov 12 20:45:05.270514 sshd[1809]: Accepted publickey for core from 139.178.68.195 port 53546 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:45:05.271378 sshd[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:05.275688 systemd-logind[1522]: New session 7 of user core. Nov 12 20:45:05.281724 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 12 20:45:05.400897 sudo[1812]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 20:45:05.401139 sudo[1812]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:45:05.417445 sudo[1812]: pam_unix(sudo:session): session closed for user root Nov 12 20:45:05.418722 sshd[1809]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:05.424277 systemd[1]: sshd@4-139.178.70.104:22-139.178.68.195:53546.service: Deactivated successfully. Nov 12 20:45:05.425433 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 20:45:05.426268 systemd-logind[1522]: Session 7 logged out. Waiting for processes to exit. Nov 12 20:45:05.432094 systemd[1]: Started sshd@5-139.178.70.104:22-139.178.68.195:53552.service - OpenSSH per-connection server daemon (139.178.68.195:53552). Nov 12 20:45:05.433115 systemd-logind[1522]: Removed session 7. Nov 12 20:45:05.461709 sshd[1817]: Accepted publickey for core from 139.178.68.195 port 53552 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:45:05.462834 sshd[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:05.466599 systemd-logind[1522]: New session 8 of user core. Nov 12 20:45:05.480769 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 12 20:45:05.529788 sudo[1821]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 20:45:05.529997 sudo[1821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:45:05.532325 sudo[1821]: pam_unix(sudo:session): session closed for user root Nov 12 20:45:05.535920 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 20:45:05.536112 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:45:05.546803 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 20:45:05.547689 auditctl[1824]: No rules Nov 12 20:45:05.548001 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 20:45:05.548125 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 20:45:05.550855 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:45:05.567134 augenrules[1842]: No rules Nov 12 20:45:05.567775 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:45:05.568487 sudo[1820]: pam_unix(sudo:session): session closed for user root Nov 12 20:45:05.569636 sshd[1817]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:05.575330 systemd[1]: sshd@5-139.178.70.104:22-139.178.68.195:53552.service: Deactivated successfully. Nov 12 20:45:05.576238 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 20:45:05.577343 systemd-logind[1522]: Session 8 logged out. Waiting for processes to exit. Nov 12 20:45:05.580986 systemd[1]: Started sshd@6-139.178.70.104:22-139.178.68.195:37004.service - OpenSSH per-connection server daemon (139.178.68.195:37004). Nov 12 20:45:05.581719 systemd-logind[1522]: Removed session 8. 
Nov 12 20:45:05.606355 sshd[1850]: Accepted publickey for core from 139.178.68.195 port 37004 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:45:05.607063 sshd[1850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:05.609653 systemd-logind[1522]: New session 9 of user core. Nov 12 20:45:05.615790 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 20:45:05.664642 sudo[1853]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 20:45:05.664891 sudo[1853]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:45:06.015742 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 20:45:06.015839 (dockerd)[1869]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 20:45:06.417750 dockerd[1869]: time="2024-11-12T20:45:06.417694600Z" level=info msg="Starting up" Nov 12 20:45:06.501753 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1596942184-merged.mount: Deactivated successfully. Nov 12 20:45:06.518724 dockerd[1869]: time="2024-11-12T20:45:06.518360375Z" level=info msg="Loading containers: start." Nov 12 20:45:06.612637 kernel: Initializing XFRM netlink socket Nov 12 20:45:06.681966 systemd-networkd[1465]: docker0: Link UP Nov 12 20:45:06.692506 dockerd[1869]: time="2024-11-12T20:45:06.692472724Z" level=info msg="Loading containers: done." Nov 12 20:45:06.703532 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck621682831-merged.mount: Deactivated successfully. 
Nov 12 20:45:06.705149 dockerd[1869]: time="2024-11-12T20:45:06.705092702Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 20:45:06.705254 dockerd[1869]: time="2024-11-12T20:45:06.705211255Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 20:45:06.705440 dockerd[1869]: time="2024-11-12T20:45:06.705312302Z" level=info msg="Daemon has completed initialization" Nov 12 20:45:06.724246 dockerd[1869]: time="2024-11-12T20:45:06.724125108Z" level=info msg="API listen on /run/docker.sock" Nov 12 20:45:06.724498 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 20:45:07.709567 containerd[1545]: time="2024-11-12T20:45:07.709437404Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 20:45:08.290098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2694302798.mount: Deactivated successfully. Nov 12 20:45:09.392985 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 12 20:45:09.398725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:45:09.458649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:45:09.462154 (kubelet)[2076]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:45:09.522940 kubelet[2076]: E1112 20:45:09.522914 2076 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:45:09.524596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:45:09.524679 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:45:09.882682 containerd[1545]: time="2024-11-12T20:45:09.882605842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:09.895696 containerd[1545]: time="2024-11-12T20:45:09.895645546Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=35140799" Nov 12 20:45:09.910771 containerd[1545]: time="2024-11-12T20:45:09.910732297Z" level=info msg="ImageCreate event name:\"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:09.925608 containerd[1545]: time="2024-11-12T20:45:09.925567730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:09.926250 containerd[1545]: time="2024-11-12T20:45:09.926155716Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", 
repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"35137599\" in 2.216694871s" Nov 12 20:45:09.926250 containerd[1545]: time="2024-11-12T20:45:09.926175770Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\"" Nov 12 20:45:09.939495 containerd[1545]: time="2024-11-12T20:45:09.939312744Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 20:45:10.265262 update_engine[1528]: I20241112 20:45:10.265204 1528 update_attempter.cc:509] Updating boot flags... Nov 12 20:45:10.295579 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2097) Nov 12 20:45:10.427601 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2099) Nov 12 20:45:11.828317 containerd[1545]: time="2024-11-12T20:45:11.828268115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:11.844084 containerd[1545]: time="2024-11-12T20:45:11.844036853Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=32218299" Nov 12 20:45:11.856331 containerd[1545]: time="2024-11-12T20:45:11.856285109Z" level=info msg="ImageCreate event name:\"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:11.867819 containerd[1545]: time="2024-11-12T20:45:11.867759508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:11.868830 containerd[1545]: 
time="2024-11-12T20:45:11.868706519Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"33663665\" in 1.929362659s" Nov 12 20:45:11.868830 containerd[1545]: time="2024-11-12T20:45:11.868731830Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\"" Nov 12 20:45:11.886019 containerd[1545]: time="2024-11-12T20:45:11.885999166Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 20:45:13.229596 containerd[1545]: time="2024-11-12T20:45:13.229544113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:13.232680 containerd[1545]: time="2024-11-12T20:45:13.232644289Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=17332660" Nov 12 20:45:13.237272 containerd[1545]: time="2024-11-12T20:45:13.237240855Z" level=info msg="ImageCreate event name:\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:13.241754 containerd[1545]: time="2024-11-12T20:45:13.241716733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:13.242747 containerd[1545]: time="2024-11-12T20:45:13.242400618Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id 
\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"18778044\" in 1.356273812s" Nov 12 20:45:13.242747 containerd[1545]: time="2024-11-12T20:45:13.242424096Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\"" Nov 12 20:45:13.256675 containerd[1545]: time="2024-11-12T20:45:13.256652393Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 20:45:14.352097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1732977909.mount: Deactivated successfully. Nov 12 20:45:15.000210 containerd[1545]: time="2024-11-12T20:45:15.000155749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:15.000845 containerd[1545]: time="2024-11-12T20:45:15.000796221Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=28616816" Nov 12 20:45:15.002467 containerd[1545]: time="2024-11-12T20:45:15.002434893Z" level=info msg="ImageCreate event name:\"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:15.003299 containerd[1545]: time="2024-11-12T20:45:15.003133940Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"28615835\" in 1.746331419s" Nov 12 20:45:15.003299 containerd[1545]: time="2024-11-12T20:45:15.003178571Z" 
level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\"" Nov 12 20:45:15.003607 containerd[1545]: time="2024-11-12T20:45:15.003546790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:15.020734 containerd[1545]: time="2024-11-12T20:45:15.020231996Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 20:45:15.596330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3460303803.mount: Deactivated successfully. Nov 12 20:45:16.952571 containerd[1545]: time="2024-11-12T20:45:16.952500057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:16.957571 containerd[1545]: time="2024-11-12T20:45:16.957536426Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Nov 12 20:45:16.959772 containerd[1545]: time="2024-11-12T20:45:16.959754059Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:16.963077 containerd[1545]: time="2024-11-12T20:45:16.963051114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:16.963635 containerd[1545]: time="2024-11-12T20:45:16.963481878Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.943223469s" Nov 12 20:45:16.963635 containerd[1545]: time="2024-11-12T20:45:16.963503716Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 20:45:16.976114 containerd[1545]: time="2024-11-12T20:45:16.976086091Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 20:45:18.113123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount45376038.mount: Deactivated successfully. Nov 12 20:45:18.138116 containerd[1545]: time="2024-11-12T20:45:18.137585964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:18.142210 containerd[1545]: time="2024-11-12T20:45:18.142182546Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Nov 12 20:45:18.148952 containerd[1545]: time="2024-11-12T20:45:18.148921978Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:18.155816 containerd[1545]: time="2024-11-12T20:45:18.155786229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:18.156338 containerd[1545]: time="2024-11-12T20:45:18.156245607Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.180058863s" Nov 12 
20:45:18.156338 containerd[1545]: time="2024-11-12T20:45:18.156265762Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 12 20:45:18.172605 containerd[1545]: time="2024-11-12T20:45:18.172509757Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 12 20:45:18.965599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2736450980.mount: Deactivated successfully. Nov 12 20:45:19.642991 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 12 20:45:19.651755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:45:20.110798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:45:20.111835 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:45:20.215946 kubelet[2242]: E1112 20:45:20.215860 2242 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:45:20.217005 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:45:20.217093 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 12 20:45:20.721003 containerd[1545]: time="2024-11-12T20:45:20.720966715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:20.726667 containerd[1545]: time="2024-11-12T20:45:20.726637562Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Nov 12 20:45:20.734304 containerd[1545]: time="2024-11-12T20:45:20.734278014Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:20.744336 containerd[1545]: time="2024-11-12T20:45:20.744287953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:20.745070 containerd[1545]: time="2024-11-12T20:45:20.744956638Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.572420511s" Nov 12 20:45:20.745070 containerd[1545]: time="2024-11-12T20:45:20.744978550Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Nov 12 20:45:22.978878 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:45:22.989708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:45:23.003519 systemd[1]: Reloading requested from client PID 2318 ('systemctl') (unit session-9.scope)... Nov 12 20:45:23.003528 systemd[1]: Reloading... 
Nov 12 20:45:23.082601 zram_generator::config[2355]: No configuration found.
Nov 12 20:45:23.147808 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Nov 12 20:45:23.162751 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:45:23.206194 systemd[1]: Reloading finished in 202 ms.
Nov 12 20:45:23.230414 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 12 20:45:23.230496 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 12 20:45:23.230719 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:45:23.235777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:45:23.605145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:45:23.608582 (kubelet)[2423]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 12 20:45:23.660251 kubelet[2423]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 20:45:23.660251 kubelet[2423]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 12 20:45:23.660251 kubelet[2423]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 20:45:23.660482 kubelet[2423]: I1112 20:45:23.660280 2423 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 12 20:45:24.071136 kubelet[2423]: I1112 20:45:24.070690 2423 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Nov 12 20:45:24.071136 kubelet[2423]: I1112 20:45:24.070708 2423 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 12 20:45:24.071136 kubelet[2423]: I1112 20:45:24.070838 2423 server.go:919] "Client rotation is on, will bootstrap in background"
Nov 12 20:45:24.109043 kubelet[2423]: E1112 20:45:24.109019 2423 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.104:6443: connect: connection refused
Nov 12 20:45:24.109809 kubelet[2423]: I1112 20:45:24.109745 2423 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 12 20:45:24.174483 kubelet[2423]: I1112 20:45:24.174415 2423 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 12 20:45:24.181887 kubelet[2423]: I1112 20:45:24.181868 2423 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 12 20:45:24.189555 kubelet[2423]: I1112 20:45:24.189533 2423 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Nov 12 20:45:24.193552 kubelet[2423]: I1112 20:45:24.193535 2423 topology_manager.go:138] "Creating topology manager with none policy"
Nov 12 20:45:24.193589 kubelet[2423]: I1112 20:45:24.193555 2423 container_manager_linux.go:301] "Creating device plugin manager"
Nov 12 20:45:24.193645 kubelet[2423]: I1112 20:45:24.193632 2423 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 20:45:24.193764 kubelet[2423]: I1112 20:45:24.193709 2423 kubelet.go:396] "Attempting to sync node with API server"
Nov 12 20:45:24.193764 kubelet[2423]: I1112 20:45:24.193722 2423 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 12 20:45:24.199143 kubelet[2423]: I1112 20:45:24.199017 2423 kubelet.go:312] "Adding apiserver pod source"
Nov 12 20:45:24.199143 kubelet[2423]: I1112 20:45:24.199030 2423 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 12 20:45:24.199471 kubelet[2423]: W1112 20:45:24.199443 2423 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Nov 12 20:45:24.199491 kubelet[2423]: E1112 20:45:24.199477 2423 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Nov 12 20:45:24.201983 kubelet[2423]: W1112 20:45:24.201959 2423 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Nov 12 20:45:24.201983 kubelet[2423]: E1112 20:45:24.201983 2423 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Nov 12 20:45:24.204817 kubelet[2423]: I1112 20:45:24.204804 2423 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 12 20:45:24.222705 kubelet[2423]: I1112 20:45:24.222679 2423 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 12 20:45:24.232733 kubelet[2423]: W1112 20:45:24.232714 2423 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 12 20:45:24.233114 kubelet[2423]: I1112 20:45:24.233101 2423 server.go:1256] "Started kubelet"
Nov 12 20:45:24.233562 kubelet[2423]: I1112 20:45:24.233257 2423 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Nov 12 20:45:24.233880 kubelet[2423]: I1112 20:45:24.233872 2423 server.go:461] "Adding debug handlers to kubelet server"
Nov 12 20:45:24.256339 kubelet[2423]: I1112 20:45:24.256056 2423 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 12 20:45:24.256339 kubelet[2423]: I1112 20:45:24.256216 2423 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 12 20:45:24.263140 kubelet[2423]: I1112 20:45:24.262774 2423 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 12 20:45:24.266155 kubelet[2423]: E1112 20:45:24.266141 2423 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.104:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18075370764784ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 20:45:24.233086126 +0000 UTC m=+0.622269115,LastTimestamp:2024-11-12 20:45:24.233086126 +0000 UTC m=+0.622269115,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 12 20:45:24.274904 kubelet[2423]: I1112 20:45:24.274599 2423 volume_manager.go:291] "Starting Kubelet Volume Manager"
Nov 12 20:45:24.275267 kubelet[2423]: E1112 20:45:24.275254 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="200ms"
Nov 12 20:45:24.275267 kubelet[2423]: I1112 20:45:24.275268 2423 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Nov 12 20:45:24.275444 kubelet[2423]: W1112 20:45:24.275409 2423 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Nov 12 20:45:24.275466 kubelet[2423]: E1112 20:45:24.275447 2423 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Nov 12 20:45:24.275484 kubelet[2423]: I1112 20:45:24.275478 2423 reconciler_new.go:29] "Reconciler: start to sync state"
Nov 12 20:45:24.286690 kubelet[2423]: I1112 20:45:24.286672 2423 factory.go:221] Registration of the systemd container factory successfully
Nov 12 20:45:24.286752 kubelet[2423]: I1112 20:45:24.286741 2423 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 12 20:45:24.292339 kubelet[2423]: I1112 20:45:24.292325 2423 factory.go:221] Registration of the containerd container factory successfully
Nov 12 20:45:24.295814 kubelet[2423]: I1112 20:45:24.295602 2423 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 12 20:45:24.298690 kubelet[2423]: E1112 20:45:24.298644 2423 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 12 20:45:24.299057 kubelet[2423]: I1112 20:45:24.299027 2423 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 12 20:45:24.299057 kubelet[2423]: I1112 20:45:24.299042 2423 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 12 20:45:24.299057 kubelet[2423]: I1112 20:45:24.299051 2423 kubelet.go:2329] "Starting kubelet main sync loop"
Nov 12 20:45:24.299127 kubelet[2423]: E1112 20:45:24.299074 2423 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 12 20:45:24.305417 kubelet[2423]: W1112 20:45:24.305320 2423 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Nov 12 20:45:24.305417 kubelet[2423]: E1112 20:45:24.305357 2423 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Nov 12 20:45:24.312853 kubelet[2423]: I1112 20:45:24.312816 2423 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 12 20:45:24.312853 kubelet[2423]: I1112 20:45:24.312825 2423 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 12 20:45:24.312853 kubelet[2423]: I1112 20:45:24.312835 2423 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 20:45:24.326731 kubelet[2423]: I1112 20:45:24.326017 2423 policy_none.go:49] "None policy: Start"
Nov 12 20:45:24.326731 kubelet[2423]: I1112 20:45:24.326346 2423 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 12 20:45:24.326731 kubelet[2423]: I1112 20:45:24.326357 2423 state_mem.go:35] "Initializing new in-memory state store"
Nov 12 20:45:24.358649 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 12 20:45:24.368186 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 12 20:45:24.370461 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 12 20:45:24.375689 kubelet[2423]: I1112 20:45:24.375678 2423 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Nov 12 20:45:24.375968 kubelet[2423]: E1112 20:45:24.375956 2423 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost"
Nov 12 20:45:24.380560 kubelet[2423]: I1112 20:45:24.380515 2423 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 12 20:45:24.380675 kubelet[2423]: I1112 20:45:24.380664 2423 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 12 20:45:24.381252 kubelet[2423]: E1112 20:45:24.381245 2423 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Nov 12 20:45:24.400157 kubelet[2423]: I1112 20:45:24.399697 2423 topology_manager.go:215] "Topology Admit Handler" podUID="9ed1dd4d1f440435597beb28b35df6c9" podNamespace="kube-system" podName="kube-apiserver-localhost"
Nov 12 20:45:24.400517 kubelet[2423]: I1112 20:45:24.400508 2423 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Nov 12 20:45:24.401113 kubelet[2423]: I1112 20:45:24.401087 2423 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost"
Nov 12 20:45:24.404539 systemd[1]: Created slice kubepods-burstable-pod9ed1dd4d1f440435597beb28b35df6c9.slice - libcontainer container kubepods-burstable-pod9ed1dd4d1f440435597beb28b35df6c9.slice.
Nov 12 20:45:24.424738 systemd[1]: Created slice kubepods-burstable-pod33932df710fd78419c0859d7fa44b8e7.slice - libcontainer container kubepods-burstable-pod33932df710fd78419c0859d7fa44b8e7.slice.
Nov 12 20:45:24.433201 systemd[1]: Created slice kubepods-burstable-podc7145bec6839b5d7dcb0c5beff5515b4.slice - libcontainer container kubepods-burstable-podc7145bec6839b5d7dcb0c5beff5515b4.slice.
Nov 12 20:45:24.476562 kubelet[2423]: E1112 20:45:24.476526 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="400ms"
Nov 12 20:45:24.577361 kubelet[2423]: I1112 20:45:24.576877 2423 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ed1dd4d1f440435597beb28b35df6c9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9ed1dd4d1f440435597beb28b35df6c9\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 20:45:24.577361 kubelet[2423]: I1112 20:45:24.576907 2423 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ed1dd4d1f440435597beb28b35df6c9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9ed1dd4d1f440435597beb28b35df6c9\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 20:45:24.577361 kubelet[2423]: I1112 20:45:24.576925 2423 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ed1dd4d1f440435597beb28b35df6c9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9ed1dd4d1f440435597beb28b35df6c9\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 20:45:24.577361 kubelet[2423]: I1112 20:45:24.576936 2423 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:45:24.577361 kubelet[2423]: I1112 20:45:24.576949 2423 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:45:24.577519 kubelet[2423]: I1112 20:45:24.576961 2423 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:45:24.577519 kubelet[2423]: I1112 20:45:24.576973 2423 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:45:24.577519 kubelet[2423]: I1112 20:45:24.576984 2423 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost"
Nov 12 20:45:24.577519 kubelet[2423]: I1112 20:45:24.576996 2423 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:45:24.577993 kubelet[2423]: I1112 20:45:24.577695 2423 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Nov 12 20:45:24.577993 kubelet[2423]: E1112 20:45:24.577882 2423 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost"
Nov 12 20:45:24.724290 containerd[1545]: time="2024-11-12T20:45:24.724256902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9ed1dd4d1f440435597beb28b35df6c9,Namespace:kube-system,Attempt:0,}"
Nov 12 20:45:24.729768 kubelet[2423]: E1112 20:45:24.729730 2423 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.104:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18075370764784ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 20:45:24.233086126 +0000 UTC m=+0.622269115,LastTimestamp:2024-11-12 20:45:24.233086126 +0000 UTC m=+0.622269115,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 12 20:45:24.743804 containerd[1545]: time="2024-11-12T20:45:24.743492160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,}"
Nov 12 20:45:24.744024 containerd[1545]: time="2024-11-12T20:45:24.744011426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,}"
Nov 12 20:45:24.877150 kubelet[2423]: E1112 20:45:24.877090 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="800ms"
Nov 12 20:45:24.979306 kubelet[2423]: I1112 20:45:24.979236 2423 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Nov 12 20:45:24.979533 kubelet[2423]: E1112 20:45:24.979521 2423 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost"
Nov 12 20:45:25.329243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1520402764.mount: Deactivated successfully.
Nov 12 20:45:25.333665 containerd[1545]: time="2024-11-12T20:45:25.333602219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:45:25.334141 containerd[1545]: time="2024-11-12T20:45:25.334114298Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Nov 12 20:45:25.335003 containerd[1545]: time="2024-11-12T20:45:25.334826400Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:45:25.335003 containerd[1545]: time="2024-11-12T20:45:25.334919048Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 20:45:25.335253 containerd[1545]: time="2024-11-12T20:45:25.335238110Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 20:45:25.335511 containerd[1545]: time="2024-11-12T20:45:25.335492091Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:45:25.338709 containerd[1545]: time="2024-11-12T20:45:25.338696538Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:45:25.339399 containerd[1545]: time="2024-11-12T20:45:25.339258217Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 595.708783ms"
Nov 12 20:45:25.340412 containerd[1545]: time="2024-11-12T20:45:25.340368197Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 596.281329ms"
Nov 12 20:45:25.341867 containerd[1545]: time="2024-11-12T20:45:25.341570891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:45:25.341867 containerd[1545]: time="2024-11-12T20:45:25.341823687Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 617.49494ms"
Nov 12 20:45:25.354422 kubelet[2423]: W1112 20:45:25.354356 2423 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Nov 12 20:45:25.354422 kubelet[2423]: E1112 20:45:25.354408 2423 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Nov 12 20:45:25.365637 kubelet[2423]: W1112 20:45:25.365591 2423 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Nov 12 20:45:25.365637 kubelet[2423]: E1112 20:45:25.365626 2423 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Nov 12 20:45:25.494613 kubelet[2423]: W1112 20:45:25.494569 2423 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Nov 12 20:45:25.494613 kubelet[2423]: E1112 20:45:25.494613 2423 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Nov 12 20:45:25.622197 containerd[1545]: time="2024-11-12T20:45:25.621723326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:45:25.622197 containerd[1545]: time="2024-11-12T20:45:25.621758177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:45:25.622197 containerd[1545]: time="2024-11-12T20:45:25.621768591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:45:25.622197 containerd[1545]: time="2024-11-12T20:45:25.621820087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:45:25.622197 containerd[1545]: time="2024-11-12T20:45:25.622074437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:45:25.622197 containerd[1545]: time="2024-11-12T20:45:25.622100461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:45:25.622197 containerd[1545]: time="2024-11-12T20:45:25.622110381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:45:25.624071 containerd[1545]: time="2024-11-12T20:45:25.622438825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:45:25.628573 containerd[1545]: time="2024-11-12T20:45:25.628507049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:45:25.628701 containerd[1545]: time="2024-11-12T20:45:25.628685422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:45:25.628779 containerd[1545]: time="2024-11-12T20:45:25.628763986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:45:25.628928 containerd[1545]: time="2024-11-12T20:45:25.628879914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:45:25.654696 systemd[1]: Started cri-containerd-a4154cdbd92cdbccd858eabb4283b7cd8e0bea130f2d978dee7a6d1858d704f3.scope - libcontainer container a4154cdbd92cdbccd858eabb4283b7cd8e0bea130f2d978dee7a6d1858d704f3.
Nov 12 20:45:25.657809 systemd[1]: Started cri-containerd-4afe0c63d51f277c1e839f293d6990791ee5df64db4e15cf589bbfb107b346c5.scope - libcontainer container 4afe0c63d51f277c1e839f293d6990791ee5df64db4e15cf589bbfb107b346c5.
Nov 12 20:45:25.658937 systemd[1]: Started cri-containerd-fb766d3a028c9a3ffd1a8f6092e74def2399871186f61660831da2729d275ba5.scope - libcontainer container fb766d3a028c9a3ffd1a8f6092e74def2399871186f61660831da2729d275ba5.
Nov 12 20:45:25.690623 kubelet[2423]: E1112 20:45:25.690600 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="1.6s"
Nov 12 20:45:25.716600 containerd[1545]: time="2024-11-12T20:45:25.716578990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4afe0c63d51f277c1e839f293d6990791ee5df64db4e15cf589bbfb107b346c5\""
Nov 12 20:45:25.725121 containerd[1545]: time="2024-11-12T20:45:25.725091424Z" level=info msg="CreateContainer within sandbox \"4afe0c63d51f277c1e839f293d6990791ee5df64db4e15cf589bbfb107b346c5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 12 20:45:25.729399 containerd[1545]: time="2024-11-12T20:45:25.729290844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb766d3a028c9a3ffd1a8f6092e74def2399871186f61660831da2729d275ba5\""
Nov 12 20:45:25.731913 containerd[1545]: time="2024-11-12T20:45:25.731896889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9ed1dd4d1f440435597beb28b35df6c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4154cdbd92cdbccd858eabb4283b7cd8e0bea130f2d978dee7a6d1858d704f3\""
Nov 12 20:45:25.734315 containerd[1545]: time="2024-11-12T20:45:25.734179690Z" level=info msg="CreateContainer within sandbox \"fb766d3a028c9a3ffd1a8f6092e74def2399871186f61660831da2729d275ba5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 12 20:45:25.735129 containerd[1545]: time="2024-11-12T20:45:25.735097719Z" level=info msg="CreateContainer within sandbox \"a4154cdbd92cdbccd858eabb4283b7cd8e0bea130f2d978dee7a6d1858d704f3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 12 20:45:25.777901 kubelet[2423]: W1112 20:45:25.767533 2423 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Nov 12 20:45:25.778146 kubelet[2423]: E1112 20:45:25.777912 2423 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Nov 12 20:45:25.780472 kubelet[2423]: I1112 20:45:25.780453 2423 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Nov 12 20:45:25.780678 kubelet[2423]: E1112 20:45:25.780668 2423 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost"
Nov 12 20:45:25.843613 containerd[1545]: time="2024-11-12T20:45:25.843581975Z" level=info msg="CreateContainer within sandbox \"4afe0c63d51f277c1e839f293d6990791ee5df64db4e15cf589bbfb107b346c5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1f10ae9a200ea44107273e7c11907ab8e93ac0a2d214c12a0bf4ec4c91ceeee1\""
Nov 12 20:45:25.844564 containerd[1545]: time="2024-11-12T20:45:25.844009199Z" level=info msg="StartContainer for \"1f10ae9a200ea44107273e7c11907ab8e93ac0a2d214c12a0bf4ec4c91ceeee1\""
Nov 12 20:45:25.845887 containerd[1545]: time="2024-11-12T20:45:25.845871865Z" level=info msg="CreateContainer within sandbox \"fb766d3a028c9a3ffd1a8f6092e74def2399871186f61660831da2729d275ba5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns
container id \"1c13cd7e83ffa2baa251765d7aec4e95626bde7278d80fea78aa3c7f851193e3\"" Nov 12 20:45:25.846699 containerd[1545]: time="2024-11-12T20:45:25.846684025Z" level=info msg="CreateContainer within sandbox \"a4154cdbd92cdbccd858eabb4283b7cd8e0bea130f2d978dee7a6d1858d704f3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6df7be89ab653775bc74a11717f67c14217a058d893667927334f73ae67e39db\"" Nov 12 20:45:25.847057 containerd[1545]: time="2024-11-12T20:45:25.847045421Z" level=info msg="StartContainer for \"1c13cd7e83ffa2baa251765d7aec4e95626bde7278d80fea78aa3c7f851193e3\"" Nov 12 20:45:25.851294 containerd[1545]: time="2024-11-12T20:45:25.851266910Z" level=info msg="StartContainer for \"6df7be89ab653775bc74a11717f67c14217a058d893667927334f73ae67e39db\"" Nov 12 20:45:25.864867 systemd[1]: Started cri-containerd-1f10ae9a200ea44107273e7c11907ab8e93ac0a2d214c12a0bf4ec4c91ceeee1.scope - libcontainer container 1f10ae9a200ea44107273e7c11907ab8e93ac0a2d214c12a0bf4ec4c91ceeee1. Nov 12 20:45:25.875690 systemd[1]: Started cri-containerd-1c13cd7e83ffa2baa251765d7aec4e95626bde7278d80fea78aa3c7f851193e3.scope - libcontainer container 1c13cd7e83ffa2baa251765d7aec4e95626bde7278d80fea78aa3c7f851193e3. Nov 12 20:45:25.878710 systemd[1]: Started cri-containerd-6df7be89ab653775bc74a11717f67c14217a058d893667927334f73ae67e39db.scope - libcontainer container 6df7be89ab653775bc74a11717f67c14217a058d893667927334f73ae67e39db. 
Nov 12 20:45:25.917420 containerd[1545]: time="2024-11-12T20:45:25.917397543Z" level=info msg="StartContainer for \"1f10ae9a200ea44107273e7c11907ab8e93ac0a2d214c12a0bf4ec4c91ceeee1\" returns successfully" Nov 12 20:45:25.922564 containerd[1545]: time="2024-11-12T20:45:25.922328331Z" level=info msg="StartContainer for \"1c13cd7e83ffa2baa251765d7aec4e95626bde7278d80fea78aa3c7f851193e3\" returns successfully" Nov 12 20:45:25.931821 containerd[1545]: time="2024-11-12T20:45:25.931793618Z" level=info msg="StartContainer for \"6df7be89ab653775bc74a11717f67c14217a058d893667927334f73ae67e39db\" returns successfully" Nov 12 20:45:26.275285 kubelet[2423]: E1112 20:45:26.275262 2423 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.104:6443: connect: connection refused Nov 12 20:45:27.382268 kubelet[2423]: I1112 20:45:27.382249 2423 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:45:27.554037 kubelet[2423]: E1112 20:45:27.554016 2423 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 12 20:45:27.641051 kubelet[2423]: I1112 20:45:27.640779 2423 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 20:45:27.650265 kubelet[2423]: E1112 20:45:27.650246 2423 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:45:27.750368 kubelet[2423]: E1112 20:45:27.750331 2423 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:45:27.850991 kubelet[2423]: E1112 20:45:27.850965 2423 kubelet_node_status.go:462] "Error getting the current node from lister" err="node 
\"localhost\" not found" Nov 12 20:45:27.951619 kubelet[2423]: E1112 20:45:27.951586 2423 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:45:28.052131 kubelet[2423]: E1112 20:45:28.052095 2423 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:45:28.152905 kubelet[2423]: E1112 20:45:28.152878 2423 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:45:28.202700 kubelet[2423]: I1112 20:45:28.202587 2423 apiserver.go:52] "Watching apiserver" Nov 12 20:45:28.275648 kubelet[2423]: I1112 20:45:28.275608 2423 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 20:45:29.786734 systemd[1]: Reloading requested from client PID 2698 ('systemctl') (unit session-9.scope)... Nov 12 20:45:29.786912 systemd[1]: Reloading... Nov 12 20:45:29.840570 zram_generator::config[2740]: No configuration found. Nov 12 20:45:29.912131 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 12 20:45:29.927085 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:45:29.977890 systemd[1]: Reloading finished in 190 ms. Nov 12 20:45:30.001360 kubelet[2423]: I1112 20:45:30.001340 2423 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:45:30.001938 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:45:30.011715 systemd[1]: kubelet.service: Deactivated successfully. 
Nov 12 20:45:30.011913 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:45:30.016790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:45:30.205307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:45:30.215209 (kubelet)[2803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:45:30.430584 kubelet[2803]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:45:30.430584 kubelet[2803]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:45:30.430584 kubelet[2803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:45:30.430801 kubelet[2803]: I1112 20:45:30.430618 2803 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:45:30.433051 kubelet[2803]: I1112 20:45:30.433036 2803 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:45:30.433051 kubelet[2803]: I1112 20:45:30.433049 2803 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:45:30.433211 kubelet[2803]: I1112 20:45:30.433201 2803 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:45:30.434141 kubelet[2803]: I1112 20:45:30.434129 2803 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 12 20:45:30.435564 kubelet[2803]: I1112 20:45:30.435293 2803 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:45:30.440410 kubelet[2803]: I1112 20:45:30.440398 2803 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 20:45:30.440895 kubelet[2803]: I1112 20:45:30.440638 2803 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:45:30.440895 kubelet[2803]: I1112 20:45:30.440739 2803 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":
null} Nov 12 20:45:30.440895 kubelet[2803]: I1112 20:45:30.440752 2803 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:45:30.440895 kubelet[2803]: I1112 20:45:30.440759 2803 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:45:30.440895 kubelet[2803]: I1112 20:45:30.440786 2803 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:45:30.440895 kubelet[2803]: I1112 20:45:30.440839 2803 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:45:30.441053 kubelet[2803]: I1112 20:45:30.440849 2803 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:45:30.441053 kubelet[2803]: I1112 20:45:30.440863 2803 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:45:30.441053 kubelet[2803]: I1112 20:45:30.440872 2803 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:45:30.441841 kubelet[2803]: I1112 20:45:30.441833 2803 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:45:30.443679 kubelet[2803]: I1112 20:45:30.442017 2803 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:45:30.443679 kubelet[2803]: I1112 20:45:30.442243 2803 server.go:1256] "Started kubelet" Nov 12 20:45:30.443679 kubelet[2803]: I1112 20:45:30.443274 2803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:45:30.446849 kubelet[2803]: I1112 20:45:30.445237 2803 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:45:30.446849 kubelet[2803]: I1112 20:45:30.445747 2803 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:45:30.446849 kubelet[2803]: I1112 20:45:30.446282 2803 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:45:30.446849 kubelet[2803]: I1112 20:45:30.446371 2803 server.go:233] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:45:30.456490 kubelet[2803]: E1112 20:45:30.455786 2803 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:45:30.459223 kubelet[2803]: I1112 20:45:30.458718 2803 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:45:30.459223 kubelet[2803]: I1112 20:45:30.458768 2803 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 20:45:30.459223 kubelet[2803]: I1112 20:45:30.458837 2803 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 20:45:30.460057 kubelet[2803]: I1112 20:45:30.459443 2803 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:45:30.460057 kubelet[2803]: I1112 20:45:30.459495 2803 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:45:30.462976 kubelet[2803]: I1112 20:45:30.462961 2803 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:45:30.464124 kubelet[2803]: I1112 20:45:30.464057 2803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:45:30.474751 kubelet[2803]: I1112 20:45:30.474732 2803 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 20:45:30.474827 kubelet[2803]: I1112 20:45:30.474762 2803 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:45:30.474827 kubelet[2803]: I1112 20:45:30.474775 2803 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:45:30.474827 kubelet[2803]: E1112 20:45:30.474813 2803 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:45:30.511729 kubelet[2803]: I1112 20:45:30.511709 2803 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:45:30.511852 kubelet[2803]: I1112 20:45:30.511773 2803 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:45:30.511852 kubelet[2803]: I1112 20:45:30.511785 2803 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:45:30.511892 kubelet[2803]: I1112 20:45:30.511888 2803 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 20:45:30.511908 kubelet[2803]: I1112 20:45:30.511905 2803 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 20:45:30.511929 kubelet[2803]: I1112 20:45:30.511909 2803 policy_none.go:49] "None policy: Start" Nov 12 20:45:30.512678 kubelet[2803]: I1112 20:45:30.512177 2803 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:45:30.512678 kubelet[2803]: I1112 20:45:30.512189 2803 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:45:30.512678 kubelet[2803]: I1112 20:45:30.512289 2803 state_mem.go:75] "Updated machine memory state" Nov 12 20:45:30.515838 kubelet[2803]: I1112 20:45:30.515813 2803 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:45:30.516404 kubelet[2803]: I1112 20:45:30.516149 2803 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:45:30.560357 kubelet[2803]: I1112 20:45:30.560343 2803 kubelet_node_status.go:73] "Attempting to register 
node" node="localhost" Nov 12 20:45:30.565386 kubelet[2803]: I1112 20:45:30.565364 2803 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Nov 12 20:45:30.566005 kubelet[2803]: I1112 20:45:30.565417 2803 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 20:45:30.575766 kubelet[2803]: I1112 20:45:30.575745 2803 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 20:45:30.575848 kubelet[2803]: I1112 20:45:30.575812 2803 topology_manager.go:215] "Topology Admit Handler" podUID="9ed1dd4d1f440435597beb28b35df6c9" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 20:45:30.575848 kubelet[2803]: I1112 20:45:30.575836 2803 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 20:45:30.759870 kubelet[2803]: I1112 20:45:30.759741 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 20:45:30.759870 kubelet[2803]: I1112 20:45:30.759779 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ed1dd4d1f440435597beb28b35df6c9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9ed1dd4d1f440435597beb28b35df6c9\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:45:30.759870 kubelet[2803]: I1112 20:45:30.759806 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/9ed1dd4d1f440435597beb28b35df6c9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9ed1dd4d1f440435597beb28b35df6c9\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:45:30.759870 kubelet[2803]: I1112 20:45:30.759842 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:45:30.759870 kubelet[2803]: I1112 20:45:30.759866 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:45:30.760029 kubelet[2803]: I1112 20:45:30.759886 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:45:30.760029 kubelet[2803]: I1112 20:45:30.759917 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:45:30.760029 kubelet[2803]: I1112 20:45:30.759939 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ed1dd4d1f440435597beb28b35df6c9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9ed1dd4d1f440435597beb28b35df6c9\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:45:30.760029 kubelet[2803]: I1112 20:45:30.759986 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:45:31.442134 kubelet[2803]: I1112 20:45:31.442107 2803 apiserver.go:52] "Watching apiserver" Nov 12 20:45:31.525386 kubelet[2803]: E1112 20:45:31.525359 2803 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 12 20:45:31.547018 kubelet[2803]: I1112 20:45:31.546986 2803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.54695179 podStartE2EDuration="1.54695179s" podCreationTimestamp="2024-11-12 20:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:45:31.528284694 +0000 UTC m=+1.186553354" watchObservedRunningTime="2024-11-12 20:45:31.54695179 +0000 UTC m=+1.205220447" Nov 12 20:45:31.559390 kubelet[2803]: I1112 20:45:31.559362 2803 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 20:45:31.597728 kubelet[2803]: I1112 20:45:31.597701 2803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.597673952 podStartE2EDuration="1.597673952s" podCreationTimestamp="2024-11-12 20:45:30 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:45:31.596168404 +0000 UTC m=+1.254437063" watchObservedRunningTime="2024-11-12 20:45:31.597673952 +0000 UTC m=+1.255942618" Nov 12 20:45:31.597846 kubelet[2803]: I1112 20:45:31.597752 2803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.597739874 podStartE2EDuration="1.597739874s" podCreationTimestamp="2024-11-12 20:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:45:31.549628812 +0000 UTC m=+1.207897477" watchObservedRunningTime="2024-11-12 20:45:31.597739874 +0000 UTC m=+1.256008540" Nov 12 20:45:34.656996 sudo[1853]: pam_unix(sudo:session): session closed for user root Nov 12 20:45:34.658618 sshd[1850]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:34.660560 systemd[1]: sshd@6-139.178.70.104:22-139.178.68.195:37004.service: Deactivated successfully. Nov 12 20:45:34.661689 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 20:45:34.661816 systemd[1]: session-9.scope: Consumed 3.321s CPU time, 189.9M memory peak, 0B memory swap peak. Nov 12 20:45:34.662217 systemd-logind[1522]: Session 9 logged out. Waiting for processes to exit. Nov 12 20:45:34.662785 systemd-logind[1522]: Removed session 9. Nov 12 20:45:43.305310 kubelet[2803]: I1112 20:45:43.305284 2803 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 20:45:43.306301 containerd[1545]: time="2024-11-12T20:45:43.305933087Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 12 20:45:43.306622 kubelet[2803]: I1112 20:45:43.306311 2803 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 20:45:43.536922 kubelet[2803]: I1112 20:45:43.536695 2803 topology_manager.go:215] "Topology Admit Handler" podUID="66074096-dbe6-46a3-9d46-6f329973a02c" podNamespace="kube-system" podName="kube-proxy-lfm2r" Nov 12 20:45:43.545184 systemd[1]: Created slice kubepods-besteffort-pod66074096_dbe6_46a3_9d46_6f329973a02c.slice - libcontainer container kubepods-besteffort-pod66074096_dbe6_46a3_9d46_6f329973a02c.slice. Nov 12 20:45:43.641461 kubelet[2803]: I1112 20:45:43.641339 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/66074096-dbe6-46a3-9d46-6f329973a02c-kube-proxy\") pod \"kube-proxy-lfm2r\" (UID: \"66074096-dbe6-46a3-9d46-6f329973a02c\") " pod="kube-system/kube-proxy-lfm2r" Nov 12 20:45:43.641461 kubelet[2803]: I1112 20:45:43.641376 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66074096-dbe6-46a3-9d46-6f329973a02c-lib-modules\") pod \"kube-proxy-lfm2r\" (UID: \"66074096-dbe6-46a3-9d46-6f329973a02c\") " pod="kube-system/kube-proxy-lfm2r" Nov 12 20:45:43.641461 kubelet[2803]: I1112 20:45:43.641398 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66074096-dbe6-46a3-9d46-6f329973a02c-xtables-lock\") pod \"kube-proxy-lfm2r\" (UID: \"66074096-dbe6-46a3-9d46-6f329973a02c\") " pod="kube-system/kube-proxy-lfm2r" Nov 12 20:45:43.641461 kubelet[2803]: I1112 20:45:43.641411 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvjsq\" (UniqueName: \"kubernetes.io/projected/66074096-dbe6-46a3-9d46-6f329973a02c-kube-api-access-lvjsq\") pod 
\"kube-proxy-lfm2r\" (UID: \"66074096-dbe6-46a3-9d46-6f329973a02c\") " pod="kube-system/kube-proxy-lfm2r" Nov 12 20:45:43.763181 kubelet[2803]: E1112 20:45:43.763142 2803 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 12 20:45:43.763181 kubelet[2803]: E1112 20:45:43.763185 2803 projected.go:200] Error preparing data for projected volume kube-api-access-lvjsq for pod kube-system/kube-proxy-lfm2r: configmap "kube-root-ca.crt" not found Nov 12 20:45:43.763307 kubelet[2803]: E1112 20:45:43.763260 2803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/66074096-dbe6-46a3-9d46-6f329973a02c-kube-api-access-lvjsq podName:66074096-dbe6-46a3-9d46-6f329973a02c nodeName:}" failed. No retries permitted until 2024-11-12 20:45:44.263240452 +0000 UTC m=+13.921509111 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lvjsq" (UniqueName: "kubernetes.io/projected/66074096-dbe6-46a3-9d46-6f329973a02c-kube-api-access-lvjsq") pod "kube-proxy-lfm2r" (UID: "66074096-dbe6-46a3-9d46-6f329973a02c") : configmap "kube-root-ca.crt" not found Nov 12 20:45:44.277907 kubelet[2803]: I1112 20:45:44.277883 2803 topology_manager.go:215] "Topology Admit Handler" podUID="73ac0910-ca54-4143-9d4a-e9c526ffebd3" podNamespace="tigera-operator" podName="tigera-operator-56b74f76df-jttr2" Nov 12 20:45:44.284927 systemd[1]: Created slice kubepods-besteffort-pod73ac0910_ca54_4143_9d4a_e9c526ffebd3.slice - libcontainer container kubepods-besteffort-pod73ac0910_ca54_4143_9d4a_e9c526ffebd3.slice. 
Nov 12 20:45:44.446593 kubelet[2803]: I1112 20:45:44.446519 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/73ac0910-ca54-4143-9d4a-e9c526ffebd3-var-lib-calico\") pod \"tigera-operator-56b74f76df-jttr2\" (UID: \"73ac0910-ca54-4143-9d4a-e9c526ffebd3\") " pod="tigera-operator/tigera-operator-56b74f76df-jttr2" Nov 12 20:45:44.446593 kubelet[2803]: I1112 20:45:44.446561 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcvv7\" (UniqueName: \"kubernetes.io/projected/73ac0910-ca54-4143-9d4a-e9c526ffebd3-kube-api-access-dcvv7\") pod \"tigera-operator-56b74f76df-jttr2\" (UID: \"73ac0910-ca54-4143-9d4a-e9c526ffebd3\") " pod="tigera-operator/tigera-operator-56b74f76df-jttr2" Nov 12 20:45:44.452234 containerd[1545]: time="2024-11-12T20:45:44.452206299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lfm2r,Uid:66074096-dbe6-46a3-9d46-6f329973a02c,Namespace:kube-system,Attempt:0,}" Nov 12 20:45:44.473069 containerd[1545]: time="2024-11-12T20:45:44.472993877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:45:44.473069 containerd[1545]: time="2024-11-12T20:45:44.473040652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:45:44.473069 containerd[1545]: time="2024-11-12T20:45:44.473054539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:44.473309 containerd[1545]: time="2024-11-12T20:45:44.473128014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:44.490693 systemd[1]: Started cri-containerd-ad4a73cf4ddaecd335d11f6278a479ab1e3cfcfc1ce9ebdf8c30f37ac1ee851c.scope - libcontainer container ad4a73cf4ddaecd335d11f6278a479ab1e3cfcfc1ce9ebdf8c30f37ac1ee851c. Nov 12 20:45:44.506384 containerd[1545]: time="2024-11-12T20:45:44.506350074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lfm2r,Uid:66074096-dbe6-46a3-9d46-6f329973a02c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad4a73cf4ddaecd335d11f6278a479ab1e3cfcfc1ce9ebdf8c30f37ac1ee851c\"" Nov 12 20:45:44.508276 containerd[1545]: time="2024-11-12T20:45:44.508241704Z" level=info msg="CreateContainer within sandbox \"ad4a73cf4ddaecd335d11f6278a479ab1e3cfcfc1ce9ebdf8c30f37ac1ee851c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 20:45:44.519132 containerd[1545]: time="2024-11-12T20:45:44.519106969Z" level=info msg="CreateContainer within sandbox \"ad4a73cf4ddaecd335d11f6278a479ab1e3cfcfc1ce9ebdf8c30f37ac1ee851c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6c3db2962449493b11505aff704f5076f7f235b581bb7047364e3549c20e4512\"" Nov 12 20:45:44.519856 containerd[1545]: time="2024-11-12T20:45:44.519738160Z" level=info msg="StartContainer for \"6c3db2962449493b11505aff704f5076f7f235b581bb7047364e3549c20e4512\"" Nov 12 20:45:44.536675 systemd[1]: Started cri-containerd-6c3db2962449493b11505aff704f5076f7f235b581bb7047364e3549c20e4512.scope - libcontainer container 6c3db2962449493b11505aff704f5076f7f235b581bb7047364e3549c20e4512. 
Nov 12 20:45:44.556593 containerd[1545]: time="2024-11-12T20:45:44.556526434Z" level=info msg="StartContainer for \"6c3db2962449493b11505aff704f5076f7f235b581bb7047364e3549c20e4512\" returns successfully" Nov 12 20:45:44.589370 containerd[1545]: time="2024-11-12T20:45:44.589329524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-jttr2,Uid:73ac0910-ca54-4143-9d4a-e9c526ffebd3,Namespace:tigera-operator,Attempt:0,}" Nov 12 20:45:44.601781 containerd[1545]: time="2024-11-12T20:45:44.601731963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:45:44.601901 containerd[1545]: time="2024-11-12T20:45:44.601768827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:45:44.601901 containerd[1545]: time="2024-11-12T20:45:44.601778887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:44.601901 containerd[1545]: time="2024-11-12T20:45:44.601834358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:44.615737 systemd[1]: Started cri-containerd-f2d7c67aed7321b5399ce0dd1cc08be21f6088ccd5bd315ba440381e76e3186c.scope - libcontainer container f2d7c67aed7321b5399ce0dd1cc08be21f6088ccd5bd315ba440381e76e3186c. 
Nov 12 20:45:44.643066 containerd[1545]: time="2024-11-12T20:45:44.643032674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-jttr2,Uid:73ac0910-ca54-4143-9d4a-e9c526ffebd3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f2d7c67aed7321b5399ce0dd1cc08be21f6088ccd5bd315ba440381e76e3186c\"" Nov 12 20:45:44.644197 containerd[1545]: time="2024-11-12T20:45:44.644101548Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 12 20:45:45.351125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3604767675.mount: Deactivated successfully. Nov 12 20:45:47.398774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1143087007.mount: Deactivated successfully. Nov 12 20:45:47.741644 containerd[1545]: time="2024-11-12T20:45:47.741563640Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:47.747349 containerd[1545]: time="2024-11-12T20:45:47.747305664Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763359" Nov 12 20:45:47.752537 containerd[1545]: time="2024-11-12T20:45:47.752522229Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:47.758815 containerd[1545]: time="2024-11-12T20:45:47.758772984Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:47.777118 containerd[1545]: time="2024-11-12T20:45:47.777031601Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest 
\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 3.132911393s" Nov 12 20:45:47.777118 containerd[1545]: time="2024-11-12T20:45:47.777054997Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\"" Nov 12 20:45:47.778880 containerd[1545]: time="2024-11-12T20:45:47.778795224Z" level=info msg="CreateContainer within sandbox \"f2d7c67aed7321b5399ce0dd1cc08be21f6088ccd5bd315ba440381e76e3186c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 12 20:45:47.852340 containerd[1545]: time="2024-11-12T20:45:47.852310170Z" level=info msg="CreateContainer within sandbox \"f2d7c67aed7321b5399ce0dd1cc08be21f6088ccd5bd315ba440381e76e3186c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5fbfbd71a1d4c554c5cad7047bdf0c6765931269cf9261bc4add82d3a3f9b72f\"" Nov 12 20:45:47.853251 containerd[1545]: time="2024-11-12T20:45:47.853124937Z" level=info msg="StartContainer for \"5fbfbd71a1d4c554c5cad7047bdf0c6765931269cf9261bc4add82d3a3f9b72f\"" Nov 12 20:45:47.876657 systemd[1]: Started cri-containerd-5fbfbd71a1d4c554c5cad7047bdf0c6765931269cf9261bc4add82d3a3f9b72f.scope - libcontainer container 5fbfbd71a1d4c554c5cad7047bdf0c6765931269cf9261bc4add82d3a3f9b72f. 
Nov 12 20:45:47.901335 containerd[1545]: time="2024-11-12T20:45:47.901211273Z" level=info msg="StartContainer for \"5fbfbd71a1d4c554c5cad7047bdf0c6765931269cf9261bc4add82d3a3f9b72f\" returns successfully" Nov 12 20:45:48.556400 kubelet[2803]: I1112 20:45:48.556316 2803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lfm2r" podStartSLOduration=5.556283304 podStartE2EDuration="5.556283304s" podCreationTimestamp="2024-11-12 20:45:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:45:45.563768246 +0000 UTC m=+15.222036912" watchObservedRunningTime="2024-11-12 20:45:48.556283304 +0000 UTC m=+18.214551963" Nov 12 20:45:50.514106 kubelet[2803]: I1112 20:45:50.514079 2803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-56b74f76df-jttr2" podStartSLOduration=3.3805533199999998 podStartE2EDuration="6.514053765s" podCreationTimestamp="2024-11-12 20:45:44 +0000 UTC" firstStartedPulling="2024-11-12 20:45:44.64382702 +0000 UTC m=+14.302095675" lastFinishedPulling="2024-11-12 20:45:47.777327466 +0000 UTC m=+17.435596120" observedRunningTime="2024-11-12 20:45:48.556920759 +0000 UTC m=+18.215189423" watchObservedRunningTime="2024-11-12 20:45:50.514053765 +0000 UTC m=+20.172322429" Nov 12 20:45:50.863880 kubelet[2803]: I1112 20:45:50.863795 2803 topology_manager.go:215] "Topology Admit Handler" podUID="6f9d6667-5b4a-434c-a25f-4a273757ef7f" podNamespace="calico-system" podName="calico-typha-568f9457f6-pftgn" Nov 12 20:45:50.907726 systemd[1]: Created slice kubepods-besteffort-pod6f9d6667_5b4a_434c_a25f_4a273757ef7f.slice - libcontainer container kubepods-besteffort-pod6f9d6667_5b4a_434c_a25f_4a273757ef7f.slice. 
Nov 12 20:45:50.967631 kubelet[2803]: I1112 20:45:50.967383 2803 topology_manager.go:215] "Topology Admit Handler" podUID="aa789dee-6d4d-4e09-8c29-cb02d5225385" podNamespace="calico-system" podName="calico-node-n2q5b" Nov 12 20:45:50.975653 systemd[1]: Created slice kubepods-besteffort-podaa789dee_6d4d_4e09_8c29_cb02d5225385.slice - libcontainer container kubepods-besteffort-podaa789dee_6d4d_4e09_8c29_cb02d5225385.slice. Nov 12 20:45:50.988954 kubelet[2803]: I1112 20:45:50.988883 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6f9d6667-5b4a-434c-a25f-4a273757ef7f-typha-certs\") pod \"calico-typha-568f9457f6-pftgn\" (UID: \"6f9d6667-5b4a-434c-a25f-4a273757ef7f\") " pod="calico-system/calico-typha-568f9457f6-pftgn" Nov 12 20:45:50.988954 kubelet[2803]: I1112 20:45:50.988923 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f9d6667-5b4a-434c-a25f-4a273757ef7f-tigera-ca-bundle\") pod \"calico-typha-568f9457f6-pftgn\" (UID: \"6f9d6667-5b4a-434c-a25f-4a273757ef7f\") " pod="calico-system/calico-typha-568f9457f6-pftgn" Nov 12 20:45:50.988954 kubelet[2803]: I1112 20:45:50.988941 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bbst\" (UniqueName: \"kubernetes.io/projected/6f9d6667-5b4a-434c-a25f-4a273757ef7f-kube-api-access-6bbst\") pod \"calico-typha-568f9457f6-pftgn\" (UID: \"6f9d6667-5b4a-434c-a25f-4a273757ef7f\") " pod="calico-system/calico-typha-568f9457f6-pftgn" Nov 12 20:45:51.068407 kubelet[2803]: I1112 20:45:51.068380 2803 topology_manager.go:215] "Topology Admit Handler" podUID="5475df33-a25f-4c6b-acfc-caf320cb59b1" podNamespace="calico-system" podName="csi-node-driver-mr2hw" Nov 12 20:45:51.068785 kubelet[2803]: E1112 20:45:51.068600 2803 pod_workers.go:1298] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mr2hw" podUID="5475df33-a25f-4c6b-acfc-caf320cb59b1" Nov 12 20:45:51.089672 kubelet[2803]: I1112 20:45:51.089647 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-var-run-calico\") pod \"calico-node-n2q5b\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " pod="calico-system/calico-node-n2q5b" Nov 12 20:45:51.089766 kubelet[2803]: I1112 20:45:51.089693 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-lib-modules\") pod \"calico-node-n2q5b\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " pod="calico-system/calico-node-n2q5b" Nov 12 20:45:51.089766 kubelet[2803]: I1112 20:45:51.089722 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-var-lib-calico\") pod \"calico-node-n2q5b\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " pod="calico-system/calico-node-n2q5b" Nov 12 20:45:51.089766 kubelet[2803]: I1112 20:45:51.089740 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-cni-bin-dir\") pod \"calico-node-n2q5b\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " pod="calico-system/calico-node-n2q5b" Nov 12 20:45:51.089766 kubelet[2803]: I1112 20:45:51.089753 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" 
(UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-cni-net-dir\") pod \"calico-node-n2q5b\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " pod="calico-system/calico-node-n2q5b" Nov 12 20:45:51.089766 kubelet[2803]: I1112 20:45:51.089766 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa789dee-6d4d-4e09-8c29-cb02d5225385-tigera-ca-bundle\") pod \"calico-node-n2q5b\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " pod="calico-system/calico-node-n2q5b" Nov 12 20:45:51.089865 kubelet[2803]: I1112 20:45:51.089781 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-flexvol-driver-host\") pod \"calico-node-n2q5b\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " pod="calico-system/calico-node-n2q5b" Nov 12 20:45:51.089865 kubelet[2803]: I1112 20:45:51.089805 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-xtables-lock\") pod \"calico-node-n2q5b\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " pod="calico-system/calico-node-n2q5b" Nov 12 20:45:51.089865 kubelet[2803]: I1112 20:45:51.089816 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-policysync\") pod \"calico-node-n2q5b\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " pod="calico-system/calico-node-n2q5b" Nov 12 20:45:51.089865 kubelet[2803]: I1112 20:45:51.089826 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-cni-log-dir\") pod \"calico-node-n2q5b\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " pod="calico-system/calico-node-n2q5b" Nov 12 20:45:51.089865 kubelet[2803]: I1112 20:45:51.089838 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr7gt\" (UniqueName: \"kubernetes.io/projected/aa789dee-6d4d-4e09-8c29-cb02d5225385-kube-api-access-cr7gt\") pod \"calico-node-n2q5b\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " pod="calico-system/calico-node-n2q5b" Nov 12 20:45:51.089968 kubelet[2803]: I1112 20:45:51.089850 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aa789dee-6d4d-4e09-8c29-cb02d5225385-node-certs\") pod \"calico-node-n2q5b\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " pod="calico-system/calico-node-n2q5b" Nov 12 20:45:51.190694 kubelet[2803]: I1112 20:45:51.190655 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5475df33-a25f-4c6b-acfc-caf320cb59b1-kubelet-dir\") pod \"csi-node-driver-mr2hw\" (UID: \"5475df33-a25f-4c6b-acfc-caf320cb59b1\") " pod="calico-system/csi-node-driver-mr2hw" Nov 12 20:45:51.190813 kubelet[2803]: I1112 20:45:51.190726 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5475df33-a25f-4c6b-acfc-caf320cb59b1-varrun\") pod \"csi-node-driver-mr2hw\" (UID: \"5475df33-a25f-4c6b-acfc-caf320cb59b1\") " pod="calico-system/csi-node-driver-mr2hw" Nov 12 20:45:51.190813 kubelet[2803]: I1112 20:45:51.190740 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptnm7\" (UniqueName: 
\"kubernetes.io/projected/5475df33-a25f-4c6b-acfc-caf320cb59b1-kube-api-access-ptnm7\") pod \"csi-node-driver-mr2hw\" (UID: \"5475df33-a25f-4c6b-acfc-caf320cb59b1\") " pod="calico-system/csi-node-driver-mr2hw" Nov 12 20:45:51.190813 kubelet[2803]: I1112 20:45:51.190776 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5475df33-a25f-4c6b-acfc-caf320cb59b1-registration-dir\") pod \"csi-node-driver-mr2hw\" (UID: \"5475df33-a25f-4c6b-acfc-caf320cb59b1\") " pod="calico-system/csi-node-driver-mr2hw" Nov 12 20:45:51.190813 kubelet[2803]: I1112 20:45:51.190794 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5475df33-a25f-4c6b-acfc-caf320cb59b1-socket-dir\") pod \"csi-node-driver-mr2hw\" (UID: \"5475df33-a25f-4c6b-acfc-caf320cb59b1\") " pod="calico-system/csi-node-driver-mr2hw" Nov 12 20:45:51.204700 kubelet[2803]: E1112 20:45:51.204628 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.204700 kubelet[2803]: W1112 20:45:51.204651 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.204700 kubelet[2803]: E1112 20:45:51.204670 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:51.230191 containerd[1545]: time="2024-11-12T20:45:51.230154218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-568f9457f6-pftgn,Uid:6f9d6667-5b4a-434c-a25f-4a273757ef7f,Namespace:calico-system,Attempt:0,}" Nov 12 20:45:51.245761 containerd[1545]: time="2024-11-12T20:45:51.245228430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:45:51.245761 containerd[1545]: time="2024-11-12T20:45:51.245279439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:45:51.245761 containerd[1545]: time="2024-11-12T20:45:51.245288639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:51.245761 containerd[1545]: time="2024-11-12T20:45:51.245348650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:51.259793 systemd[1]: Started cri-containerd-2b3c2eb902d353a9fec4ed249feb00bdc537e5e499395e6434303a31d881abe2.scope - libcontainer container 2b3c2eb902d353a9fec4ed249feb00bdc537e5e499395e6434303a31d881abe2. 
Nov 12 20:45:51.279047 containerd[1545]: time="2024-11-12T20:45:51.278998689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n2q5b,Uid:aa789dee-6d4d-4e09-8c29-cb02d5225385,Namespace:calico-system,Attempt:0,}" Nov 12 20:45:51.292095 kubelet[2803]: E1112 20:45:51.291990 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.292095 kubelet[2803]: W1112 20:45:51.292022 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.292095 kubelet[2803]: E1112 20:45:51.292061 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:51.292413 kubelet[2803]: E1112 20:45:51.292386 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.292413 kubelet[2803]: W1112 20:45:51.292392 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.292621 kubelet[2803]: E1112 20:45:51.292570 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:51.292832 kubelet[2803]: E1112 20:45:51.292771 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.292832 kubelet[2803]: W1112 20:45:51.292778 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.292832 kubelet[2803]: E1112 20:45:51.292789 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:51.292974 kubelet[2803]: E1112 20:45:51.292924 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.292974 kubelet[2803]: W1112 20:45:51.292930 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.292974 kubelet[2803]: E1112 20:45:51.292938 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:51.293151 kubelet[2803]: E1112 20:45:51.293033 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.293151 kubelet[2803]: W1112 20:45:51.293037 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.293151 kubelet[2803]: E1112 20:45:51.293044 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:51.293384 kubelet[2803]: E1112 20:45:51.293371 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.293417 kubelet[2803]: W1112 20:45:51.293383 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.293417 kubelet[2803]: E1112 20:45:51.293399 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:51.294286 kubelet[2803]: E1112 20:45:51.293725 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.294286 kubelet[2803]: W1112 20:45:51.293933 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.294286 kubelet[2803]: E1112 20:45:51.293944 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:51.294364 kubelet[2803]: E1112 20:45:51.294304 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.294364 kubelet[2803]: W1112 20:45:51.294310 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.294364 kubelet[2803]: E1112 20:45:51.294317 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:51.294670 kubelet[2803]: E1112 20:45:51.294663 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.294829 kubelet[2803]: W1112 20:45:51.294821 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.294881 kubelet[2803]: E1112 20:45:51.294876 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:51.295338 kubelet[2803]: E1112 20:45:51.295331 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.295410 kubelet[2803]: W1112 20:45:51.295389 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.295530 kubelet[2803]: E1112 20:45:51.295492 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:51.296013 kubelet[2803]: E1112 20:45:51.295949 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.296013 kubelet[2803]: W1112 20:45:51.295956 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.296013 kubelet[2803]: E1112 20:45:51.295967 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:51.296264 kubelet[2803]: E1112 20:45:51.296195 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.296264 kubelet[2803]: W1112 20:45:51.296201 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.296264 kubelet[2803]: E1112 20:45:51.296209 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:51.296762 kubelet[2803]: E1112 20:45:51.296725 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.296762 kubelet[2803]: W1112 20:45:51.296731 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.296995 kubelet[2803]: E1112 20:45:51.296923 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:51.297729 kubelet[2803]: E1112 20:45:51.297722 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.297856 kubelet[2803]: W1112 20:45:51.297816 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.297856 kubelet[2803]: E1112 20:45:51.297842 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:51.298276 kubelet[2803]: E1112 20:45:51.298211 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.298276 kubelet[2803]: W1112 20:45:51.298219 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.298475 kubelet[2803]: E1112 20:45:51.298363 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:51.298607 kubelet[2803]: E1112 20:45:51.298600 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.298918 kubelet[2803]: W1112 20:45:51.298679 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.299087 kubelet[2803]: E1112 20:45:51.299063 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:51.299170 kubelet[2803]: E1112 20:45:51.299140 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.299170 kubelet[2803]: W1112 20:45:51.299145 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.299494 kubelet[2803]: E1112 20:45:51.299483 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:51.299700 kubelet[2803]: E1112 20:45:51.299693 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.299760 kubelet[2803]: W1112 20:45:51.299748 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.299963 kubelet[2803]: E1112 20:45:51.299888 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:51.300021 kubelet[2803]: E1112 20:45:51.300015 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.300086 kubelet[2803]: W1112 20:45:51.300044 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.300195 kubelet[2803]: E1112 20:45:51.300157 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:51.300713 kubelet[2803]: E1112 20:45:51.300587 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.300713 kubelet[2803]: W1112 20:45:51.300594 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.300854 kubelet[2803]: E1112 20:45:51.300845 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:51.301595 kubelet[2803]: E1112 20:45:51.301160 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.301595 kubelet[2803]: W1112 20:45:51.301167 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.301595 kubelet[2803]: E1112 20:45:51.301175 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:51.301867 kubelet[2803]: E1112 20:45:51.301861 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.302261 kubelet[2803]: W1112 20:45:51.301913 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.302261 kubelet[2803]: E1112 20:45:51.301928 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:51.302384 kubelet[2803]: E1112 20:45:51.302363 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.302384 kubelet[2803]: W1112 20:45:51.302372 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.302384 kubelet[2803]: E1112 20:45:51.302385 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:51.302956 kubelet[2803]: E1112 20:45:51.302944 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.302956 kubelet[2803]: W1112 20:45:51.302952 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.303023 kubelet[2803]: E1112 20:45:51.302971 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:51.303197 kubelet[2803]: E1112 20:45:51.303189 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.303197 kubelet[2803]: W1112 20:45:51.303196 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.303240 kubelet[2803]: E1112 20:45:51.303204 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:51.308419 containerd[1545]: time="2024-11-12T20:45:51.308388704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-568f9457f6-pftgn,Uid:6f9d6667-5b4a-434c-a25f-4a273757ef7f,Namespace:calico-system,Attempt:0,} returns sandbox id \"2b3c2eb902d353a9fec4ed249feb00bdc537e5e499395e6434303a31d881abe2\"" Nov 12 20:45:51.309344 kubelet[2803]: E1112 20:45:51.309280 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:51.309344 kubelet[2803]: W1112 20:45:51.309294 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:51.309344 kubelet[2803]: E1112 20:45:51.309308 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:51.313409 containerd[1545]: time="2024-11-12T20:45:51.313264430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:45:51.313819 containerd[1545]: time="2024-11-12T20:45:51.313495883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:45:51.313996 containerd[1545]: time="2024-11-12T20:45:51.313844314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:51.313996 containerd[1545]: time="2024-11-12T20:45:51.313915226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:51.320729 containerd[1545]: time="2024-11-12T20:45:51.320708983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 20:45:51.327721 systemd[1]: Started cri-containerd-c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04.scope - libcontainer container c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04. 
Nov 12 20:45:51.346374 containerd[1545]: time="2024-11-12T20:45:51.346098902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n2q5b,Uid:aa789dee-6d4d-4e09-8c29-cb02d5225385,Namespace:calico-system,Attempt:0,} returns sandbox id \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\"" Nov 12 20:45:52.475975 kubelet[2803]: E1112 20:45:52.475730 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mr2hw" podUID="5475df33-a25f-4c6b-acfc-caf320cb59b1" Nov 12 20:45:53.743291 containerd[1545]: time="2024-11-12T20:45:53.743244720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:53.743756 containerd[1545]: time="2024-11-12T20:45:53.743621522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168" Nov 12 20:45:53.744437 containerd[1545]: time="2024-11-12T20:45:53.744038684Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:53.745194 containerd[1545]: time="2024-11-12T20:45:53.745172813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:53.748164 containerd[1545]: time="2024-11-12T20:45:53.748086862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 2.427238843s" Nov 12 20:45:53.748164 containerd[1545]: time="2024-11-12T20:45:53.748118302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\"" Nov 12 20:45:53.749084 containerd[1545]: time="2024-11-12T20:45:53.748793826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 20:45:53.762121 containerd[1545]: time="2024-11-12T20:45:53.762097437Z" level=info msg="CreateContainer within sandbox \"2b3c2eb902d353a9fec4ed249feb00bdc537e5e499395e6434303a31d881abe2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 20:45:53.769564 containerd[1545]: time="2024-11-12T20:45:53.769531141Z" level=info msg="CreateContainer within sandbox \"2b3c2eb902d353a9fec4ed249feb00bdc537e5e499395e6434303a31d881abe2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d33930a260c9e49e8c244a4a358faede76235554578944616bb18039694036b4\"" Nov 12 20:45:53.770472 containerd[1545]: time="2024-11-12T20:45:53.770454894Z" level=info msg="StartContainer for \"d33930a260c9e49e8c244a4a358faede76235554578944616bb18039694036b4\"" Nov 12 20:45:53.802688 systemd[1]: Started cri-containerd-d33930a260c9e49e8c244a4a358faede76235554578944616bb18039694036b4.scope - libcontainer container d33930a260c9e49e8c244a4a358faede76235554578944616bb18039694036b4. 
Nov 12 20:45:53.832918 containerd[1545]: time="2024-11-12T20:45:53.832892312Z" level=info msg="StartContainer for \"d33930a260c9e49e8c244a4a358faede76235554578944616bb18039694036b4\" returns successfully" Nov 12 20:45:54.484767 kubelet[2803]: E1112 20:45:54.484599 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mr2hw" podUID="5475df33-a25f-4c6b-acfc-caf320cb59b1" Nov 12 20:45:54.635878 kubelet[2803]: E1112 20:45:54.635846 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:54.635878 kubelet[2803]: W1112 20:45:54.635874 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:54.635992 kubelet[2803]: E1112 20:45:54.635891 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:54.636065 kubelet[2803]: E1112 20:45:54.636055 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:54.636065 kubelet[2803]: W1112 20:45:54.636062 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:54.636122 kubelet[2803]: E1112 20:45:54.636070 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:54.721366 kubelet[2803]: E1112 20:45:54.718833 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:54.721366 kubelet[2803]: W1112 20:45:54.718840 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:54.721366 kubelet[2803]: E1112 20:45:54.718850 2803 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:55.463913 containerd[1545]: time="2024-11-12T20:45:55.463880842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:55.464677 containerd[1545]: time="2024-11-12T20:45:55.464650568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116" Nov 12 20:45:55.465003 containerd[1545]: time="2024-11-12T20:45:55.464984796Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:55.465949 containerd[1545]: time="2024-11-12T20:45:55.465925445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:55.466569 containerd[1545]: time="2024-11-12T20:45:55.466316083Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 1.717501846s" Nov 12 20:45:55.466569 containerd[1545]: time="2024-11-12T20:45:55.466334408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\"" Nov 12 20:45:55.467511 containerd[1545]: time="2024-11-12T20:45:55.467428113Z" level=info msg="CreateContainer within sandbox \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 20:45:55.484498 containerd[1545]: time="2024-11-12T20:45:55.484468935Z" level=info msg="CreateContainer within sandbox \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5922089d4ea4d4772bc992a7b4845e6daf6f1473dc8fddfcffccae054317efbd\"" Nov 12 20:45:55.485888 containerd[1545]: time="2024-11-12T20:45:55.485068640Z" level=info msg="StartContainer for \"5922089d4ea4d4772bc992a7b4845e6daf6f1473dc8fddfcffccae054317efbd\"" Nov 12 20:45:55.508720 systemd[1]: Started cri-containerd-5922089d4ea4d4772bc992a7b4845e6daf6f1473dc8fddfcffccae054317efbd.scope - libcontainer container 5922089d4ea4d4772bc992a7b4845e6daf6f1473dc8fddfcffccae054317efbd. Nov 12 20:45:55.530229 containerd[1545]: time="2024-11-12T20:45:55.530189301Z" level=info msg="StartContainer for \"5922089d4ea4d4772bc992a7b4845e6daf6f1473dc8fddfcffccae054317efbd\" returns successfully" Nov 12 20:45:55.538937 systemd[1]: cri-containerd-5922089d4ea4d4772bc992a7b4845e6daf6f1473dc8fddfcffccae054317efbd.scope: Deactivated successfully. 
Nov 12 20:45:55.551094 kubelet[2803]: I1112 20:45:55.550994 2803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:45:55.565804 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5922089d4ea4d4772bc992a7b4845e6daf6f1473dc8fddfcffccae054317efbd-rootfs.mount: Deactivated successfully. Nov 12 20:45:55.584269 kubelet[2803]: I1112 20:45:55.584170 2803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-568f9457f6-pftgn" podStartSLOduration=3.1456294160000002 podStartE2EDuration="5.584128313s" podCreationTimestamp="2024-11-12 20:45:50 +0000 UTC" firstStartedPulling="2024-11-12 20:45:51.309862598 +0000 UTC m=+20.968131254" lastFinishedPulling="2024-11-12 20:45:53.748361497 +0000 UTC m=+23.406630151" observedRunningTime="2024-11-12 20:45:54.559999272 +0000 UTC m=+24.218267938" watchObservedRunningTime="2024-11-12 20:45:55.584128313 +0000 UTC m=+25.242396972" Nov 12 20:45:55.609042 containerd[1545]: time="2024-11-12T20:45:55.600948920Z" level=info msg="shim disconnected" id=5922089d4ea4d4772bc992a7b4845e6daf6f1473dc8fddfcffccae054317efbd namespace=k8s.io Nov 12 20:45:55.609042 containerd[1545]: time="2024-11-12T20:45:55.608970250Z" level=warning msg="cleaning up after shim disconnected" id=5922089d4ea4d4772bc992a7b4845e6daf6f1473dc8fddfcffccae054317efbd namespace=k8s.io Nov 12 20:45:55.609042 containerd[1545]: time="2024-11-12T20:45:55.608980827Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:45:56.475923 kubelet[2803]: E1112 20:45:56.475809 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mr2hw" podUID="5475df33-a25f-4c6b-acfc-caf320cb59b1" Nov 12 20:45:56.554331 containerd[1545]: time="2024-11-12T20:45:56.554142173Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 20:45:58.475378 kubelet[2803]: E1112 20:45:58.475092 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mr2hw" podUID="5475df33-a25f-4c6b-acfc-caf320cb59b1" Nov 12 20:45:59.966176 containerd[1545]: time="2024-11-12T20:45:59.966144012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:59.967052 containerd[1545]: time="2024-11-12T20:45:59.967020997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683" Nov 12 20:45:59.967202 containerd[1545]: time="2024-11-12T20:45:59.967181426Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:59.972771 containerd[1545]: time="2024-11-12T20:45:59.972744830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:59.974211 containerd[1545]: time="2024-11-12T20:45:59.974187657Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 3.420021552s" Nov 12 20:45:59.976093 containerd[1545]: time="2024-11-12T20:45:59.974217352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference 
\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\"" Nov 12 20:45:59.977585 containerd[1545]: time="2024-11-12T20:45:59.977563344Z" level=info msg="CreateContainer within sandbox \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 20:45:59.985063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3685981101.mount: Deactivated successfully. Nov 12 20:45:59.998053 containerd[1545]: time="2024-11-12T20:45:59.998029212Z" level=info msg="CreateContainer within sandbox \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f570b984c43152c15d3a482ff43e6c671b38ce0cced9a891b788b7416ed0b4d4\"" Nov 12 20:45:59.998369 containerd[1545]: time="2024-11-12T20:45:59.998351740Z" level=info msg="StartContainer for \"f570b984c43152c15d3a482ff43e6c671b38ce0cced9a891b788b7416ed0b4d4\"" Nov 12 20:46:00.032992 systemd[1]: Started cri-containerd-f570b984c43152c15d3a482ff43e6c671b38ce0cced9a891b788b7416ed0b4d4.scope - libcontainer container f570b984c43152c15d3a482ff43e6c671b38ce0cced9a891b788b7416ed0b4d4. Nov 12 20:46:00.085214 containerd[1545]: time="2024-11-12T20:46:00.085188649Z" level=info msg="StartContainer for \"f570b984c43152c15d3a482ff43e6c671b38ce0cced9a891b788b7416ed0b4d4\" returns successfully" Nov 12 20:46:00.476187 kubelet[2803]: E1112 20:46:00.475887 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mr2hw" podUID="5475df33-a25f-4c6b-acfc-caf320cb59b1" Nov 12 20:46:01.726805 systemd[1]: cri-containerd-f570b984c43152c15d3a482ff43e6c671b38ce0cced9a891b788b7416ed0b4d4.scope: Deactivated successfully. 
Nov 12 20:46:01.811226 kubelet[2803]: I1112 20:46:01.811198 2803 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 20:46:01.821427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f570b984c43152c15d3a482ff43e6c671b38ce0cced9a891b788b7416ed0b4d4-rootfs.mount: Deactivated successfully. Nov 12 20:46:01.850985 containerd[1545]: time="2024-11-12T20:46:01.850931249Z" level=info msg="shim disconnected" id=f570b984c43152c15d3a482ff43e6c671b38ce0cced9a891b788b7416ed0b4d4 namespace=k8s.io Nov 12 20:46:01.850985 containerd[1545]: time="2024-11-12T20:46:01.850972708Z" level=warning msg="cleaning up after shim disconnected" id=f570b984c43152c15d3a482ff43e6c671b38ce0cced9a891b788b7416ed0b4d4 namespace=k8s.io Nov 12 20:46:01.850985 containerd[1545]: time="2024-11-12T20:46:01.850978314Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:46:01.920201 kubelet[2803]: I1112 20:46:01.920172 2803 topology_manager.go:215] "Topology Admit Handler" podUID="2cb5bf3f-a6f0-4cca-8e64-700b92fbd244" podNamespace="kube-system" podName="coredns-76f75df574-jx2wl" Nov 12 20:46:01.924032 kubelet[2803]: I1112 20:46:01.923671 2803 topology_manager.go:215] "Topology Admit Handler" podUID="4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c" podNamespace="calico-apiserver" podName="calico-apiserver-55d5bcd669-b7s55" Nov 12 20:46:01.927435 systemd[1]: Created slice kubepods-burstable-pod2cb5bf3f_a6f0_4cca_8e64_700b92fbd244.slice - libcontainer container kubepods-burstable-pod2cb5bf3f_a6f0_4cca_8e64_700b92fbd244.slice. 
Nov 12 20:46:01.940563 kubelet[2803]: I1112 20:46:01.937950 2803 topology_manager.go:215] "Topology Admit Handler" podUID="92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35" podNamespace="kube-system" podName="coredns-76f75df574-q4n22" Nov 12 20:46:01.940563 kubelet[2803]: I1112 20:46:01.938133 2803 topology_manager.go:215] "Topology Admit Handler" podUID="7a262788-14a2-4491-b838-16c919aed65b" podNamespace="calico-apiserver" podName="calico-apiserver-55d5bcd669-xs25j" Nov 12 20:46:01.940563 kubelet[2803]: I1112 20:46:01.938420 2803 topology_manager.go:215] "Topology Admit Handler" podUID="b1afbb6d-8e27-4966-8b72-7f067d947668" podNamespace="calico-system" podName="calico-kube-controllers-6d6985898d-bw6xq" Nov 12 20:46:01.948059 systemd[1]: Created slice kubepods-besteffort-pod4b0d351b_c7ca_4b0e_9343_66e8cb4acd5c.slice - libcontainer container kubepods-besteffort-pod4b0d351b_c7ca_4b0e_9343_66e8cb4acd5c.slice. Nov 12 20:46:01.951967 systemd[1]: Created slice kubepods-besteffort-podb1afbb6d_8e27_4966_8b72_7f067d947668.slice - libcontainer container kubepods-besteffort-podb1afbb6d_8e27_4966_8b72_7f067d947668.slice. Nov 12 20:46:01.955445 systemd[1]: Created slice kubepods-burstable-pod92ebb2b6_a9b5_4cb2_9d42_b7d671fa5e35.slice - libcontainer container kubepods-burstable-pod92ebb2b6_a9b5_4cb2_9d42_b7d671fa5e35.slice. Nov 12 20:46:01.958833 systemd[1]: Created slice kubepods-besteffort-pod7a262788_14a2_4491_b838_16c919aed65b.slice - libcontainer container kubepods-besteffort-pod7a262788_14a2_4491_b838_16c919aed65b.slice. 
Nov 12 20:46:02.062063 kubelet[2803]: I1112 20:46:02.061792 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkpsh\" (UniqueName: \"kubernetes.io/projected/92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35-kube-api-access-dkpsh\") pod \"coredns-76f75df574-q4n22\" (UID: \"92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35\") " pod="kube-system/coredns-76f75df574-q4n22" Nov 12 20:46:02.062063 kubelet[2803]: I1112 20:46:02.061820 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35-config-volume\") pod \"coredns-76f75df574-q4n22\" (UID: \"92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35\") " pod="kube-system/coredns-76f75df574-q4n22" Nov 12 20:46:02.062063 kubelet[2803]: I1112 20:46:02.061834 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1afbb6d-8e27-4966-8b72-7f067d947668-tigera-ca-bundle\") pod \"calico-kube-controllers-6d6985898d-bw6xq\" (UID: \"b1afbb6d-8e27-4966-8b72-7f067d947668\") " pod="calico-system/calico-kube-controllers-6d6985898d-bw6xq" Nov 12 20:46:02.062063 kubelet[2803]: I1112 20:46:02.061849 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkd62\" (UniqueName: \"kubernetes.io/projected/b1afbb6d-8e27-4966-8b72-7f067d947668-kube-api-access-hkd62\") pod \"calico-kube-controllers-6d6985898d-bw6xq\" (UID: \"b1afbb6d-8e27-4966-8b72-7f067d947668\") " pod="calico-system/calico-kube-controllers-6d6985898d-bw6xq" Nov 12 20:46:02.062063 kubelet[2803]: I1112 20:46:02.061862 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brzrk\" (UniqueName: \"kubernetes.io/projected/7a262788-14a2-4491-b838-16c919aed65b-kube-api-access-brzrk\") pod 
\"calico-apiserver-55d5bcd669-xs25j\" (UID: \"7a262788-14a2-4491-b838-16c919aed65b\") " pod="calico-apiserver/calico-apiserver-55d5bcd669-xs25j" Nov 12 20:46:02.062216 kubelet[2803]: I1112 20:46:02.061878 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7a262788-14a2-4491-b838-16c919aed65b-calico-apiserver-certs\") pod \"calico-apiserver-55d5bcd669-xs25j\" (UID: \"7a262788-14a2-4491-b838-16c919aed65b\") " pod="calico-apiserver/calico-apiserver-55d5bcd669-xs25j" Nov 12 20:46:02.062216 kubelet[2803]: I1112 20:46:02.061890 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpnff\" (UniqueName: \"kubernetes.io/projected/2cb5bf3f-a6f0-4cca-8e64-700b92fbd244-kube-api-access-vpnff\") pod \"coredns-76f75df574-jx2wl\" (UID: \"2cb5bf3f-a6f0-4cca-8e64-700b92fbd244\") " pod="kube-system/coredns-76f75df574-jx2wl" Nov 12 20:46:02.062216 kubelet[2803]: I1112 20:46:02.061902 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c-calico-apiserver-certs\") pod \"calico-apiserver-55d5bcd669-b7s55\" (UID: \"4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c\") " pod="calico-apiserver/calico-apiserver-55d5bcd669-b7s55" Nov 12 20:46:02.062216 kubelet[2803]: I1112 20:46:02.061913 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cb5bf3f-a6f0-4cca-8e64-700b92fbd244-config-volume\") pod \"coredns-76f75df574-jx2wl\" (UID: \"2cb5bf3f-a6f0-4cca-8e64-700b92fbd244\") " pod="kube-system/coredns-76f75df574-jx2wl" Nov 12 20:46:02.062216 kubelet[2803]: I1112 20:46:02.061926 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2mlnr\" (UniqueName: \"kubernetes.io/projected/4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c-kube-api-access-2mlnr\") pod \"calico-apiserver-55d5bcd669-b7s55\" (UID: \"4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c\") " pod="calico-apiserver/calico-apiserver-55d5bcd669-b7s55" Nov 12 20:46:02.246339 containerd[1545]: time="2024-11-12T20:46:02.246300495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jx2wl,Uid:2cb5bf3f-a6f0-4cca-8e64-700b92fbd244,Namespace:kube-system,Attempt:0,}" Nov 12 20:46:02.250802 containerd[1545]: time="2024-11-12T20:46:02.250690335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d5bcd669-b7s55,Uid:4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:46:02.254451 containerd[1545]: time="2024-11-12T20:46:02.254308932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6985898d-bw6xq,Uid:b1afbb6d-8e27-4966-8b72-7f067d947668,Namespace:calico-system,Attempt:0,}" Nov 12 20:46:02.257876 containerd[1545]: time="2024-11-12T20:46:02.257856026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q4n22,Uid:92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35,Namespace:kube-system,Attempt:0,}" Nov 12 20:46:02.260283 containerd[1545]: time="2024-11-12T20:46:02.260271267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d5bcd669-xs25j,Uid:7a262788-14a2-4491-b838-16c919aed65b,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:46:02.479927 systemd[1]: Created slice kubepods-besteffort-pod5475df33_a25f_4c6b_acfc_caf320cb59b1.slice - libcontainer container kubepods-besteffort-pod5475df33_a25f_4c6b_acfc_caf320cb59b1.slice. 
Nov 12 20:46:02.482132 containerd[1545]: time="2024-11-12T20:46:02.482111041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mr2hw,Uid:5475df33-a25f-4c6b-acfc-caf320cb59b1,Namespace:calico-system,Attempt:0,}" Nov 12 20:46:02.564066 containerd[1545]: time="2024-11-12T20:46:02.564003455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 20:46:02.900470 containerd[1545]: time="2024-11-12T20:46:02.900427667Z" level=error msg="Failed to destroy network for sandbox \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.901972 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185-shm.mount: Deactivated successfully. Nov 12 20:46:02.905474 containerd[1545]: time="2024-11-12T20:46:02.905241663Z" level=error msg="encountered an error cleaning up failed sandbox \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.905474 containerd[1545]: time="2024-11-12T20:46:02.905291375Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mr2hw,Uid:5475df33-a25f-4c6b-acfc-caf320cb59b1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.906573 containerd[1545]: 
time="2024-11-12T20:46:02.906509213Z" level=error msg="Failed to destroy network for sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.907085 containerd[1545]: time="2024-11-12T20:46:02.906761912Z" level=error msg="encountered an error cleaning up failed sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.907085 containerd[1545]: time="2024-11-12T20:46:02.906789936Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q4n22,Uid:92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.909013 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3-shm.mount: Deactivated successfully. 
Nov 12 20:46:02.910140 containerd[1545]: time="2024-11-12T20:46:02.909314937Z" level=error msg="Failed to destroy network for sandbox \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.910140 containerd[1545]: time="2024-11-12T20:46:02.909511203Z" level=error msg="encountered an error cleaning up failed sandbox \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.910140 containerd[1545]: time="2024-11-12T20:46:02.909532454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d5bcd669-xs25j,Uid:7a262788-14a2-4491-b838-16c919aed65b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.910140 containerd[1545]: time="2024-11-12T20:46:02.909617135Z" level=error msg="Failed to destroy network for sandbox \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.910140 containerd[1545]: time="2024-11-12T20:46:02.909776746Z" level=error msg="encountered an error cleaning up failed sandbox \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.910140 containerd[1545]: time="2024-11-12T20:46:02.909807842Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jx2wl,Uid:2cb5bf3f-a6f0-4cca-8e64-700b92fbd244,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.910140 containerd[1545]: time="2024-11-12T20:46:02.909853089Z" level=error msg="Failed to destroy network for sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.910140 containerd[1545]: time="2024-11-12T20:46:02.910004069Z" level=error msg="encountered an error cleaning up failed sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.910140 containerd[1545]: time="2024-11-12T20:46:02.910031354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6985898d-bw6xq,Uid:b1afbb6d-8e27-4966-8b72-7f067d947668,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.910140 containerd[1545]: time="2024-11-12T20:46:02.910084865Z" level=error msg="Failed to destroy network for sandbox \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.913223 containerd[1545]: time="2024-11-12T20:46:02.910641848Z" level=error msg="encountered an error cleaning up failed sandbox \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.913223 containerd[1545]: time="2024-11-12T20:46:02.910679337Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d5bcd669-b7s55,Uid:4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.911490 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c-shm.mount: Deactivated successfully. Nov 12 20:46:02.911543 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537-shm.mount: Deactivated successfully. 
Nov 12 20:46:02.911606 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1-shm.mount: Deactivated successfully. Nov 12 20:46:02.911651 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36-shm.mount: Deactivated successfully. Nov 12 20:46:02.916520 kubelet[2803]: E1112 20:46:02.916496 2803 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.916709 kubelet[2803]: E1112 20:46:02.916540 2803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d5bcd669-b7s55" Nov 12 20:46:02.916709 kubelet[2803]: E1112 20:46:02.916583 2803 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d5bcd669-b7s55" Nov 12 20:46:02.916709 kubelet[2803]: E1112 20:46:02.916621 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-55d5bcd669-b7s55_calico-apiserver(4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55d5bcd669-b7s55_calico-apiserver(4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d5bcd669-b7s55" podUID="4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c" Nov 12 20:46:02.917081 kubelet[2803]: E1112 20:46:02.916648 2803 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.917081 kubelet[2803]: E1112 20:46:02.916661 2803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d5bcd669-xs25j" Nov 12 20:46:02.917081 kubelet[2803]: E1112 20:46:02.916673 2803 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d5bcd669-xs25j" Nov 12 20:46:02.917157 kubelet[2803]: E1112 20:46:02.916692 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55d5bcd669-xs25j_calico-apiserver(7a262788-14a2-4491-b838-16c919aed65b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55d5bcd669-xs25j_calico-apiserver(7a262788-14a2-4491-b838-16c919aed65b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d5bcd669-xs25j" podUID="7a262788-14a2-4491-b838-16c919aed65b" Nov 12 20:46:02.917157 kubelet[2803]: E1112 20:46:02.916709 2803 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.917157 kubelet[2803]: E1112 20:46:02.916719 2803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-jx2wl" Nov 12 20:46:02.917245 kubelet[2803]: E1112 20:46:02.916729 2803 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-jx2wl" Nov 12 20:46:02.917245 kubelet[2803]: E1112 20:46:02.916750 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-jx2wl_kube-system(2cb5bf3f-a6f0-4cca-8e64-700b92fbd244)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-jx2wl_kube-system(2cb5bf3f-a6f0-4cca-8e64-700b92fbd244)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-jx2wl" podUID="2cb5bf3f-a6f0-4cca-8e64-700b92fbd244" Nov 12 20:46:02.917245 kubelet[2803]: E1112 20:46:02.916765 2803 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.917330 kubelet[2803]: E1112 20:46:02.916778 2803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d6985898d-bw6xq" Nov 12 20:46:02.917330 kubelet[2803]: E1112 20:46:02.916790 2803 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d6985898d-bw6xq" Nov 12 20:46:02.917330 kubelet[2803]: E1112 20:46:02.916806 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d6985898d-bw6xq_calico-system(b1afbb6d-8e27-4966-8b72-7f067d947668)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d6985898d-bw6xq_calico-system(b1afbb6d-8e27-4966-8b72-7f067d947668)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d6985898d-bw6xq" podUID="b1afbb6d-8e27-4966-8b72-7f067d947668" Nov 12 20:46:02.917417 kubelet[2803]: E1112 20:46:02.916812 2803 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 12 20:46:02.917417 kubelet[2803]: E1112 20:46:02.916833 2803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-q4n22" Nov 12 20:46:02.917417 kubelet[2803]: E1112 20:46:02.916845 2803 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-q4n22" Nov 12 20:46:02.917485 kubelet[2803]: E1112 20:46:02.916882 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-q4n22_kube-system(92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-q4n22_kube-system(92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q4n22" podUID="92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35" Nov 12 20:46:02.917485 kubelet[2803]: E1112 20:46:02.916909 2803 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:02.917485 kubelet[2803]: E1112 20:46:02.916921 2803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mr2hw" Nov 12 20:46:02.917576 kubelet[2803]: E1112 20:46:02.916942 2803 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mr2hw" Nov 12 20:46:02.917576 kubelet[2803]: E1112 20:46:02.916964 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mr2hw_calico-system(5475df33-a25f-4c6b-acfc-caf320cb59b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mr2hw_calico-system(5475df33-a25f-4c6b-acfc-caf320cb59b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mr2hw" 
podUID="5475df33-a25f-4c6b-acfc-caf320cb59b1" Nov 12 20:46:03.565656 kubelet[2803]: I1112 20:46:03.565625 2803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Nov 12 20:46:03.566691 kubelet[2803]: I1112 20:46:03.566674 2803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" Nov 12 20:46:03.640804 containerd[1545]: time="2024-11-12T20:46:03.640744337Z" level=info msg="StopPodSandbox for \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\"" Nov 12 20:46:03.641443 containerd[1545]: time="2024-11-12T20:46:03.641267174Z" level=info msg="StopPodSandbox for \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\"" Nov 12 20:46:03.642422 containerd[1545]: time="2024-11-12T20:46:03.642266957Z" level=info msg="Ensure that sandbox a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3 in task-service has been cleanup successfully" Nov 12 20:46:03.643248 kubelet[2803]: I1112 20:46:03.642545 2803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Nov 12 20:46:03.643296 containerd[1545]: time="2024-11-12T20:46:03.642267392Z" level=info msg="Ensure that sandbox 7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c in task-service has been cleanup successfully" Nov 12 20:46:03.644468 containerd[1545]: time="2024-11-12T20:46:03.644407428Z" level=info msg="StopPodSandbox for \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\"" Nov 12 20:46:03.644589 containerd[1545]: time="2024-11-12T20:46:03.644572991Z" level=info msg="Ensure that sandbox 754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1 in task-service has been cleanup successfully" Nov 12 20:46:03.645480 kubelet[2803]: I1112 20:46:03.645022 2803 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Nov 12 20:46:03.647062 containerd[1545]: time="2024-11-12T20:46:03.646832487Z" level=info msg="StopPodSandbox for \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\"" Nov 12 20:46:03.647522 kubelet[2803]: I1112 20:46:03.647512 2803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Nov 12 20:46:03.648369 containerd[1545]: time="2024-11-12T20:46:03.648355463Z" level=info msg="StopPodSandbox for \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\"" Nov 12 20:46:03.649193 containerd[1545]: time="2024-11-12T20:46:03.649170957Z" level=info msg="Ensure that sandbox 7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537 in task-service has been cleanup successfully" Nov 12 20:46:03.649819 kubelet[2803]: I1112 20:46:03.649790 2803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Nov 12 20:46:03.651052 containerd[1545]: time="2024-11-12T20:46:03.649662398Z" level=info msg="Ensure that sandbox 9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36 in task-service has been cleanup successfully" Nov 12 20:46:03.651278 containerd[1545]: time="2024-11-12T20:46:03.651210869Z" level=info msg="StopPodSandbox for \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\"" Nov 12 20:46:03.651565 containerd[1545]: time="2024-11-12T20:46:03.651502199Z" level=info msg="Ensure that sandbox 72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185 in task-service has been cleanup successfully" Nov 12 20:46:03.697755 containerd[1545]: time="2024-11-12T20:46:03.697704461Z" level=error msg="StopPodSandbox for \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\" 
failed" error="failed to destroy network for sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:03.698043 kubelet[2803]: E1112 20:46:03.697886 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" Nov 12 20:46:03.703120 kubelet[2803]: E1112 20:46:03.703009 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c"} Nov 12 20:46:03.703120 kubelet[2803]: E1112 20:46:03.703058 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b1afbb6d-8e27-4966-8b72-7f067d947668\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:03.703120 kubelet[2803]: E1112 20:46:03.703101 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b1afbb6d-8e27-4966-8b72-7f067d947668\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d6985898d-bw6xq" podUID="b1afbb6d-8e27-4966-8b72-7f067d947668" Nov 12 20:46:03.709652 containerd[1545]: time="2024-11-12T20:46:03.709273182Z" level=error msg="StopPodSandbox for \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\" failed" error="failed to destroy network for sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:03.712621 kubelet[2803]: E1112 20:46:03.710710 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Nov 12 20:46:03.712621 kubelet[2803]: E1112 20:46:03.710737 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3"} Nov 12 20:46:03.712621 kubelet[2803]: E1112 20:46:03.710775 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:03.712621 kubelet[2803]: E1112 20:46:03.710794 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q4n22" podUID="92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35" Nov 12 20:46:03.713838 containerd[1545]: time="2024-11-12T20:46:03.713809073Z" level=error msg="StopPodSandbox for \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\" failed" error="failed to destroy network for sandbox \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:03.713962 kubelet[2803]: E1112 20:46:03.713952 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Nov 12 20:46:03.714025 kubelet[2803]: E1112 20:46:03.714019 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185"} Nov 12 20:46:03.714091 kubelet[2803]: E1112 20:46:03.714084 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5475df33-a25f-4c6b-acfc-caf320cb59b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:03.714172 kubelet[2803]: E1112 20:46:03.714158 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5475df33-a25f-4c6b-acfc-caf320cb59b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mr2hw" podUID="5475df33-a25f-4c6b-acfc-caf320cb59b1" Nov 12 20:46:03.722710 containerd[1545]: time="2024-11-12T20:46:03.722674691Z" level=error msg="StopPodSandbox for \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\" failed" error="failed to destroy network for sandbox \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:03.722878 kubelet[2803]: E1112 20:46:03.722867 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Nov 12 20:46:03.723012 kubelet[2803]: E1112 20:46:03.722946 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36"} Nov 12 20:46:03.723012 kubelet[2803]: E1112 20:46:03.722970 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a262788-14a2-4491-b838-16c919aed65b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:03.723012 kubelet[2803]: E1112 20:46:03.722996 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a262788-14a2-4491-b838-16c919aed65b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d5bcd669-xs25j" podUID="7a262788-14a2-4491-b838-16c919aed65b" Nov 12 20:46:03.723904 containerd[1545]: time="2024-11-12T20:46:03.723872881Z" level=error msg="StopPodSandbox for \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\" failed" error="failed to 
destroy network for sandbox \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:03.724063 kubelet[2803]: E1112 20:46:03.723973 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Nov 12 20:46:03.724063 kubelet[2803]: E1112 20:46:03.723988 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537"} Nov 12 20:46:03.724063 kubelet[2803]: E1112 20:46:03.724007 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2cb5bf3f-a6f0-4cca-8e64-700b92fbd244\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:03.724063 kubelet[2803]: E1112 20:46:03.724022 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2cb5bf3f-a6f0-4cca-8e64-700b92fbd244\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-jx2wl" podUID="2cb5bf3f-a6f0-4cca-8e64-700b92fbd244" Nov 12 20:46:03.724313 containerd[1545]: time="2024-11-12T20:46:03.724274305Z" level=error msg="StopPodSandbox for \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\" failed" error="failed to destroy network for sandbox \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:03.724469 kubelet[2803]: E1112 20:46:03.724372 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Nov 12 20:46:03.724469 kubelet[2803]: E1112 20:46:03.724398 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1"} Nov 12 20:46:03.724469 kubelet[2803]: E1112 20:46:03.724417 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:03.724469 kubelet[2803]: E1112 20:46:03.724431 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d5bcd669-b7s55" podUID="4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c" Nov 12 20:46:06.709475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2339441353.mount: Deactivated successfully. Nov 12 20:46:06.768119 containerd[1545]: time="2024-11-12T20:46:06.767487464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 12 20:46:06.777153 containerd[1545]: time="2024-11-12T20:46:06.777088794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:06.784597 containerd[1545]: time="2024-11-12T20:46:06.784323938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 4.220253815s" Nov 12 20:46:06.784597 containerd[1545]: time="2024-11-12T20:46:06.784346023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 12 20:46:06.798472 
containerd[1545]: time="2024-11-12T20:46:06.798439398Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:06.799006 containerd[1545]: time="2024-11-12T20:46:06.798822764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:06.856861 containerd[1545]: time="2024-11-12T20:46:06.856833353Z" level=info msg="CreateContainer within sandbox \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 20:46:06.914505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2984797391.mount: Deactivated successfully. Nov 12 20:46:06.920535 containerd[1545]: time="2024-11-12T20:46:06.920475963Z" level=info msg="CreateContainer within sandbox \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8cf079e031ccc792f1c8e4850ab5291cefccdec1ad4b7acc681a7db658e6830c\"" Nov 12 20:46:06.926228 containerd[1545]: time="2024-11-12T20:46:06.926203918Z" level=info msg="StartContainer for \"8cf079e031ccc792f1c8e4850ab5291cefccdec1ad4b7acc681a7db658e6830c\"" Nov 12 20:46:06.981755 systemd[1]: Started cri-containerd-8cf079e031ccc792f1c8e4850ab5291cefccdec1ad4b7acc681a7db658e6830c.scope - libcontainer container 8cf079e031ccc792f1c8e4850ab5291cefccdec1ad4b7acc681a7db658e6830c. Nov 12 20:46:07.004741 containerd[1545]: time="2024-11-12T20:46:07.004712547Z" level=info msg="StartContainer for \"8cf079e031ccc792f1c8e4850ab5291cefccdec1ad4b7acc681a7db658e6830c\" returns successfully" Nov 12 20:46:07.071988 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Nov 12 20:46:07.074631 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 12 20:46:07.091172 systemd[1]: cri-containerd-8cf079e031ccc792f1c8e4850ab5291cefccdec1ad4b7acc681a7db658e6830c.scope: Deactivated successfully. Nov 12 20:46:07.333444 containerd[1545]: time="2024-11-12T20:46:07.333231035Z" level=info msg="shim disconnected" id=8cf079e031ccc792f1c8e4850ab5291cefccdec1ad4b7acc681a7db658e6830c namespace=k8s.io Nov 12 20:46:07.333747 containerd[1545]: time="2024-11-12T20:46:07.333622187Z" level=warning msg="cleaning up after shim disconnected" id=8cf079e031ccc792f1c8e4850ab5291cefccdec1ad4b7acc681a7db658e6830c namespace=k8s.io Nov 12 20:46:07.333747 containerd[1545]: time="2024-11-12T20:46:07.333657588Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:46:07.692775 kubelet[2803]: I1112 20:46:07.692749 2803 scope.go:117] "RemoveContainer" containerID="8cf079e031ccc792f1c8e4850ab5291cefccdec1ad4b7acc681a7db658e6830c" Nov 12 20:46:07.712668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cf079e031ccc792f1c8e4850ab5291cefccdec1ad4b7acc681a7db658e6830c-rootfs.mount: Deactivated successfully. 
Nov 12 20:46:07.713175 containerd[1545]: time="2024-11-12T20:46:07.712975809Z" level=info msg="CreateContainer within sandbox \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" Nov 12 20:46:07.723953 containerd[1545]: time="2024-11-12T20:46:07.723515229Z" level=info msg="CreateContainer within sandbox \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"ba677e74f60a67203acb041fec30d74efdbee077a49686f92ad2d08b372c7f00\"" Nov 12 20:46:07.725112 containerd[1545]: time="2024-11-12T20:46:07.724326619Z" level=info msg="StartContainer for \"ba677e74f60a67203acb041fec30d74efdbee077a49686f92ad2d08b372c7f00\"" Nov 12 20:46:07.752016 systemd[1]: Started cri-containerd-ba677e74f60a67203acb041fec30d74efdbee077a49686f92ad2d08b372c7f00.scope - libcontainer container ba677e74f60a67203acb041fec30d74efdbee077a49686f92ad2d08b372c7f00. Nov 12 20:46:07.780528 containerd[1545]: time="2024-11-12T20:46:07.780506113Z" level=info msg="StartContainer for \"ba677e74f60a67203acb041fec30d74efdbee077a49686f92ad2d08b372c7f00\" returns successfully" Nov 12 20:46:07.903582 systemd[1]: cri-containerd-ba677e74f60a67203acb041fec30d74efdbee077a49686f92ad2d08b372c7f00.scope: Deactivated successfully. Nov 12 20:46:07.916816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba677e74f60a67203acb041fec30d74efdbee077a49686f92ad2d08b372c7f00-rootfs.mount: Deactivated successfully. 
Nov 12 20:46:07.917716 containerd[1545]: time="2024-11-12T20:46:07.917671085Z" level=info msg="shim disconnected" id=ba677e74f60a67203acb041fec30d74efdbee077a49686f92ad2d08b372c7f00 namespace=k8s.io Nov 12 20:46:07.917716 containerd[1545]: time="2024-11-12T20:46:07.917706169Z" level=warning msg="cleaning up after shim disconnected" id=ba677e74f60a67203acb041fec30d74efdbee077a49686f92ad2d08b372c7f00 namespace=k8s.io Nov 12 20:46:07.917716 containerd[1545]: time="2024-11-12T20:46:07.917711841Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:46:08.688085 kubelet[2803]: I1112 20:46:08.687929 2803 scope.go:117] "RemoveContainer" containerID="8cf079e031ccc792f1c8e4850ab5291cefccdec1ad4b7acc681a7db658e6830c" Nov 12 20:46:08.688195 kubelet[2803]: I1112 20:46:08.688159 2803 scope.go:117] "RemoveContainer" containerID="ba677e74f60a67203acb041fec30d74efdbee077a49686f92ad2d08b372c7f00" Nov 12 20:46:08.689857 kubelet[2803]: E1112 20:46:08.689699 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-n2q5b_calico-system(aa789dee-6d4d-4e09-8c29-cb02d5225385)\"" pod="calico-system/calico-node-n2q5b" podUID="aa789dee-6d4d-4e09-8c29-cb02d5225385" Nov 12 20:46:08.695512 containerd[1545]: time="2024-11-12T20:46:08.695275923Z" level=info msg="RemoveContainer for \"8cf079e031ccc792f1c8e4850ab5291cefccdec1ad4b7acc681a7db658e6830c\"" Nov 12 20:46:08.698900 containerd[1545]: time="2024-11-12T20:46:08.698877570Z" level=info msg="RemoveContainer for \"8cf079e031ccc792f1c8e4850ab5291cefccdec1ad4b7acc681a7db658e6830c\" returns successfully" Nov 12 20:46:09.055007 kubelet[2803]: I1112 20:46:09.054802 2803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:46:15.476364 containerd[1545]: time="2024-11-12T20:46:15.476300854Z" level=info msg="StopPodSandbox for 
\"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\"" Nov 12 20:46:15.477719 containerd[1545]: time="2024-11-12T20:46:15.476302086Z" level=info msg="StopPodSandbox for \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\"" Nov 12 20:46:15.502614 containerd[1545]: time="2024-11-12T20:46:15.502574625Z" level=error msg="StopPodSandbox for \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\" failed" error="failed to destroy network for sandbox \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:15.502982 kubelet[2803]: E1112 20:46:15.502781 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Nov 12 20:46:15.502982 kubelet[2803]: E1112 20:46:15.502829 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36"} Nov 12 20:46:15.502982 kubelet[2803]: E1112 20:46:15.502867 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a262788-14a2-4491-b838-16c919aed65b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:15.502982 kubelet[2803]: E1112 20:46:15.502892 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a262788-14a2-4491-b838-16c919aed65b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d5bcd669-xs25j" podUID="7a262788-14a2-4491-b838-16c919aed65b" Nov 12 20:46:15.506301 containerd[1545]: time="2024-11-12T20:46:15.506266073Z" level=error msg="StopPodSandbox for \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\" failed" error="failed to destroy network for sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:15.506485 kubelet[2803]: E1112 20:46:15.506400 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" Nov 12 20:46:15.506485 kubelet[2803]: E1112 20:46:15.506419 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c"} Nov 12 
20:46:15.506485 kubelet[2803]: E1112 20:46:15.506440 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b1afbb6d-8e27-4966-8b72-7f067d947668\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:15.506485 kubelet[2803]: E1112 20:46:15.506457 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b1afbb6d-8e27-4966-8b72-7f067d947668\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d6985898d-bw6xq" podUID="b1afbb6d-8e27-4966-8b72-7f067d947668" Nov 12 20:46:16.478012 containerd[1545]: time="2024-11-12T20:46:16.477589408Z" level=info msg="StopPodSandbox for \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\"" Nov 12 20:46:16.478927 containerd[1545]: time="2024-11-12T20:46:16.477801661Z" level=info msg="StopPodSandbox for \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\"" Nov 12 20:46:16.498048 containerd[1545]: time="2024-11-12T20:46:16.498009366Z" level=error msg="StopPodSandbox for \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\" failed" error="failed to destroy network for sandbox \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:16.498290 kubelet[2803]: E1112 20:46:16.498161 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Nov 12 20:46:16.498290 kubelet[2803]: E1112 20:46:16.498188 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185"} Nov 12 20:46:16.498290 kubelet[2803]: E1112 20:46:16.498214 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5475df33-a25f-4c6b-acfc-caf320cb59b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:16.498290 kubelet[2803]: E1112 20:46:16.498233 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5475df33-a25f-4c6b-acfc-caf320cb59b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mr2hw" 
podUID="5475df33-a25f-4c6b-acfc-caf320cb59b1" Nov 12 20:46:16.504433 containerd[1545]: time="2024-11-12T20:46:16.504377706Z" level=error msg="StopPodSandbox for \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\" failed" error="failed to destroy network for sandbox \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:16.504583 kubelet[2803]: E1112 20:46:16.504517 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Nov 12 20:46:16.504583 kubelet[2803]: E1112 20:46:16.504545 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1"} Nov 12 20:46:16.504583 kubelet[2803]: E1112 20:46:16.504578 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:16.504822 kubelet[2803]: E1112 20:46:16.504595 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d5bcd669-b7s55" podUID="4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c" Nov 12 20:46:17.476449 containerd[1545]: time="2024-11-12T20:46:17.476348963Z" level=info msg="StopPodSandbox for \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\"" Nov 12 20:46:17.506064 containerd[1545]: time="2024-11-12T20:46:17.506019229Z" level=error msg="StopPodSandbox for \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\" failed" error="failed to destroy network for sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:17.506471 kubelet[2803]: E1112 20:46:17.506229 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Nov 12 20:46:17.506471 kubelet[2803]: E1112 20:46:17.506267 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3"} Nov 12 20:46:17.506471 kubelet[2803]: 
E1112 20:46:17.506297 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:17.506471 kubelet[2803]: E1112 20:46:17.506316 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q4n22" podUID="92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35" Nov 12 20:46:18.210947 kubelet[2803]: I1112 20:46:18.210813 2803 scope.go:117] "RemoveContainer" containerID="ba677e74f60a67203acb041fec30d74efdbee077a49686f92ad2d08b372c7f00" Nov 12 20:46:18.214029 containerd[1545]: time="2024-11-12T20:46:18.213894686Z" level=info msg="CreateContainer within sandbox \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" Nov 12 20:46:18.223968 containerd[1545]: time="2024-11-12T20:46:18.223865317Z" level=info msg="CreateContainer within sandbox \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b\"" Nov 12 20:46:18.224417 containerd[1545]: 
time="2024-11-12T20:46:18.224387752Z" level=info msg="StartContainer for \"8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b\"" Nov 12 20:46:18.248671 systemd[1]: Started cri-containerd-8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b.scope - libcontainer container 8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b. Nov 12 20:46:18.281342 containerd[1545]: time="2024-11-12T20:46:18.281311074Z" level=info msg="StartContainer for \"8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b\" returns successfully" Nov 12 20:46:18.476754 containerd[1545]: time="2024-11-12T20:46:18.476658908Z" level=info msg="StopPodSandbox for \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\"" Nov 12 20:46:18.497354 containerd[1545]: time="2024-11-12T20:46:18.497310723Z" level=error msg="StopPodSandbox for \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\" failed" error="failed to destroy network for sandbox \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:18.497513 kubelet[2803]: E1112 20:46:18.497495 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Nov 12 20:46:18.497576 kubelet[2803]: E1112 20:46:18.497533 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537"} Nov 12 20:46:18.497603 kubelet[2803]: E1112 20:46:18.497589 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2cb5bf3f-a6f0-4cca-8e64-700b92fbd244\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:18.497664 kubelet[2803]: E1112 20:46:18.497623 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2cb5bf3f-a6f0-4cca-8e64-700b92fbd244\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-jx2wl" podUID="2cb5bf3f-a6f0-4cca-8e64-700b92fbd244" Nov 12 20:46:18.613119 systemd[1]: cri-containerd-8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b.scope: Deactivated successfully. Nov 12 20:46:18.652355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b-rootfs.mount: Deactivated successfully. 
Nov 12 20:46:18.719701 containerd[1545]: time="2024-11-12T20:46:18.719571520Z" level=info msg="shim disconnected" id=8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b namespace=k8s.io Nov 12 20:46:18.719701 containerd[1545]: time="2024-11-12T20:46:18.719644642Z" level=warning msg="cleaning up after shim disconnected" id=8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b namespace=k8s.io Nov 12 20:46:18.719701 containerd[1545]: time="2024-11-12T20:46:18.719657782Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:46:18.841357 containerd[1545]: time="2024-11-12T20:46:18.841023257Z" level=error msg="ExecSync for \"8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state" Nov 12 20:46:18.841432 kubelet[2803]: E1112 20:46:18.841213 2803 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Nov 12 20:46:18.841654 containerd[1545]: time="2024-11-12T20:46:18.841629641Z" level=error msg="ExecSync for \"8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state" Nov 12 20:46:18.841734 kubelet[2803]: E1112 20:46:18.841720 2803 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Nov 12 20:46:18.841844 containerd[1545]: time="2024-11-12T20:46:18.841826505Z" level=error msg="ExecSync for \"8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b\" 
failed" error="failed to exec in container: container is in CONTAINER_EXITED state" Nov 12 20:46:18.841888 kubelet[2803]: E1112 20:46:18.841878 2803 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Nov 12 20:46:18.855153 kubelet[2803]: I1112 20:46:18.854770 2803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-n2q5b" podStartSLOduration=13.327762147 podStartE2EDuration="28.765139362s" podCreationTimestamp="2024-11-12 20:45:50 +0000 UTC" firstStartedPulling="2024-11-12 20:45:51.347126844 +0000 UTC m=+21.005395500" lastFinishedPulling="2024-11-12 20:46:06.78450406 +0000 UTC m=+36.442772715" observedRunningTime="2024-11-12 20:46:18.765063779 +0000 UTC m=+48.423332444" watchObservedRunningTime="2024-11-12 20:46:18.765139362 +0000 UTC m=+48.423408022" Nov 12 20:46:19.716714 kubelet[2803]: I1112 20:46:19.716651 2803 scope.go:117] "RemoveContainer" containerID="ba677e74f60a67203acb041fec30d74efdbee077a49686f92ad2d08b372c7f00" Nov 12 20:46:19.716924 kubelet[2803]: I1112 20:46:19.716880 2803 scope.go:117] "RemoveContainer" containerID="8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b" Nov 12 20:46:19.717285 kubelet[2803]: E1112 20:46:19.717224 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-n2q5b_calico-system(aa789dee-6d4d-4e09-8c29-cb02d5225385)\"" pod="calico-system/calico-node-n2q5b" podUID="aa789dee-6d4d-4e09-8c29-cb02d5225385" Nov 12 20:46:19.717927 containerd[1545]: time="2024-11-12T20:46:19.717817468Z" level=info msg="RemoveContainer for 
\"ba677e74f60a67203acb041fec30d74efdbee077a49686f92ad2d08b372c7f00\"" Nov 12 20:46:19.726635 containerd[1545]: time="2024-11-12T20:46:19.726609930Z" level=info msg="RemoveContainer for \"ba677e74f60a67203acb041fec30d74efdbee077a49686f92ad2d08b372c7f00\" returns successfully" Nov 12 20:46:20.719572 kubelet[2803]: I1112 20:46:20.719503 2803 scope.go:117] "RemoveContainer" containerID="8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b" Nov 12 20:46:20.719835 kubelet[2803]: E1112 20:46:20.719798 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-n2q5b_calico-system(aa789dee-6d4d-4e09-8c29-cb02d5225385)\"" pod="calico-system/calico-node-n2q5b" podUID="aa789dee-6d4d-4e09-8c29-cb02d5225385" Nov 12 20:46:28.476926 containerd[1545]: time="2024-11-12T20:46:28.476896436Z" level=info msg="StopPodSandbox for \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\"" Nov 12 20:46:28.495998 containerd[1545]: time="2024-11-12T20:46:28.495955274Z" level=error msg="StopPodSandbox for \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\" failed" error="failed to destroy network for sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:28.496410 kubelet[2803]: E1112 20:46:28.496317 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" Nov 12 20:46:28.496410 kubelet[2803]: E1112 20:46:28.496349 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c"} Nov 12 20:46:28.496410 kubelet[2803]: E1112 20:46:28.496379 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b1afbb6d-8e27-4966-8b72-7f067d947668\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:28.496410 kubelet[2803]: E1112 20:46:28.496398 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b1afbb6d-8e27-4966-8b72-7f067d947668\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d6985898d-bw6xq" podUID="b1afbb6d-8e27-4966-8b72-7f067d947668" Nov 12 20:46:29.476451 containerd[1545]: time="2024-11-12T20:46:29.476226892Z" level=info msg="StopPodSandbox for \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\"" Nov 12 20:46:29.494397 containerd[1545]: time="2024-11-12T20:46:29.494356810Z" level=error msg="StopPodSandbox for \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\" failed" error="failed to destroy network for sandbox 
\"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:29.494809 kubelet[2803]: E1112 20:46:29.494530 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Nov 12 20:46:29.494809 kubelet[2803]: E1112 20:46:29.494571 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1"} Nov 12 20:46:29.494809 kubelet[2803]: E1112 20:46:29.494600 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:29.494809 kubelet[2803]: E1112 20:46:29.494619 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d5bcd669-b7s55" podUID="4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c" Nov 12 20:46:30.486542 containerd[1545]: time="2024-11-12T20:46:30.486176113Z" level=info msg="StopPodSandbox for \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\"" Nov 12 20:46:30.486689 containerd[1545]: time="2024-11-12T20:46:30.486597910Z" level=info msg="StopPodSandbox for \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\"" Nov 12 20:46:30.508997 containerd[1545]: time="2024-11-12T20:46:30.508957994Z" level=error msg="StopPodSandbox for \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\" failed" error="failed to destroy network for sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:30.509422 kubelet[2803]: E1112 20:46:30.509398 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Nov 12 20:46:30.509655 kubelet[2803]: E1112 20:46:30.509440 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3"} Nov 12 20:46:30.509655 kubelet[2803]: E1112 20:46:30.509477 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:30.509655 kubelet[2803]: E1112 20:46:30.509503 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q4n22" podUID="92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35" Nov 12 20:46:30.517911 containerd[1545]: time="2024-11-12T20:46:30.517848172Z" level=error msg="StopPodSandbox for \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\" failed" error="failed to destroy network for sandbox \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:30.518111 kubelet[2803]: E1112 20:46:30.518005 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Nov 12 20:46:30.518111 kubelet[2803]: E1112 20:46:30.518040 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36"} Nov 12 20:46:30.518111 kubelet[2803]: E1112 20:46:30.518063 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a262788-14a2-4491-b838-16c919aed65b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:30.518111 kubelet[2803]: E1112 20:46:30.518081 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a262788-14a2-4491-b838-16c919aed65b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d5bcd669-xs25j" podUID="7a262788-14a2-4491-b838-16c919aed65b" Nov 12 20:46:31.476228 containerd[1545]: time="2024-11-12T20:46:31.476190886Z" level=info msg="StopPodSandbox for \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\"" Nov 12 20:46:31.492451 containerd[1545]: time="2024-11-12T20:46:31.492374784Z" level=error msg="StopPodSandbox for \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\" failed" error="failed to destroy network for sandbox 
\"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:31.492576 kubelet[2803]: E1112 20:46:31.492506 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Nov 12 20:46:31.492576 kubelet[2803]: E1112 20:46:31.492536 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185"} Nov 12 20:46:31.492647 kubelet[2803]: E1112 20:46:31.492583 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5475df33-a25f-4c6b-acfc-caf320cb59b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:31.492647 kubelet[2803]: E1112 20:46:31.492604 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5475df33-a25f-4c6b-acfc-caf320cb59b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mr2hw" podUID="5475df33-a25f-4c6b-acfc-caf320cb59b1" Nov 12 20:46:32.476258 containerd[1545]: time="2024-11-12T20:46:32.476130337Z" level=info msg="StopPodSandbox for \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\"" Nov 12 20:46:32.492516 containerd[1545]: time="2024-11-12T20:46:32.492459696Z" level=error msg="StopPodSandbox for \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\" failed" error="failed to destroy network for sandbox \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:32.493000 kubelet[2803]: E1112 20:46:32.492619 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Nov 12 20:46:32.493000 kubelet[2803]: E1112 20:46:32.492652 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537"} Nov 12 20:46:32.493000 kubelet[2803]: E1112 20:46:32.492673 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2cb5bf3f-a6f0-4cca-8e64-700b92fbd244\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:32.493000 kubelet[2803]: E1112 20:46:32.492691 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2cb5bf3f-a6f0-4cca-8e64-700b92fbd244\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-jx2wl" podUID="2cb5bf3f-a6f0-4cca-8e64-700b92fbd244" Nov 12 20:46:35.475883 kubelet[2803]: I1112 20:46:35.475854 2803 scope.go:117] "RemoveContainer" containerID="8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b" Nov 12 20:46:35.476326 kubelet[2803]: E1112 20:46:35.476219 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-n2q5b_calico-system(aa789dee-6d4d-4e09-8c29-cb02d5225385)\"" pod="calico-system/calico-node-n2q5b" podUID="aa789dee-6d4d-4e09-8c29-cb02d5225385" Nov 12 20:46:41.475930 containerd[1545]: time="2024-11-12T20:46:41.475718705Z" level=info msg="StopPodSandbox for \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\"" Nov 12 20:46:41.493539 containerd[1545]: time="2024-11-12T20:46:41.493491504Z" level=error msg="StopPodSandbox for \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\" failed" error="failed to destroy network for sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:41.493730 kubelet[2803]: E1112 20:46:41.493716 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Nov 12 20:46:41.493940 kubelet[2803]: E1112 20:46:41.493749 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3"} Nov 12 20:46:41.493940 kubelet[2803]: E1112 20:46:41.493778 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:41.493940 kubelet[2803]: E1112 20:46:41.493796 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q4n22" podUID="92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35" Nov 12 20:46:42.477164 containerd[1545]: time="2024-11-12T20:46:42.475529270Z" level=info msg="StopPodSandbox for \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\"" Nov 12 20:46:42.496012 containerd[1545]: time="2024-11-12T20:46:42.495976716Z" level=error msg="StopPodSandbox for \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\" failed" error="failed to destroy network for sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.496161 kubelet[2803]: E1112 20:46:42.496142 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" Nov 12 20:46:42.496350 kubelet[2803]: E1112 20:46:42.496171 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c"} Nov 12 20:46:42.496350 kubelet[2803]: E1112 20:46:42.496197 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b1afbb6d-8e27-4966-8b72-7f067d947668\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:42.496350 kubelet[2803]: E1112 20:46:42.496216 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b1afbb6d-8e27-4966-8b72-7f067d947668\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d6985898d-bw6xq" podUID="b1afbb6d-8e27-4966-8b72-7f067d947668" Nov 12 20:46:44.476665 containerd[1545]: time="2024-11-12T20:46:44.476466327Z" level=info msg="StopPodSandbox for \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\"" Nov 12 20:46:44.479961 containerd[1545]: time="2024-11-12T20:46:44.478782172Z" level=info msg="StopPodSandbox for \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\"" Nov 12 20:46:44.497729 containerd[1545]: time="2024-11-12T20:46:44.497697899Z" level=error msg="StopPodSandbox for \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\" failed" error="failed to destroy network for sandbox \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:44.498141 kubelet[2803]: E1112 20:46:44.497848 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Nov 12 20:46:44.498141 kubelet[2803]: E1112 20:46:44.497876 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1"} Nov 12 20:46:44.498141 kubelet[2803]: E1112 20:46:44.497917 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:44.501211 containerd[1545]: time="2024-11-12T20:46:44.501191543Z" level=error msg="StopPodSandbox for \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\" failed" error="failed to destroy network for sandbox \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:44.501302 kubelet[2803]: E1112 20:46:44.501290 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Nov 12 20:46:44.501331 kubelet[2803]: E1112 20:46:44.501314 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36"} Nov 12 20:46:44.501360 kubelet[2803]: E1112 20:46:44.501337 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a262788-14a2-4491-b838-16c919aed65b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:44.501360 kubelet[2803]: E1112 20:46:44.501353 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a262788-14a2-4491-b838-16c919aed65b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d5bcd669-xs25j" podUID="7a262788-14a2-4491-b838-16c919aed65b" Nov 12 20:46:44.503136 kubelet[2803]: E1112 20:46:44.503123 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d5bcd669-b7s55" podUID="4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c" Nov 12 20:46:46.476529 containerd[1545]: time="2024-11-12T20:46:46.476433509Z" level=info msg="StopPodSandbox for \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\"" Nov 12 20:46:46.477158 containerd[1545]: time="2024-11-12T20:46:46.477102681Z" level=info msg="StopPodSandbox for \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\"" Nov 12 20:46:46.496818 containerd[1545]: time="2024-11-12T20:46:46.496739735Z" level=error msg="StopPodSandbox for \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\" failed" error="failed to destroy network for sandbox \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:46.497122 kubelet[2803]: E1112 20:46:46.496899 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Nov 12 20:46:46.497122 kubelet[2803]: E1112 20:46:46.496925 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537"} Nov 12 20:46:46.497122 kubelet[2803]: E1112 20:46:46.496947 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"2cb5bf3f-a6f0-4cca-8e64-700b92fbd244\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:46.497122 kubelet[2803]: E1112 20:46:46.496965 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2cb5bf3f-a6f0-4cca-8e64-700b92fbd244\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-jx2wl" podUID="2cb5bf3f-a6f0-4cca-8e64-700b92fbd244" Nov 12 20:46:46.503673 containerd[1545]: time="2024-11-12T20:46:46.503599615Z" level=error msg="StopPodSandbox for \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\" failed" error="failed to destroy network for sandbox \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:46.503801 kubelet[2803]: E1112 20:46:46.503742 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Nov 12 20:46:46.503801 kubelet[2803]: E1112 20:46:46.503772 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185"} Nov 12 20:46:46.503801 kubelet[2803]: E1112 20:46:46.503794 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5475df33-a25f-4c6b-acfc-caf320cb59b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:46.503900 kubelet[2803]: E1112 20:46:46.503816 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5475df33-a25f-4c6b-acfc-caf320cb59b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mr2hw" podUID="5475df33-a25f-4c6b-acfc-caf320cb59b1" Nov 12 20:46:48.395808 systemd[1]: Started sshd@7-139.178.70.104:22-139.178.68.195:32840.service - OpenSSH per-connection server daemon (139.178.68.195:32840). 
Nov 12 20:46:48.545432 sshd[4353]: Accepted publickey for core from 139.178.68.195 port 32840 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:46:48.546062 sshd[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:48.551212 systemd-logind[1522]: New session 10 of user core. Nov 12 20:46:48.554718 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 20:46:48.738485 sshd[4353]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:48.741010 systemd[1]: sshd@7-139.178.70.104:22-139.178.68.195:32840.service: Deactivated successfully. Nov 12 20:46:48.742192 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 20:46:48.742648 systemd-logind[1522]: Session 10 logged out. Waiting for processes to exit. Nov 12 20:46:48.743428 systemd-logind[1522]: Removed session 10. Nov 12 20:46:49.475921 kubelet[2803]: I1112 20:46:49.475897 2803 scope.go:117] "RemoveContainer" containerID="8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b" Nov 12 20:46:49.481618 containerd[1545]: time="2024-11-12T20:46:49.481563977Z" level=info msg="CreateContainer within sandbox \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\" for container &ContainerMetadata{Name:calico-node,Attempt:3,}" Nov 12 20:46:49.498589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1879293607.mount: Deactivated successfully. 
Nov 12 20:46:49.499355 containerd[1545]: time="2024-11-12T20:46:49.499187446Z" level=info msg="CreateContainer within sandbox \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\" for &ContainerMetadata{Name:calico-node,Attempt:3,} returns container id \"1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66\"" Nov 12 20:46:49.499860 containerd[1545]: time="2024-11-12T20:46:49.499696364Z" level=info msg="StartContainer for \"1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66\"" Nov 12 20:46:49.527687 systemd[1]: Started cri-containerd-1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66.scope - libcontainer container 1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66. Nov 12 20:46:49.546179 containerd[1545]: time="2024-11-12T20:46:49.546135926Z" level=info msg="StartContainer for \"1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66\" returns successfully" Nov 12 20:46:49.702436 systemd[1]: cri-containerd-1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66.scope: Deactivated successfully. Nov 12 20:46:49.717573 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66-rootfs.mount: Deactivated successfully. 
Nov 12 20:46:49.791392 containerd[1545]: time="2024-11-12T20:46:49.790889026Z" level=info msg="shim disconnected" id=1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66 namespace=k8s.io Nov 12 20:46:49.791392 containerd[1545]: time="2024-11-12T20:46:49.791025345Z" level=warning msg="cleaning up after shim disconnected" id=1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66 namespace=k8s.io Nov 12 20:46:49.791392 containerd[1545]: time="2024-11-12T20:46:49.791035879Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:46:49.813305 containerd[1545]: time="2024-11-12T20:46:49.801097368Z" level=error msg="ExecSync for \"1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"125c9dea1a4a2fb7dd5d00a4204225f2ff9e1503493ea392d43067106ed5e658\": task 1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66 not found: not found" Nov 12 20:46:49.813508 kubelet[2803]: E1112 20:46:49.813182 2803 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"125c9dea1a4a2fb7dd5d00a4204225f2ff9e1503493ea392d43067106ed5e658\": task 1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66 not found: not found" containerID="1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Nov 12 20:46:49.813891 containerd[1545]: time="2024-11-12T20:46:49.813870319Z" level=error msg="ExecSync for \"1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state" Nov 12 20:46:49.814033 kubelet[2803]: E1112 20:46:49.814026 2803 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in 
CONTAINER_EXITED state" containerID="1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Nov 12 20:46:49.814255 containerd[1545]: time="2024-11-12T20:46:49.814206798Z" level=error msg="ExecSync for \"1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state" Nov 12 20:46:49.814363 kubelet[2803]: E1112 20:46:49.814349 2803 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Nov 12 20:46:50.786458 kubelet[2803]: I1112 20:46:50.786434 2803 scope.go:117] "RemoveContainer" containerID="8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b" Nov 12 20:46:50.786766 kubelet[2803]: I1112 20:46:50.786750 2803 scope.go:117] "RemoveContainer" containerID="1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66" Nov 12 20:46:50.787233 kubelet[2803]: E1112 20:46:50.787220 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 40s restarting failed container=calico-node pod=calico-node-n2q5b_calico-system(aa789dee-6d4d-4e09-8c29-cb02d5225385)\"" pod="calico-system/calico-node-n2q5b" podUID="aa789dee-6d4d-4e09-8c29-cb02d5225385" Nov 12 20:46:50.787885 containerd[1545]: time="2024-11-12T20:46:50.787862489Z" level=info msg="RemoveContainer for \"8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b\"" Nov 12 20:46:50.801675 containerd[1545]: time="2024-11-12T20:46:50.801638304Z" level=info msg="RemoveContainer for \"8f8b7a4f57c7c3c1aecfeb35b82c1dae1d36cac7be714b46965662e6bd107d4b\" returns successfully" Nov 12 20:46:51.796988 kubelet[2803]: I1112 
20:46:51.796922 2803 scope.go:117] "RemoveContainer" containerID="1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66" Nov 12 20:46:51.797749 kubelet[2803]: E1112 20:46:51.797731 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 40s restarting failed container=calico-node pod=calico-node-n2q5b_calico-system(aa789dee-6d4d-4e09-8c29-cb02d5225385)\"" pod="calico-system/calico-node-n2q5b" podUID="aa789dee-6d4d-4e09-8c29-cb02d5225385" Nov 12 20:46:52.475850 containerd[1545]: time="2024-11-12T20:46:52.475762400Z" level=info msg="StopPodSandbox for \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\"" Nov 12 20:46:52.492977 containerd[1545]: time="2024-11-12T20:46:52.492939698Z" level=error msg="StopPodSandbox for \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\" failed" error="failed to destroy network for sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:52.493121 kubelet[2803]: E1112 20:46:52.493102 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Nov 12 20:46:52.493169 kubelet[2803]: E1112 20:46:52.493136 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3"} Nov 12 20:46:52.493169 
kubelet[2803]: E1112 20:46:52.493158 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:52.493234 kubelet[2803]: E1112 20:46:52.493179 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q4n22" podUID="92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35" Nov 12 20:46:52.799209 containerd[1545]: time="2024-11-12T20:46:52.798845324Z" level=info msg="StopPodSandbox for \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\"" Nov 12 20:46:52.799209 containerd[1545]: time="2024-11-12T20:46:52.798875479Z" level=info msg="Container to stop \"5922089d4ea4d4772bc992a7b4845e6daf6f1473dc8fddfcffccae054317efbd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 20:46:52.799209 containerd[1545]: time="2024-11-12T20:46:52.798890029Z" level=info msg="Container to stop \"f570b984c43152c15d3a482ff43e6c671b38ce0cced9a891b788b7416ed0b4d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 20:46:52.799209 containerd[1545]: time="2024-11-12T20:46:52.798897627Z" level=info msg="Container to stop 
\"1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 20:46:52.801471 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04-shm.mount: Deactivated successfully. Nov 12 20:46:52.807410 systemd[1]: cri-containerd-c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04.scope: Deactivated successfully. Nov 12 20:46:52.824017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04-rootfs.mount: Deactivated successfully. Nov 12 20:46:52.960954 containerd[1545]: time="2024-11-12T20:46:52.960900556Z" level=info msg="shim disconnected" id=c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04 namespace=k8s.io Nov 12 20:46:52.960954 containerd[1545]: time="2024-11-12T20:46:52.960945969Z" level=warning msg="cleaning up after shim disconnected" id=c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04 namespace=k8s.io Nov 12 20:46:52.960954 containerd[1545]: time="2024-11-12T20:46:52.960953556Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:46:52.980674 containerd[1545]: time="2024-11-12T20:46:52.980574760Z" level=info msg="TearDown network for sandbox \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\" successfully" Nov 12 20:46:52.980674 containerd[1545]: time="2024-11-12T20:46:52.980602751Z" level=info msg="StopPodSandbox for \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\" returns successfully" Nov 12 20:46:53.046081 kubelet[2803]: I1112 20:46:53.046054 2803 topology_manager.go:215] "Topology Admit Handler" podUID="65542297-4416-40ad-a77b-a5716fbdcc81" podNamespace="calico-system" podName="calico-node-tpsz6" Nov 12 20:46:53.075893 kubelet[2803]: E1112 20:46:53.075808 2803 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="aa789dee-6d4d-4e09-8c29-cb02d5225385" containerName="install-cni" Nov 12 20:46:53.075893 kubelet[2803]: E1112 20:46:53.075849 2803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa789dee-6d4d-4e09-8c29-cb02d5225385" containerName="calico-node" Nov 12 20:46:53.075893 kubelet[2803]: E1112 20:46:53.075860 2803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa789dee-6d4d-4e09-8c29-cb02d5225385" containerName="flexvol-driver" Nov 12 20:46:53.075893 kubelet[2803]: E1112 20:46:53.075865 2803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa789dee-6d4d-4e09-8c29-cb02d5225385" containerName="calico-node" Nov 12 20:46:53.075893 kubelet[2803]: E1112 20:46:53.075869 2803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa789dee-6d4d-4e09-8c29-cb02d5225385" containerName="calico-node" Nov 12 20:46:53.075893 kubelet[2803]: E1112 20:46:53.075873 2803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa789dee-6d4d-4e09-8c29-cb02d5225385" containerName="calico-node" Nov 12 20:46:53.095305 kubelet[2803]: I1112 20:46:53.095038 2803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-var-run-calico\") pod \"aa789dee-6d4d-4e09-8c29-cb02d5225385\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " Nov 12 20:46:53.095305 kubelet[2803]: I1112 20:46:53.095065 2803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-cni-bin-dir\") pod \"aa789dee-6d4d-4e09-8c29-cb02d5225385\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " Nov 12 20:46:53.095305 kubelet[2803]: I1112 20:46:53.095076 2803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-cni-net-dir\") pod \"aa789dee-6d4d-4e09-8c29-cb02d5225385\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " Nov 12 20:46:53.095305 kubelet[2803]: I1112 20:46:53.095090 2803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-flexvol-driver-host\") pod \"aa789dee-6d4d-4e09-8c29-cb02d5225385\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " Nov 12 20:46:53.095305 kubelet[2803]: I1112 20:46:53.095100 2803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-cni-log-dir\") pod \"aa789dee-6d4d-4e09-8c29-cb02d5225385\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " Nov 12 20:46:53.095305 kubelet[2803]: I1112 20:46:53.095114 2803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-xtables-lock\") pod \"aa789dee-6d4d-4e09-8c29-cb02d5225385\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " Nov 12 20:46:53.095482 kubelet[2803]: I1112 20:46:53.095125 2803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-var-lib-calico\") pod \"aa789dee-6d4d-4e09-8c29-cb02d5225385\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " Nov 12 20:46:53.095482 kubelet[2803]: I1112 20:46:53.095139 2803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa789dee-6d4d-4e09-8c29-cb02d5225385-tigera-ca-bundle\") pod \"aa789dee-6d4d-4e09-8c29-cb02d5225385\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " Nov 12 20:46:53.095482 kubelet[2803]: I1112 20:46:53.095153 2803 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr7gt\" (UniqueName: \"kubernetes.io/projected/aa789dee-6d4d-4e09-8c29-cb02d5225385-kube-api-access-cr7gt\") pod \"aa789dee-6d4d-4e09-8c29-cb02d5225385\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " Nov 12 20:46:53.095482 kubelet[2803]: I1112 20:46:53.095164 2803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-lib-modules\") pod \"aa789dee-6d4d-4e09-8c29-cb02d5225385\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " Nov 12 20:46:53.095482 kubelet[2803]: I1112 20:46:53.095176 2803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-policysync\") pod \"aa789dee-6d4d-4e09-8c29-cb02d5225385\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " Nov 12 20:46:53.095482 kubelet[2803]: I1112 20:46:53.095190 2803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aa789dee-6d4d-4e09-8c29-cb02d5225385-node-certs\") pod \"aa789dee-6d4d-4e09-8c29-cb02d5225385\" (UID: \"aa789dee-6d4d-4e09-8c29-cb02d5225385\") " Nov 12 20:46:53.110533 kubelet[2803]: I1112 20:46:53.110494 2803 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa789dee-6d4d-4e09-8c29-cb02d5225385" containerName="calico-node" Nov 12 20:46:53.110533 kubelet[2803]: I1112 20:46:53.110520 2803 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa789dee-6d4d-4e09-8c29-cb02d5225385" containerName="calico-node" Nov 12 20:46:53.110647 kubelet[2803]: I1112 20:46:53.110581 2803 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa789dee-6d4d-4e09-8c29-cb02d5225385" containerName="calico-node" Nov 12 20:46:53.110647 kubelet[2803]: I1112 20:46:53.110590 2803 
memory_manager.go:354] "RemoveStaleState removing state" podUID="aa789dee-6d4d-4e09-8c29-cb02d5225385" containerName="calico-node" Nov 12 20:46:53.116902 kubelet[2803]: I1112 20:46:53.114898 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "aa789dee-6d4d-4e09-8c29-cb02d5225385" (UID: "aa789dee-6d4d-4e09-8c29-cb02d5225385"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:46:53.124876 kubelet[2803]: I1112 20:46:53.110996 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "aa789dee-6d4d-4e09-8c29-cb02d5225385" (UID: "aa789dee-6d4d-4e09-8c29-cb02d5225385"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:46:53.124876 kubelet[2803]: I1112 20:46:53.124806 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "aa789dee-6d4d-4e09-8c29-cb02d5225385" (UID: "aa789dee-6d4d-4e09-8c29-cb02d5225385"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:46:53.124876 kubelet[2803]: I1112 20:46:53.124821 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "aa789dee-6d4d-4e09-8c29-cb02d5225385" (UID: "aa789dee-6d4d-4e09-8c29-cb02d5225385"). InnerVolumeSpecName "cni-bin-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:46:53.124876 kubelet[2803]: I1112 20:46:53.124832 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "aa789dee-6d4d-4e09-8c29-cb02d5225385" (UID: "aa789dee-6d4d-4e09-8c29-cb02d5225385"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:46:53.124876 kubelet[2803]: I1112 20:46:53.124842 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "aa789dee-6d4d-4e09-8c29-cb02d5225385" (UID: "aa789dee-6d4d-4e09-8c29-cb02d5225385"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:46:53.125005 kubelet[2803]: I1112 20:46:53.124852 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "aa789dee-6d4d-4e09-8c29-cb02d5225385" (UID: "aa789dee-6d4d-4e09-8c29-cb02d5225385"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:46:53.142699 kubelet[2803]: I1112 20:46:53.142674 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "aa789dee-6d4d-4e09-8c29-cb02d5225385" (UID: "aa789dee-6d4d-4e09-8c29-cb02d5225385"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:46:53.143545 kubelet[2803]: I1112 20:46:53.142887 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-policysync" (OuterVolumeSpecName: "policysync") pod "aa789dee-6d4d-4e09-8c29-cb02d5225385" (UID: "aa789dee-6d4d-4e09-8c29-cb02d5225385"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:46:53.148115 systemd[1]: Created slice kubepods-besteffort-pod65542297_4416_40ad_a77b_a5716fbdcc81.slice - libcontainer container kubepods-besteffort-pod65542297_4416_40ad_a77b_a5716fbdcc81.slice. Nov 12 20:46:53.161308 systemd[1]: var-lib-kubelet-pods-aa789dee\x2d6d4d\x2d4e09\x2d8c29\x2dcb02d5225385-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcr7gt.mount: Deactivated successfully. Nov 12 20:46:53.165534 systemd[1]: var-lib-kubelet-pods-aa789dee\x2d6d4d\x2d4e09\x2d8c29\x2dcb02d5225385-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Nov 12 20:46:53.165650 systemd[1]: var-lib-kubelet-pods-aa789dee\x2d6d4d\x2d4e09\x2d8c29\x2dcb02d5225385-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Nov 12 20:46:53.179619 kubelet[2803]: I1112 20:46:53.179582 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa789dee-6d4d-4e09-8c29-cb02d5225385-node-certs" (OuterVolumeSpecName: "node-certs") pod "aa789dee-6d4d-4e09-8c29-cb02d5225385" (UID: "aa789dee-6d4d-4e09-8c29-cb02d5225385"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 12 20:46:53.179721 kubelet[2803]: I1112 20:46:53.179703 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa789dee-6d4d-4e09-8c29-cb02d5225385-kube-api-access-cr7gt" (OuterVolumeSpecName: "kube-api-access-cr7gt") pod "aa789dee-6d4d-4e09-8c29-cb02d5225385" (UID: "aa789dee-6d4d-4e09-8c29-cb02d5225385"). InnerVolumeSpecName "kube-api-access-cr7gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 20:46:53.180661 kubelet[2803]: I1112 20:46:53.180641 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa789dee-6d4d-4e09-8c29-cb02d5225385-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "aa789dee-6d4d-4e09-8c29-cb02d5225385" (UID: "aa789dee-6d4d-4e09-8c29-cb02d5225385"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 20:46:53.196060 kubelet[2803]: I1112 20:46:53.196020 2803 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aa789dee-6d4d-4e09-8c29-cb02d5225385-node-certs\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:53.196060 kubelet[2803]: I1112 20:46:53.196056 2803 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:53.196060 kubelet[2803]: I1112 20:46:53.196069 2803 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-var-run-calico\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:53.196216 kubelet[2803]: I1112 20:46:53.196080 2803 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-cni-bin-dir\") on node 
\"localhost\" DevicePath \"\"" Nov 12 20:46:53.196216 kubelet[2803]: I1112 20:46:53.196092 2803 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:53.196216 kubelet[2803]: I1112 20:46:53.196101 2803 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:53.196216 kubelet[2803]: I1112 20:46:53.196110 2803 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:53.196216 kubelet[2803]: I1112 20:46:53.196121 2803 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa789dee-6d4d-4e09-8c29-cb02d5225385-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:53.196216 kubelet[2803]: I1112 20:46:53.196131 2803 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:53.196216 kubelet[2803]: I1112 20:46:53.196139 2803 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:53.196216 kubelet[2803]: I1112 20:46:53.196149 2803 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aa789dee-6d4d-4e09-8c29-cb02d5225385-policysync\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:53.196378 kubelet[2803]: I1112 20:46:53.196160 2803 
reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cr7gt\" (UniqueName: \"kubernetes.io/projected/aa789dee-6d4d-4e09-8c29-cb02d5225385-kube-api-access-cr7gt\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:53.297357 kubelet[2803]: I1112 20:46:53.297320 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65542297-4416-40ad-a77b-a5716fbdcc81-tigera-ca-bundle\") pod \"calico-node-tpsz6\" (UID: \"65542297-4416-40ad-a77b-a5716fbdcc81\") " pod="calico-system/calico-node-tpsz6" Nov 12 20:46:53.297357 kubelet[2803]: I1112 20:46:53.297370 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/65542297-4416-40ad-a77b-a5716fbdcc81-flexvol-driver-host\") pod \"calico-node-tpsz6\" (UID: \"65542297-4416-40ad-a77b-a5716fbdcc81\") " pod="calico-system/calico-node-tpsz6" Nov 12 20:46:53.297635 kubelet[2803]: I1112 20:46:53.297390 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/65542297-4416-40ad-a77b-a5716fbdcc81-var-lib-calico\") pod \"calico-node-tpsz6\" (UID: \"65542297-4416-40ad-a77b-a5716fbdcc81\") " pod="calico-system/calico-node-tpsz6" Nov 12 20:46:53.297635 kubelet[2803]: I1112 20:46:53.297450 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65542297-4416-40ad-a77b-a5716fbdcc81-lib-modules\") pod \"calico-node-tpsz6\" (UID: \"65542297-4416-40ad-a77b-a5716fbdcc81\") " pod="calico-system/calico-node-tpsz6" Nov 12 20:46:53.297635 kubelet[2803]: I1112 20:46:53.297481 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/65542297-4416-40ad-a77b-a5716fbdcc81-policysync\") pod \"calico-node-tpsz6\" (UID: \"65542297-4416-40ad-a77b-a5716fbdcc81\") " pod="calico-system/calico-node-tpsz6" Nov 12 20:46:53.297635 kubelet[2803]: I1112 20:46:53.297520 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/65542297-4416-40ad-a77b-a5716fbdcc81-var-run-calico\") pod \"calico-node-tpsz6\" (UID: \"65542297-4416-40ad-a77b-a5716fbdcc81\") " pod="calico-system/calico-node-tpsz6" Nov 12 20:46:53.297635 kubelet[2803]: I1112 20:46:53.297596 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/65542297-4416-40ad-a77b-a5716fbdcc81-cni-log-dir\") pod \"calico-node-tpsz6\" (UID: \"65542297-4416-40ad-a77b-a5716fbdcc81\") " pod="calico-system/calico-node-tpsz6" Nov 12 20:46:53.297740 kubelet[2803]: I1112 20:46:53.297654 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/65542297-4416-40ad-a77b-a5716fbdcc81-cni-net-dir\") pod \"calico-node-tpsz6\" (UID: \"65542297-4416-40ad-a77b-a5716fbdcc81\") " pod="calico-system/calico-node-tpsz6" Nov 12 20:46:53.297740 kubelet[2803]: I1112 20:46:53.297673 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/65542297-4416-40ad-a77b-a5716fbdcc81-node-certs\") pod \"calico-node-tpsz6\" (UID: \"65542297-4416-40ad-a77b-a5716fbdcc81\") " pod="calico-system/calico-node-tpsz6" Nov 12 20:46:53.297740 kubelet[2803]: I1112 20:46:53.297693 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65542297-4416-40ad-a77b-a5716fbdcc81-xtables-lock\") pod 
\"calico-node-tpsz6\" (UID: \"65542297-4416-40ad-a77b-a5716fbdcc81\") " pod="calico-system/calico-node-tpsz6" Nov 12 20:46:53.297740 kubelet[2803]: I1112 20:46:53.297711 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/65542297-4416-40ad-a77b-a5716fbdcc81-cni-bin-dir\") pod \"calico-node-tpsz6\" (UID: \"65542297-4416-40ad-a77b-a5716fbdcc81\") " pod="calico-system/calico-node-tpsz6" Nov 12 20:46:53.297740 kubelet[2803]: I1112 20:46:53.297723 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tdj7\" (UniqueName: \"kubernetes.io/projected/65542297-4416-40ad-a77b-a5716fbdcc81-kube-api-access-8tdj7\") pod \"calico-node-tpsz6\" (UID: \"65542297-4416-40ad-a77b-a5716fbdcc81\") " pod="calico-system/calico-node-tpsz6" Nov 12 20:46:53.471464 containerd[1545]: time="2024-11-12T20:46:53.471433125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tpsz6,Uid:65542297-4416-40ad-a77b-a5716fbdcc81,Namespace:calico-system,Attempt:0,}" Nov 12 20:46:53.499796 containerd[1545]: time="2024-11-12T20:46:53.499656184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:46:53.499796 containerd[1545]: time="2024-11-12T20:46:53.499699759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:46:53.499796 containerd[1545]: time="2024-11-12T20:46:53.499730004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:53.500171 containerd[1545]: time="2024-11-12T20:46:53.499811077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:53.519721 systemd[1]: Started cri-containerd-04d3a6b9d37175ee04671f5712be671822b4462995f47e1374975f4d5976f14f.scope - libcontainer container 04d3a6b9d37175ee04671f5712be671822b4462995f47e1374975f4d5976f14f. Nov 12 20:46:53.547963 containerd[1545]: time="2024-11-12T20:46:53.547818804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tpsz6,Uid:65542297-4416-40ad-a77b-a5716fbdcc81,Namespace:calico-system,Attempt:0,} returns sandbox id \"04d3a6b9d37175ee04671f5712be671822b4462995f47e1374975f4d5976f14f\"" Nov 12 20:46:53.550782 containerd[1545]: time="2024-11-12T20:46:53.550739016Z" level=info msg="CreateContainer within sandbox \"04d3a6b9d37175ee04671f5712be671822b4462995f47e1374975f4d5976f14f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 20:46:53.601859 containerd[1545]: time="2024-11-12T20:46:53.601824639Z" level=info msg="CreateContainer within sandbox \"04d3a6b9d37175ee04671f5712be671822b4462995f47e1374975f4d5976f14f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"32492c1010925f6396f0225c6dc4d055369f0edff39dac6b1664007d56dcd182\"" Nov 12 20:46:53.602574 containerd[1545]: time="2024-11-12T20:46:53.602155422Z" level=info msg="StartContainer for \"32492c1010925f6396f0225c6dc4d055369f0edff39dac6b1664007d56dcd182\"" Nov 12 20:46:53.622648 systemd[1]: Started cri-containerd-32492c1010925f6396f0225c6dc4d055369f0edff39dac6b1664007d56dcd182.scope - libcontainer container 32492c1010925f6396f0225c6dc4d055369f0edff39dac6b1664007d56dcd182. Nov 12 20:46:53.654045 containerd[1545]: time="2024-11-12T20:46:53.654010863Z" level=info msg="StartContainer for \"32492c1010925f6396f0225c6dc4d055369f0edff39dac6b1664007d56dcd182\" returns successfully" Nov 12 20:46:53.746594 systemd[1]: Started sshd@8-139.178.70.104:22-139.178.68.195:32854.service - OpenSSH per-connection server daemon (139.178.68.195:32854). 
Nov 12 20:46:53.811653 kubelet[2803]: I1112 20:46:53.811625 2803 scope.go:117] "RemoveContainer" containerID="1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66" Nov 12 20:46:53.813257 containerd[1545]: time="2024-11-12T20:46:53.813228840Z" level=info msg="RemoveContainer for \"1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66\"" Nov 12 20:46:53.822311 systemd[1]: Removed slice kubepods-besteffort-podaa789dee_6d4d_4e09_8c29_cb02d5225385.slice - libcontainer container kubepods-besteffort-podaa789dee_6d4d_4e09_8c29_cb02d5225385.slice. Nov 12 20:46:53.873883 containerd[1545]: time="2024-11-12T20:46:53.873817765Z" level=info msg="RemoveContainer for \"1cfd3fd6aae74eeb0bf5f1776bcbc817807f3c0996be481fff14cc9c7d30da66\" returns successfully" Nov 12 20:46:53.874118 kubelet[2803]: I1112 20:46:53.874085 2803 scope.go:117] "RemoveContainer" containerID="f570b984c43152c15d3a482ff43e6c671b38ce0cced9a891b788b7416ed0b4d4" Nov 12 20:46:53.875029 containerd[1545]: time="2024-11-12T20:46:53.874953079Z" level=info msg="RemoveContainer for \"f570b984c43152c15d3a482ff43e6c671b38ce0cced9a891b788b7416ed0b4d4\"" Nov 12 20:46:53.899639 systemd[1]: cri-containerd-32492c1010925f6396f0225c6dc4d055369f0edff39dac6b1664007d56dcd182.scope: Deactivated successfully. Nov 12 20:46:53.915971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32492c1010925f6396f0225c6dc4d055369f0edff39dac6b1664007d56dcd182-rootfs.mount: Deactivated successfully. 
Nov 12 20:46:53.936191 containerd[1545]: time="2024-11-12T20:46:53.936162713Z" level=info msg="RemoveContainer for \"f570b984c43152c15d3a482ff43e6c671b38ce0cced9a891b788b7416ed0b4d4\" returns successfully" Nov 12 20:46:53.936869 kubelet[2803]: I1112 20:46:53.936416 2803 scope.go:117] "RemoveContainer" containerID="5922089d4ea4d4772bc992a7b4845e6daf6f1473dc8fddfcffccae054317efbd" Nov 12 20:46:53.937466 containerd[1545]: time="2024-11-12T20:46:53.937439873Z" level=info msg="RemoveContainer for \"5922089d4ea4d4772bc992a7b4845e6daf6f1473dc8fddfcffccae054317efbd\"" Nov 12 20:46:53.970238 containerd[1545]: time="2024-11-12T20:46:53.970182417Z" level=info msg="RemoveContainer for \"5922089d4ea4d4772bc992a7b4845e6daf6f1473dc8fddfcffccae054317efbd\" returns successfully" Nov 12 20:46:54.102378 containerd[1545]: time="2024-11-12T20:46:54.102227755Z" level=info msg="shim disconnected" id=32492c1010925f6396f0225c6dc4d055369f0edff39dac6b1664007d56dcd182 namespace=k8s.io Nov 12 20:46:54.102378 containerd[1545]: time="2024-11-12T20:46:54.102272996Z" level=warning msg="cleaning up after shim disconnected" id=32492c1010925f6396f0225c6dc4d055369f0edff39dac6b1664007d56dcd182 namespace=k8s.io Nov 12 20:46:54.102378 containerd[1545]: time="2024-11-12T20:46:54.102281400Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:46:54.244612 sshd[4559]: Accepted publickey for core from 139.178.68.195 port 32854 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:46:54.250459 sshd[4559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:54.263117 systemd-logind[1522]: New session 11 of user core. Nov 12 20:46:54.272661 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 12 20:46:54.476718 containerd[1545]: time="2024-11-12T20:46:54.476681990Z" level=info msg="StopPodSandbox for \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\"" Nov 12 20:46:54.489020 kubelet[2803]: I1112 20:46:54.488988 2803 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="aa789dee-6d4d-4e09-8c29-cb02d5225385" path="/var/lib/kubelet/pods/aa789dee-6d4d-4e09-8c29-cb02d5225385/volumes" Nov 12 20:46:54.497779 containerd[1545]: time="2024-11-12T20:46:54.497739062Z" level=error msg="StopPodSandbox for \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\" failed" error="failed to destroy network for sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:54.498036 kubelet[2803]: E1112 20:46:54.497914 2803 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" Nov 12 20:46:54.498036 kubelet[2803]: E1112 20:46:54.497946 2803 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c"} Nov 12 20:46:54.498036 kubelet[2803]: E1112 20:46:54.497972 2803 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b1afbb6d-8e27-4966-8b72-7f067d947668\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:54.498036 kubelet[2803]: E1112 20:46:54.497992 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b1afbb6d-8e27-4966-8b72-7f067d947668\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d6985898d-bw6xq" podUID="b1afbb6d-8e27-4966-8b72-7f067d947668" Nov 12 20:46:54.818537 containerd[1545]: time="2024-11-12T20:46:54.818336716Z" level=info msg="CreateContainer within sandbox \"04d3a6b9d37175ee04671f5712be671822b4462995f47e1374975f4d5976f14f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 20:46:54.887056 containerd[1545]: time="2024-11-12T20:46:54.886415525Z" level=info msg="CreateContainer within sandbox \"04d3a6b9d37175ee04671f5712be671822b4462995f47e1374975f4d5976f14f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ad452aea23df096622dc95347a9e22075ca4cf3aaf8ecf3a45cfd80a03cfbd33\"" Nov 12 20:46:54.888289 containerd[1545]: time="2024-11-12T20:46:54.887279103Z" level=info msg="StartContainer for \"ad452aea23df096622dc95347a9e22075ca4cf3aaf8ecf3a45cfd80a03cfbd33\"" Nov 12 20:46:54.911313 systemd[1]: run-containerd-runc-k8s.io-ad452aea23df096622dc95347a9e22075ca4cf3aaf8ecf3a45cfd80a03cfbd33-runc.KycwBc.mount: Deactivated successfully. 
Nov 12 20:46:54.919711 systemd[1]: Started cri-containerd-ad452aea23df096622dc95347a9e22075ca4cf3aaf8ecf3a45cfd80a03cfbd33.scope - libcontainer container ad452aea23df096622dc95347a9e22075ca4cf3aaf8ecf3a45cfd80a03cfbd33. Nov 12 20:46:54.944009 containerd[1545]: time="2024-11-12T20:46:54.943974752Z" level=info msg="StartContainer for \"ad452aea23df096622dc95347a9e22075ca4cf3aaf8ecf3a45cfd80a03cfbd33\" returns successfully" Nov 12 20:46:55.113626 sshd[4559]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:55.116400 systemd[1]: sshd@8-139.178.70.104:22-139.178.68.195:32854.service: Deactivated successfully. Nov 12 20:46:55.117969 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 20:46:55.119258 systemd-logind[1522]: Session 11 logged out. Waiting for processes to exit. Nov 12 20:46:55.120240 systemd-logind[1522]: Removed session 11. Nov 12 20:46:56.690703 systemd[1]: cri-containerd-ad452aea23df096622dc95347a9e22075ca4cf3aaf8ecf3a45cfd80a03cfbd33.scope: Deactivated successfully. Nov 12 20:46:56.708668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad452aea23df096622dc95347a9e22075ca4cf3aaf8ecf3a45cfd80a03cfbd33-rootfs.mount: Deactivated successfully. 
Nov 12 20:46:56.710303 containerd[1545]: time="2024-11-12T20:46:56.710248594Z" level=info msg="shim disconnected" id=ad452aea23df096622dc95347a9e22075ca4cf3aaf8ecf3a45cfd80a03cfbd33 namespace=k8s.io Nov 12 20:46:56.710303 containerd[1545]: time="2024-11-12T20:46:56.710292620Z" level=warning msg="cleaning up after shim disconnected" id=ad452aea23df096622dc95347a9e22075ca4cf3aaf8ecf3a45cfd80a03cfbd33 namespace=k8s.io Nov 12 20:46:56.710303 containerd[1545]: time="2024-11-12T20:46:56.710301082Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:46:56.857311 containerd[1545]: time="2024-11-12T20:46:56.857133954Z" level=info msg="CreateContainer within sandbox \"04d3a6b9d37175ee04671f5712be671822b4462995f47e1374975f4d5976f14f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 20:46:56.910414 containerd[1545]: time="2024-11-12T20:46:56.910345638Z" level=info msg="CreateContainer within sandbox \"04d3a6b9d37175ee04671f5712be671822b4462995f47e1374975f4d5976f14f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"04d9c51de1cbe360d6ac37593c0e436732f667a925429d84ccfe055cf57f6b3b\"" Nov 12 20:46:56.910942 containerd[1545]: time="2024-11-12T20:46:56.910803870Z" level=info msg="StartContainer for \"04d9c51de1cbe360d6ac37593c0e436732f667a925429d84ccfe055cf57f6b3b\"" Nov 12 20:46:56.929693 systemd[1]: Started cri-containerd-04d9c51de1cbe360d6ac37593c0e436732f667a925429d84ccfe055cf57f6b3b.scope - libcontainer container 04d9c51de1cbe360d6ac37593c0e436732f667a925429d84ccfe055cf57f6b3b. 
Nov 12 20:46:57.028470 containerd[1545]: time="2024-11-12T20:46:57.028240436Z" level=info msg="StartContainer for \"04d9c51de1cbe360d6ac37593c0e436732f667a925429d84ccfe055cf57f6b3b\" returns successfully" Nov 12 20:46:57.837297 kubelet[2803]: I1112 20:46:57.836472 2803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-tpsz6" podStartSLOduration=4.836443523 podStartE2EDuration="4.836443523s" podCreationTimestamp="2024-11-12 20:46:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:46:57.836309136 +0000 UTC m=+87.494577820" watchObservedRunningTime="2024-11-12 20:46:57.836443523 +0000 UTC m=+87.494712179" Nov 12 20:46:58.477417 containerd[1545]: time="2024-11-12T20:46:58.476998162Z" level=info msg="StopPodSandbox for \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\"" Nov 12 20:46:58.477417 containerd[1545]: time="2024-11-12T20:46:58.476998219Z" level=info msg="StopPodSandbox for \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\"" Nov 12 20:46:58.481341 containerd[1545]: time="2024-11-12T20:46:58.477020259Z" level=info msg="StopPodSandbox for \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\"" Nov 12 20:46:58.632635 containerd[1545]: 2024-11-12 20:46:58.598 [INFO][4807] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Nov 12 20:46:58.632635 containerd[1545]: 2024-11-12 20:46:58.599 [INFO][4807] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" iface="eth0" netns="/var/run/netns/cni-659f9846-6935-8cee-2d8f-1b811da74d48" Nov 12 20:46:58.632635 containerd[1545]: 2024-11-12 20:46:58.599 [INFO][4807] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" iface="eth0" netns="/var/run/netns/cni-659f9846-6935-8cee-2d8f-1b811da74d48" Nov 12 20:46:58.632635 containerd[1545]: 2024-11-12 20:46:58.600 [INFO][4807] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" iface="eth0" netns="/var/run/netns/cni-659f9846-6935-8cee-2d8f-1b811da74d48" Nov 12 20:46:58.632635 containerd[1545]: 2024-11-12 20:46:58.600 [INFO][4807] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Nov 12 20:46:58.632635 containerd[1545]: 2024-11-12 20:46:58.600 [INFO][4807] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Nov 12 20:46:58.632635 containerd[1545]: 2024-11-12 20:46:58.624 [INFO][4825] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" HandleID="k8s-pod-network.754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Workload="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0" Nov 12 20:46:58.632635 containerd[1545]: 2024-11-12 20:46:58.624 [INFO][4825] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:58.632635 containerd[1545]: 2024-11-12 20:46:58.624 [INFO][4825] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:46:58.632635 containerd[1545]: 2024-11-12 20:46:58.628 [WARNING][4825] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" HandleID="k8s-pod-network.754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Workload="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0" Nov 12 20:46:58.632635 containerd[1545]: 2024-11-12 20:46:58.628 [INFO][4825] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" HandleID="k8s-pod-network.754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Workload="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0" Nov 12 20:46:58.632635 containerd[1545]: 2024-11-12 20:46:58.629 [INFO][4825] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:46:58.632635 containerd[1545]: 2024-11-12 20:46:58.630 [INFO][4807] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Nov 12 20:46:58.638897 containerd[1545]: time="2024-11-12T20:46:58.632825276Z" level=info msg="TearDown network for sandbox \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\" successfully" Nov 12 20:46:58.638897 containerd[1545]: time="2024-11-12T20:46:58.632844637Z" level=info msg="StopPodSandbox for \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\" returns successfully" Nov 12 20:46:58.638897 containerd[1545]: time="2024-11-12T20:46:58.633257865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d5bcd669-b7s55,Uid:4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:46:58.636766 systemd[1]: run-netns-cni\x2d659f9846\x2d6935\x2d8cee\x2d2d8f\x2d1b811da74d48.mount: Deactivated successfully. 
Nov 12 20:46:58.651003 containerd[1545]: 2024-11-12 20:46:58.604 [INFO][4806] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Nov 12 20:46:58.651003 containerd[1545]: 2024-11-12 20:46:58.605 [INFO][4806] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" iface="eth0" netns="/var/run/netns/cni-903f9095-bfbe-5ea2-4380-ee3b1034f699" Nov 12 20:46:58.651003 containerd[1545]: 2024-11-12 20:46:58.605 [INFO][4806] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" iface="eth0" netns="/var/run/netns/cni-903f9095-bfbe-5ea2-4380-ee3b1034f699" Nov 12 20:46:58.651003 containerd[1545]: 2024-11-12 20:46:58.605 [INFO][4806] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" iface="eth0" netns="/var/run/netns/cni-903f9095-bfbe-5ea2-4380-ee3b1034f699" Nov 12 20:46:58.651003 containerd[1545]: 2024-11-12 20:46:58.606 [INFO][4806] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Nov 12 20:46:58.651003 containerd[1545]: 2024-11-12 20:46:58.606 [INFO][4806] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Nov 12 20:46:58.651003 containerd[1545]: 2024-11-12 20:46:58.638 [INFO][4829] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" HandleID="k8s-pod-network.72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Workload="localhost-k8s-csi--node--driver--mr2hw-eth0" Nov 12 20:46:58.651003 containerd[1545]: 2024-11-12 20:46:58.638 [INFO][4829] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Nov 12 20:46:58.651003 containerd[1545]: 2024-11-12 20:46:58.638 [INFO][4829] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:46:58.651003 containerd[1545]: 2024-11-12 20:46:58.645 [WARNING][4829] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" HandleID="k8s-pod-network.72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Workload="localhost-k8s-csi--node--driver--mr2hw-eth0" Nov 12 20:46:58.651003 containerd[1545]: 2024-11-12 20:46:58.645 [INFO][4829] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" HandleID="k8s-pod-network.72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Workload="localhost-k8s-csi--node--driver--mr2hw-eth0" Nov 12 20:46:58.651003 containerd[1545]: 2024-11-12 20:46:58.647 [INFO][4829] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:46:58.651003 containerd[1545]: 2024-11-12 20:46:58.649 [INFO][4806] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Nov 12 20:46:58.651443 containerd[1545]: time="2024-11-12T20:46:58.651115657Z" level=info msg="TearDown network for sandbox \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\" successfully" Nov 12 20:46:58.651443 containerd[1545]: time="2024-11-12T20:46:58.651136952Z" level=info msg="StopPodSandbox for \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\" returns successfully" Nov 12 20:46:58.652167 containerd[1545]: time="2024-11-12T20:46:58.652083830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mr2hw,Uid:5475df33-a25f-4c6b-acfc-caf320cb59b1,Namespace:calico-system,Attempt:1,}" Nov 12 20:46:58.653887 systemd[1]: run-netns-cni\x2d903f9095\x2dbfbe\x2d5ea2\x2d4380\x2dee3b1034f699.mount: Deactivated successfully. Nov 12 20:46:58.665508 containerd[1545]: 2024-11-12 20:46:58.610 [INFO][4802] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Nov 12 20:46:58.665508 containerd[1545]: 2024-11-12 20:46:58.610 [INFO][4802] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" iface="eth0" netns="/var/run/netns/cni-3155c413-b356-f4f3-4be9-ec119bc26024" Nov 12 20:46:58.665508 containerd[1545]: 2024-11-12 20:46:58.611 [INFO][4802] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" iface="eth0" netns="/var/run/netns/cni-3155c413-b356-f4f3-4be9-ec119bc26024" Nov 12 20:46:58.665508 containerd[1545]: 2024-11-12 20:46:58.611 [INFO][4802] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" iface="eth0" netns="/var/run/netns/cni-3155c413-b356-f4f3-4be9-ec119bc26024" Nov 12 20:46:58.665508 containerd[1545]: 2024-11-12 20:46:58.611 [INFO][4802] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Nov 12 20:46:58.665508 containerd[1545]: 2024-11-12 20:46:58.611 [INFO][4802] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Nov 12 20:46:58.665508 containerd[1545]: 2024-11-12 20:46:58.644 [INFO][4833] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" HandleID="k8s-pod-network.9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Workload="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0" Nov 12 20:46:58.665508 containerd[1545]: 2024-11-12 20:46:58.644 [INFO][4833] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:58.665508 containerd[1545]: 2024-11-12 20:46:58.649 [INFO][4833] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:46:58.665508 containerd[1545]: 2024-11-12 20:46:58.660 [WARNING][4833] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" HandleID="k8s-pod-network.9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Workload="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0" Nov 12 20:46:58.665508 containerd[1545]: 2024-11-12 20:46:58.660 [INFO][4833] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" HandleID="k8s-pod-network.9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Workload="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0" Nov 12 20:46:58.665508 containerd[1545]: 2024-11-12 20:46:58.662 [INFO][4833] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:46:58.665508 containerd[1545]: 2024-11-12 20:46:58.664 [INFO][4802] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Nov 12 20:46:58.665933 containerd[1545]: time="2024-11-12T20:46:58.665587448Z" level=info msg="TearDown network for sandbox \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\" successfully" Nov 12 20:46:58.665933 containerd[1545]: time="2024-11-12T20:46:58.665607165Z" level=info msg="StopPodSandbox for \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\" returns successfully" Nov 12 20:46:58.666229 containerd[1545]: time="2024-11-12T20:46:58.666102644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d5bcd669-xs25j,Uid:7a262788-14a2-4491-b838-16c919aed65b,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:46:58.709903 systemd[1]: run-netns-cni\x2d3155c413\x2db356\x2df4f3\x2d4be9\x2dec119bc26024.mount: Deactivated successfully. 
Nov 12 20:46:58.823769 systemd-networkd[1465]: cali608b09d8ff9: Link UP Nov 12 20:46:58.823877 systemd-networkd[1465]: cali608b09d8ff9: Gained carrier Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.694 [INFO][4844] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.717 [INFO][4844] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0 calico-apiserver-55d5bcd669- calico-apiserver 4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c 984 0 2024-11-12 20:45:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55d5bcd669 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55d5bcd669-b7s55 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali608b09d8ff9 [] []}} ContainerID="2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3" Namespace="calico-apiserver" Pod="calico-apiserver-55d5bcd669-b7s55" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-" Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.717 [INFO][4844] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3" Namespace="calico-apiserver" Pod="calico-apiserver-55d5bcd669-b7s55" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0" Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.735 [INFO][4855] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3" HandleID="k8s-pod-network.2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3" 
Workload="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0" Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.742 [INFO][4855] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3" HandleID="k8s-pod-network.2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3" Workload="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55d5bcd669-b7s55", "timestamp":"2024-11-12 20:46:58.735850159 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.742 [INFO][4855] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.742 [INFO][4855] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.742 [INFO][4855] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.744 [INFO][4855] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3" host="localhost" Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.749 [INFO][4855] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.753 [INFO][4855] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.759 [INFO][4855] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.760 [INFO][4855] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.760 [INFO][4855] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3" host="localhost" Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.761 [INFO][4855] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3 Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.776 [INFO][4855] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3" host="localhost" Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.782 [INFO][4855] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3" host="localhost" Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.782 [INFO][4855] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3" host="localhost" Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.782 [INFO][4855] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:46:58.836605 containerd[1545]: 2024-11-12 20:46:58.782 [INFO][4855] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3" HandleID="k8s-pod-network.2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3" Workload="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0" Nov 12 20:46:58.837043 containerd[1545]: 2024-11-12 20:46:58.786 [INFO][4844] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3" Namespace="calico-apiserver" Pod="calico-apiserver-55d5bcd669-b7s55" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0", GenerateName:"calico-apiserver-55d5bcd669-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d5bcd669", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55d5bcd669-b7s55", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali608b09d8ff9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:58.837043 containerd[1545]: 2024-11-12 20:46:58.786 [INFO][4844] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3" Namespace="calico-apiserver" Pod="calico-apiserver-55d5bcd669-b7s55" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0" Nov 12 20:46:58.837043 containerd[1545]: 2024-11-12 20:46:58.787 [INFO][4844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali608b09d8ff9 ContainerID="2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3" Namespace="calico-apiserver" Pod="calico-apiserver-55d5bcd669-b7s55" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0" Nov 12 20:46:58.837043 containerd[1545]: 2024-11-12 20:46:58.816 [INFO][4844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3" Namespace="calico-apiserver" Pod="calico-apiserver-55d5bcd669-b7s55" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0" Nov 12 20:46:58.837043 containerd[1545]: 2024-11-12 20:46:58.817 [INFO][4844] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3" Namespace="calico-apiserver" Pod="calico-apiserver-55d5bcd669-b7s55" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0", GenerateName:"calico-apiserver-55d5bcd669-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d5bcd669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3", Pod:"calico-apiserver-55d5bcd669-b7s55", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali608b09d8ff9", MAC:"5a:16:d4:b6:6e:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:58.837043 containerd[1545]: 2024-11-12 20:46:58.835 [INFO][4844] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3" Namespace="calico-apiserver" Pod="calico-apiserver-55d5bcd669-b7s55" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0" Nov 12 20:46:58.881092 systemd-networkd[1465]: cali3ee0f2f150a: Link UP Nov 12 20:46:58.881722 systemd-networkd[1465]: cali3ee0f2f150a: Gained carrier Nov 12 20:46:58.886482 containerd[1545]: time="2024-11-12T20:46:58.886413315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:46:58.886659 containerd[1545]: time="2024-11-12T20:46:58.886470707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:46:58.886659 containerd[1545]: time="2024-11-12T20:46:58.886482652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:58.886659 containerd[1545]: time="2024-11-12T20:46:58.886541309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:58.901690 systemd[1]: Started cri-containerd-2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3.scope - libcontainer container 2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3. 
Nov 12 20:46:58.914865 systemd-resolved[1417]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:46:58.938914 containerd[1545]: 2024-11-12 20:46:58.763 [INFO][4861] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 20:46:58.938914 containerd[1545]: 2024-11-12 20:46:58.780 [INFO][4861] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--mr2hw-eth0 csi-node-driver- calico-system 5475df33-a25f-4c6b-acfc-caf320cb59b1 985 0 2024-11-12 20:45:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:64dd8495dc k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-mr2hw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3ee0f2f150a [] []}} ContainerID="ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100" Namespace="calico-system" Pod="csi-node-driver-mr2hw" WorkloadEndpoint="localhost-k8s-csi--node--driver--mr2hw-" Nov 12 20:46:58.938914 containerd[1545]: 2024-11-12 20:46:58.780 [INFO][4861] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100" Namespace="calico-system" Pod="csi-node-driver-mr2hw" WorkloadEndpoint="localhost-k8s-csi--node--driver--mr2hw-eth0" Nov 12 20:46:58.938914 containerd[1545]: 2024-11-12 20:46:58.806 [INFO][4884] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100" HandleID="k8s-pod-network.ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100" Workload="localhost-k8s-csi--node--driver--mr2hw-eth0" Nov 12 20:46:58.938914 containerd[1545]: 
2024-11-12 20:46:58.834 [INFO][4884] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100" HandleID="k8s-pod-network.ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100" Workload="localhost-k8s-csi--node--driver--mr2hw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b440), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-mr2hw", "timestamp":"2024-11-12 20:46:58.806717792 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:46:58.938914 containerd[1545]: 2024-11-12 20:46:58.834 [INFO][4884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:58.938914 containerd[1545]: 2024-11-12 20:46:58.835 [INFO][4884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:46:58.938914 containerd[1545]: 2024-11-12 20:46:58.835 [INFO][4884] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:46:58.938914 containerd[1545]: 2024-11-12 20:46:58.836 [INFO][4884] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100" host="localhost" Nov 12 20:46:58.938914 containerd[1545]: 2024-11-12 20:46:58.840 [INFO][4884] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:46:58.938914 containerd[1545]: 2024-11-12 20:46:58.845 [INFO][4884] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:46:58.938914 containerd[1545]: 2024-11-12 20:46:58.847 [INFO][4884] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:46:58.938914 containerd[1545]: 2024-11-12 20:46:58.848 [INFO][4884] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:46:58.938914 containerd[1545]: 2024-11-12 20:46:58.848 [INFO][4884] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100" host="localhost" Nov 12 20:46:58.938914 containerd[1545]: 2024-11-12 20:46:58.849 [INFO][4884] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100 Nov 12 20:46:58.938914 containerd[1545]: 2024-11-12 20:46:58.861 [INFO][4884] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100" host="localhost" Nov 12 20:46:58.938914 containerd[1545]: 2024-11-12 20:46:58.866 [INFO][4884] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100" host="localhost" Nov 12 20:46:58.938914 containerd[1545]: 2024-11-12 20:46:58.866 [INFO][4884] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100" host="localhost" Nov 12 20:46:58.938914 containerd[1545]: 2024-11-12 20:46:58.866 [INFO][4884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:46:58.938914 containerd[1545]: 2024-11-12 20:46:58.866 [INFO][4884] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100" HandleID="k8s-pod-network.ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100" Workload="localhost-k8s-csi--node--driver--mr2hw-eth0" Nov 12 20:46:58.952837 containerd[1545]: 2024-11-12 20:46:58.870 [INFO][4861] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100" Namespace="calico-system" Pod="csi-node-driver-mr2hw" WorkloadEndpoint="localhost-k8s-csi--node--driver--mr2hw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mr2hw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5475df33-a25f-4c6b-acfc-caf320cb59b1", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-mr2hw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3ee0f2f150a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:58.952837 containerd[1545]: 2024-11-12 20:46:58.877 [INFO][4861] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100" Namespace="calico-system" Pod="csi-node-driver-mr2hw" WorkloadEndpoint="localhost-k8s-csi--node--driver--mr2hw-eth0" Nov 12 20:46:58.952837 containerd[1545]: 2024-11-12 20:46:58.877 [INFO][4861] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ee0f2f150a ContainerID="ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100" Namespace="calico-system" Pod="csi-node-driver-mr2hw" WorkloadEndpoint="localhost-k8s-csi--node--driver--mr2hw-eth0" Nov 12 20:46:58.952837 containerd[1545]: 2024-11-12 20:46:58.881 [INFO][4861] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100" Namespace="calico-system" Pod="csi-node-driver-mr2hw" WorkloadEndpoint="localhost-k8s-csi--node--driver--mr2hw-eth0" Nov 12 20:46:58.952837 containerd[1545]: 2024-11-12 20:46:58.882 [INFO][4861] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100" Namespace="calico-system" 
Pod="csi-node-driver-mr2hw" WorkloadEndpoint="localhost-k8s-csi--node--driver--mr2hw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mr2hw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5475df33-a25f-4c6b-acfc-caf320cb59b1", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100", Pod:"csi-node-driver-mr2hw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3ee0f2f150a", MAC:"f6:07:a8:e2:ca:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:58.952837 containerd[1545]: 2024-11-12 20:46:58.928 [INFO][4861] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100" Namespace="calico-system" Pod="csi-node-driver-mr2hw" WorkloadEndpoint="localhost-k8s-csi--node--driver--mr2hw-eth0" Nov 12 20:46:58.952837 containerd[1545]: 
time="2024-11-12T20:46:58.943690816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d5bcd669-b7s55,Uid:4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3\"" Nov 12 20:46:58.952837 containerd[1545]: time="2024-11-12T20:46:58.945114799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:46:58.940206 systemd-networkd[1465]: calibea72b4365e: Link UP Nov 12 20:46:58.941524 systemd-networkd[1465]: calibea72b4365e: Gained carrier Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.779 [INFO][4872] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.787 [INFO][4872] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0 calico-apiserver-55d5bcd669- calico-apiserver 7a262788-14a2-4491-b838-16c919aed65b 986 0 2024-11-12 20:45:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55d5bcd669 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55d5bcd669-xs25j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibea72b4365e [] []}} ContainerID="512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83" Namespace="calico-apiserver" Pod="calico-apiserver-55d5bcd669-xs25j" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-" Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.787 [INFO][4872] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83" Namespace="calico-apiserver" 
Pod="calico-apiserver-55d5bcd669-xs25j" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0" Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.820 [INFO][4889] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83" HandleID="k8s-pod-network.512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83" Workload="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0" Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.838 [INFO][4889] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83" HandleID="k8s-pod-network.512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83" Workload="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003198f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55d5bcd669-xs25j", "timestamp":"2024-11-12 20:46:58.820815587 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.839 [INFO][4889] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.866 [INFO][4889] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.866 [INFO][4889] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.868 [INFO][4889] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83" host="localhost" Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.871 [INFO][4889] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.873 [INFO][4889] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.874 [INFO][4889] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.876 [INFO][4889] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.876 [INFO][4889] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83" host="localhost" Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.877 [INFO][4889] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83 Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.888 [INFO][4889] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83" host="localhost" Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.906 [INFO][4889] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83" host="localhost" Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.906 [INFO][4889] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83" host="localhost" Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.906 [INFO][4889] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:46:58.966624 containerd[1545]: 2024-11-12 20:46:58.906 [INFO][4889] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83" HandleID="k8s-pod-network.512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83" Workload="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0" Nov 12 20:46:58.967826 containerd[1545]: 2024-11-12 20:46:58.933 [INFO][4872] cni-plugin/k8s.go 386: Populated endpoint ContainerID="512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83" Namespace="calico-apiserver" Pod="calico-apiserver-55d5bcd669-xs25j" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0", GenerateName:"calico-apiserver-55d5bcd669-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a262788-14a2-4491-b838-16c919aed65b", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d5bcd669", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55d5bcd669-xs25j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibea72b4365e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:58.967826 containerd[1545]: 2024-11-12 20:46:58.933 [INFO][4872] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83" Namespace="calico-apiserver" Pod="calico-apiserver-55d5bcd669-xs25j" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0" Nov 12 20:46:58.967826 containerd[1545]: 2024-11-12 20:46:58.933 [INFO][4872] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibea72b4365e ContainerID="512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83" Namespace="calico-apiserver" Pod="calico-apiserver-55d5bcd669-xs25j" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0" Nov 12 20:46:58.967826 containerd[1545]: 2024-11-12 20:46:58.943 [INFO][4872] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83" Namespace="calico-apiserver" Pod="calico-apiserver-55d5bcd669-xs25j" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0" Nov 12 20:46:58.967826 containerd[1545]: 2024-11-12 20:46:58.945 [INFO][4872] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83" Namespace="calico-apiserver" Pod="calico-apiserver-55d5bcd669-xs25j" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0", GenerateName:"calico-apiserver-55d5bcd669-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a262788-14a2-4491-b838-16c919aed65b", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d5bcd669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83", Pod:"calico-apiserver-55d5bcd669-xs25j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibea72b4365e", MAC:"72:0e:a8:0f:5d:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:58.967826 containerd[1545]: 2024-11-12 20:46:58.957 [INFO][4872] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83" Namespace="calico-apiserver" Pod="calico-apiserver-55d5bcd669-xs25j" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0" Nov 12 20:46:58.977801 containerd[1545]: time="2024-11-12T20:46:58.976898252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:46:58.977801 containerd[1545]: time="2024-11-12T20:46:58.976930629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:46:58.977801 containerd[1545]: time="2024-11-12T20:46:58.976937631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:58.977801 containerd[1545]: time="2024-11-12T20:46:58.976987120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:58.990573 containerd[1545]: time="2024-11-12T20:46:58.990348734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:46:58.990573 containerd[1545]: time="2024-11-12T20:46:58.990387294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:46:58.990573 containerd[1545]: time="2024-11-12T20:46:58.990394484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:58.990573 containerd[1545]: time="2024-11-12T20:46:58.990454703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:58.991689 systemd[1]: Started cri-containerd-ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100.scope - libcontainer container ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100. Nov 12 20:46:59.001819 systemd-resolved[1417]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:46:59.004688 systemd[1]: Started cri-containerd-512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83.scope - libcontainer container 512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83. Nov 12 20:46:59.012263 containerd[1545]: time="2024-11-12T20:46:59.012238238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mr2hw,Uid:5475df33-a25f-4c6b-acfc-caf320cb59b1,Namespace:calico-system,Attempt:1,} returns sandbox id \"ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100\"" Nov 12 20:46:59.017965 systemd-resolved[1417]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:46:59.048119 containerd[1545]: time="2024-11-12T20:46:59.047920607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d5bcd669-xs25j,Uid:7a262788-14a2-4491-b838-16c919aed65b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83\"" Nov 12 20:46:59.440136 kernel: bpftool[5191]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 20:46:59.854649 systemd-networkd[1465]: vxlan.calico: Link UP Nov 12 20:46:59.854654 systemd-networkd[1465]: vxlan.calico: Gained carrier Nov 12 20:47:00.095628 systemd-networkd[1465]: cali3ee0f2f150a: Gained IPv6LL Nov 12 20:47:00.120142 systemd[1]: Started sshd@9-139.178.70.104:22-139.178.68.195:54808.service - OpenSSH per-connection server daemon (139.178.68.195:54808). 
Nov 12 20:47:00.160398 systemd-networkd[1465]: calibea72b4365e: Gained IPv6LL Nov 12 20:47:00.215403 sshd[5251]: Accepted publickey for core from 139.178.68.195 port 54808 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:47:00.215940 sshd[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:00.219381 systemd-logind[1522]: New session 12 of user core. Nov 12 20:47:00.225661 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 20:47:00.479734 systemd-networkd[1465]: cali608b09d8ff9: Gained IPv6LL Nov 12 20:47:00.769433 sshd[5251]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:00.776784 systemd[1]: sshd@9-139.178.70.104:22-139.178.68.195:54808.service: Deactivated successfully. Nov 12 20:47:00.778050 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 20:47:00.779065 systemd-logind[1522]: Session 12 logged out. Waiting for processes to exit. Nov 12 20:47:00.783927 systemd[1]: Started sshd@10-139.178.70.104:22-139.178.68.195:54812.service - OpenSSH per-connection server daemon (139.178.68.195:54812). Nov 12 20:47:00.785926 systemd-logind[1522]: Removed session 12. Nov 12 20:47:00.820670 sshd[5294]: Accepted publickey for core from 139.178.68.195 port 54812 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:47:00.821488 sshd[5294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:00.825196 systemd-logind[1522]: New session 13 of user core. Nov 12 20:47:00.830669 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 20:47:01.003734 sshd[5294]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:01.011902 systemd[1]: sshd@10-139.178.70.104:22-139.178.68.195:54812.service: Deactivated successfully. Nov 12 20:47:01.014278 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 20:47:01.015953 systemd-logind[1522]: Session 13 logged out. 
Waiting for processes to exit. Nov 12 20:47:01.023304 systemd[1]: Started sshd@11-139.178.70.104:22-139.178.68.195:54824.service - OpenSSH per-connection server daemon (139.178.68.195:54824). Nov 12 20:47:01.024275 systemd-logind[1522]: Removed session 13. Nov 12 20:47:01.083122 sshd[5305]: Accepted publickey for core from 139.178.68.195 port 54824 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:47:01.084342 sshd[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:01.088877 systemd-logind[1522]: New session 14 of user core. Nov 12 20:47:01.093694 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 20:47:01.272152 sshd[5305]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:01.287984 systemd[1]: sshd@11-139.178.70.104:22-139.178.68.195:54824.service: Deactivated successfully. Nov 12 20:47:01.289292 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 20:47:01.290124 systemd-logind[1522]: Session 14 logged out. Waiting for processes to exit. Nov 12 20:47:01.291043 systemd-logind[1522]: Removed session 14. 
Nov 12 20:47:01.383745 containerd[1545]: time="2024-11-12T20:47:01.331309278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 12 20:47:01.398777 containerd[1545]: time="2024-11-12T20:47:01.398734484Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 2.446609855s" Nov 12 20:47:01.398983 containerd[1545]: time="2024-11-12T20:47:01.398901735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:47:01.402891 containerd[1545]: time="2024-11-12T20:47:01.402661371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:01.411078 containerd[1545]: time="2024-11-12T20:47:01.410432579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 20:47:01.417976 containerd[1545]: time="2024-11-12T20:47:01.417952980Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:01.424167 containerd[1545]: time="2024-11-12T20:47:01.423914609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:01.427003 containerd[1545]: time="2024-11-12T20:47:01.426986921Z" level=info msg="CreateContainer within sandbox 
\"2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:47:01.479057 containerd[1545]: time="2024-11-12T20:47:01.479034733Z" level=info msg="StopPodSandbox for \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\"" Nov 12 20:47:01.493434 containerd[1545]: time="2024-11-12T20:47:01.493404826Z" level=info msg="CreateContainer within sandbox \"2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"282fe92555a6d10b747920dd41de985f381465edcda91271f380c4b3116eeed4\"" Nov 12 20:47:01.494604 containerd[1545]: time="2024-11-12T20:47:01.494591092Z" level=info msg="StartContainer for \"282fe92555a6d10b747920dd41de985f381465edcda91271f380c4b3116eeed4\"" Nov 12 20:47:01.674702 systemd[1]: run-containerd-runc-k8s.io-282fe92555a6d10b747920dd41de985f381465edcda91271f380c4b3116eeed4-runc.BrOOTY.mount: Deactivated successfully. Nov 12 20:47:01.679776 systemd[1]: Started cri-containerd-282fe92555a6d10b747920dd41de985f381465edcda91271f380c4b3116eeed4.scope - libcontainer container 282fe92555a6d10b747920dd41de985f381465edcda91271f380c4b3116eeed4. Nov 12 20:47:01.715229 containerd[1545]: 2024-11-12 20:47:01.676 [INFO][5337] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Nov 12 20:47:01.715229 containerd[1545]: 2024-11-12 20:47:01.678 [INFO][5337] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" iface="eth0" netns="/var/run/netns/cni-30712d9c-559b-7179-014a-652e6e475d3e" Nov 12 20:47:01.715229 containerd[1545]: 2024-11-12 20:47:01.678 [INFO][5337] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" iface="eth0" netns="/var/run/netns/cni-30712d9c-559b-7179-014a-652e6e475d3e" Nov 12 20:47:01.715229 containerd[1545]: 2024-11-12 20:47:01.678 [INFO][5337] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" iface="eth0" netns="/var/run/netns/cni-30712d9c-559b-7179-014a-652e6e475d3e" Nov 12 20:47:01.715229 containerd[1545]: 2024-11-12 20:47:01.678 [INFO][5337] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Nov 12 20:47:01.715229 containerd[1545]: 2024-11-12 20:47:01.678 [INFO][5337] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Nov 12 20:47:01.715229 containerd[1545]: 2024-11-12 20:47:01.701 [INFO][5358] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" HandleID="k8s-pod-network.7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Workload="localhost-k8s-coredns--76f75df574--jx2wl-eth0" Nov 12 20:47:01.715229 containerd[1545]: 2024-11-12 20:47:01.702 [INFO][5358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:01.715229 containerd[1545]: 2024-11-12 20:47:01.702 [INFO][5358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:01.715229 containerd[1545]: 2024-11-12 20:47:01.708 [WARNING][5358] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" HandleID="k8s-pod-network.7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Workload="localhost-k8s-coredns--76f75df574--jx2wl-eth0" Nov 12 20:47:01.715229 containerd[1545]: 2024-11-12 20:47:01.708 [INFO][5358] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" HandleID="k8s-pod-network.7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Workload="localhost-k8s-coredns--76f75df574--jx2wl-eth0" Nov 12 20:47:01.715229 containerd[1545]: 2024-11-12 20:47:01.712 [INFO][5358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:01.715229 containerd[1545]: 2024-11-12 20:47:01.714 [INFO][5337] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Nov 12 20:47:01.716955 containerd[1545]: time="2024-11-12T20:47:01.716872025Z" level=info msg="TearDown network for sandbox \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\" successfully" Nov 12 20:47:01.716955 containerd[1545]: time="2024-11-12T20:47:01.716894841Z" level=info msg="StopPodSandbox for \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\" returns successfully" Nov 12 20:47:01.717607 containerd[1545]: time="2024-11-12T20:47:01.717436679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jx2wl,Uid:2cb5bf3f-a6f0-4cca-8e64-700b92fbd244,Namespace:kube-system,Attempt:1,}" Nov 12 20:47:01.719868 systemd[1]: run-netns-cni\x2d30712d9c\x2d559b\x2d7179\x2d014a\x2d652e6e475d3e.mount: Deactivated successfully. 
Nov 12 20:47:01.739644 containerd[1545]: time="2024-11-12T20:47:01.739598782Z" level=info msg="StartContainer for \"282fe92555a6d10b747920dd41de985f381465edcda91271f380c4b3116eeed4\" returns successfully" Nov 12 20:47:01.760699 systemd-networkd[1465]: vxlan.calico: Gained IPv6LL Nov 12 20:47:01.882087 systemd-networkd[1465]: calic78ff469fdb: Link UP Nov 12 20:47:01.882488 systemd-networkd[1465]: calic78ff469fdb: Gained carrier Nov 12 20:47:01.908294 kubelet[2803]: I1112 20:47:01.908268 2803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55d5bcd669-b7s55" podStartSLOduration=69.453748417 podStartE2EDuration="1m11.908073579s" podCreationTimestamp="2024-11-12 20:45:50 +0000 UTC" firstStartedPulling="2024-11-12 20:46:58.944859141 +0000 UTC m=+88.603127795" lastFinishedPulling="2024-11-12 20:47:01.399184301 +0000 UTC m=+91.057452957" observedRunningTime="2024-11-12 20:47:01.885900913 +0000 UTC m=+91.544169571" watchObservedRunningTime="2024-11-12 20:47:01.908073579 +0000 UTC m=+91.566342238" Nov 12 20:47:01.910896 containerd[1545]: 2024-11-12 20:47:01.782 [INFO][5384] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--jx2wl-eth0 coredns-76f75df574- kube-system 2cb5bf3f-a6f0-4cca-8e64-700b92fbd244 1032 0 2024-11-12 20:45:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-jx2wl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic78ff469fdb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e" Namespace="kube-system" Pod="coredns-76f75df574-jx2wl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jx2wl-" Nov 12 20:47:01.910896 containerd[1545]: 
2024-11-12 20:47:01.783 [INFO][5384] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e" Namespace="kube-system" Pod="coredns-76f75df574-jx2wl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jx2wl-eth0" Nov 12 20:47:01.910896 containerd[1545]: 2024-11-12 20:47:01.801 [INFO][5396] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e" HandleID="k8s-pod-network.b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e" Workload="localhost-k8s-coredns--76f75df574--jx2wl-eth0" Nov 12 20:47:01.910896 containerd[1545]: 2024-11-12 20:47:01.807 [INFO][5396] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e" HandleID="k8s-pod-network.b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e" Workload="localhost-k8s-coredns--76f75df574--jx2wl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-jx2wl", "timestamp":"2024-11-12 20:47:01.801014583 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:47:01.910896 containerd[1545]: 2024-11-12 20:47:01.807 [INFO][5396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:01.910896 containerd[1545]: 2024-11-12 20:47:01.807 [INFO][5396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:47:01.910896 containerd[1545]: 2024-11-12 20:47:01.807 [INFO][5396] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:47:01.910896 containerd[1545]: 2024-11-12 20:47:01.809 [INFO][5396] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e" host="localhost" Nov 12 20:47:01.910896 containerd[1545]: 2024-11-12 20:47:01.811 [INFO][5396] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:47:01.910896 containerd[1545]: 2024-11-12 20:47:01.814 [INFO][5396] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:47:01.910896 containerd[1545]: 2024-11-12 20:47:01.815 [INFO][5396] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:47:01.910896 containerd[1545]: 2024-11-12 20:47:01.821 [INFO][5396] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:47:01.910896 containerd[1545]: 2024-11-12 20:47:01.821 [INFO][5396] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e" host="localhost" Nov 12 20:47:01.910896 containerd[1545]: 2024-11-12 20:47:01.826 [INFO][5396] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e Nov 12 20:47:01.910896 containerd[1545]: 2024-11-12 20:47:01.848 [INFO][5396] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e" host="localhost" Nov 12 20:47:01.910896 containerd[1545]: 2024-11-12 20:47:01.879 [INFO][5396] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e" host="localhost" Nov 12 20:47:01.910896 containerd[1545]: 2024-11-12 20:47:01.879 [INFO][5396] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e" host="localhost" Nov 12 20:47:01.910896 containerd[1545]: 2024-11-12 20:47:01.879 [INFO][5396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:01.910896 containerd[1545]: 2024-11-12 20:47:01.879 [INFO][5396] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e" HandleID="k8s-pod-network.b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e" Workload="localhost-k8s-coredns--76f75df574--jx2wl-eth0" Nov 12 20:47:01.912417 containerd[1545]: 2024-11-12 20:47:01.880 [INFO][5384] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e" Namespace="kube-system" Pod="coredns-76f75df574-jx2wl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jx2wl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--jx2wl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2cb5bf3f-a6f0-4cca-8e64-700b92fbd244", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-jx2wl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic78ff469fdb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:01.912417 containerd[1545]: 2024-11-12 20:47:01.880 [INFO][5384] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e" Namespace="kube-system" Pod="coredns-76f75df574-jx2wl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jx2wl-eth0" Nov 12 20:47:01.912417 containerd[1545]: 2024-11-12 20:47:01.881 [INFO][5384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic78ff469fdb ContainerID="b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e" Namespace="kube-system" Pod="coredns-76f75df574-jx2wl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jx2wl-eth0" Nov 12 20:47:01.912417 containerd[1545]: 2024-11-12 20:47:01.882 [INFO][5384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e" Namespace="kube-system" Pod="coredns-76f75df574-jx2wl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jx2wl-eth0" Nov 12 
20:47:01.912417 containerd[1545]: 2024-11-12 20:47:01.883 [INFO][5384] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e" Namespace="kube-system" Pod="coredns-76f75df574-jx2wl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jx2wl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--jx2wl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2cb5bf3f-a6f0-4cca-8e64-700b92fbd244", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e", Pod:"coredns-76f75df574-jx2wl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic78ff469fdb", MAC:"4e:63:9b:76:30:41", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:01.912417 containerd[1545]: 2024-11-12 20:47:01.908 [INFO][5384] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e" Namespace="kube-system" Pod="coredns-76f75df574-jx2wl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jx2wl-eth0" Nov 12 20:47:01.946007 containerd[1545]: time="2024-11-12T20:47:01.945466839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:47:01.946007 containerd[1545]: time="2024-11-12T20:47:01.945933339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:47:01.946953 containerd[1545]: time="2024-11-12T20:47:01.945986825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:47:01.947124 containerd[1545]: time="2024-11-12T20:47:01.947073258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:47:01.963670 systemd[1]: Started cri-containerd-b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e.scope - libcontainer container b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e. 
Nov 12 20:47:01.972885 systemd-resolved[1417]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:47:02.000566 containerd[1545]: time="2024-11-12T20:47:02.000325104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jx2wl,Uid:2cb5bf3f-a6f0-4cca-8e64-700b92fbd244,Namespace:kube-system,Attempt:1,} returns sandbox id \"b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e\"" Nov 12 20:47:02.008562 containerd[1545]: time="2024-11-12T20:47:02.008405762Z" level=info msg="CreateContainer within sandbox \"b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:47:02.665500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount806329894.mount: Deactivated successfully. Nov 12 20:47:02.671132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1211107843.mount: Deactivated successfully. Nov 12 20:47:02.672665 containerd[1545]: time="2024-11-12T20:47:02.672642967Z" level=info msg="CreateContainer within sandbox \"b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c9b4803b4f101111c6a846fac492d741857d84f69bc6677f7ffb7c9b05fc082\"" Nov 12 20:47:02.674368 containerd[1545]: time="2024-11-12T20:47:02.673092213Z" level=info msg="StartContainer for \"2c9b4803b4f101111c6a846fac492d741857d84f69bc6677f7ffb7c9b05fc082\"" Nov 12 20:47:02.699670 systemd[1]: Started cri-containerd-2c9b4803b4f101111c6a846fac492d741857d84f69bc6677f7ffb7c9b05fc082.scope - libcontainer container 2c9b4803b4f101111c6a846fac492d741857d84f69bc6677f7ffb7c9b05fc082. 
Nov 12 20:47:02.722566 containerd[1545]: time="2024-11-12T20:47:02.722225694Z" level=info msg="StartContainer for \"2c9b4803b4f101111c6a846fac492d741857d84f69bc6677f7ffb7c9b05fc082\" returns successfully" Nov 12 20:47:03.039738 systemd-networkd[1465]: calic78ff469fdb: Gained IPv6LL Nov 12 20:47:03.281010 containerd[1545]: time="2024-11-12T20:47:03.280976441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:03.286156 containerd[1545]: time="2024-11-12T20:47:03.286127730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635" Nov 12 20:47:03.291796 containerd[1545]: time="2024-11-12T20:47:03.291745693Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:03.303050 containerd[1545]: time="2024-11-12T20:47:03.302999462Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 1.892536375s" Nov 12 20:47:03.303050 containerd[1545]: time="2024-11-12T20:47:03.303030305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\"" Nov 12 20:47:03.303178 containerd[1545]: time="2024-11-12T20:47:03.303012767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:03.303746 containerd[1545]: time="2024-11-12T20:47:03.303636412Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:47:03.316771 containerd[1545]: time="2024-11-12T20:47:03.316701887Z" level=info msg="CreateContainer within sandbox \"ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 20:47:03.372586 containerd[1545]: time="2024-11-12T20:47:03.372555722Z" level=info msg="CreateContainer within sandbox \"ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a11839259a556c824052dd786b329681a818da3ae38800f4467e54003ffe2381\"" Nov 12 20:47:03.372966 containerd[1545]: time="2024-11-12T20:47:03.372887243Z" level=info msg="StartContainer for \"a11839259a556c824052dd786b329681a818da3ae38800f4467e54003ffe2381\"" Nov 12 20:47:03.393761 systemd[1]: Started cri-containerd-a11839259a556c824052dd786b329681a818da3ae38800f4467e54003ffe2381.scope - libcontainer container a11839259a556c824052dd786b329681a818da3ae38800f4467e54003ffe2381. 
Nov 12 20:47:03.418514 containerd[1545]: time="2024-11-12T20:47:03.418445183Z" level=info msg="StartContainer for \"a11839259a556c824052dd786b329681a818da3ae38800f4467e54003ffe2381\" returns successfully" Nov 12 20:47:03.762373 containerd[1545]: time="2024-11-12T20:47:03.762336777Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:03.773239 containerd[1545]: time="2024-11-12T20:47:03.773067650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 12 20:47:03.782192 containerd[1545]: time="2024-11-12T20:47:03.775188169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 471.535453ms" Nov 12 20:47:03.782192 containerd[1545]: time="2024-11-12T20:47:03.775205593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:47:03.782192 containerd[1545]: time="2024-11-12T20:47:03.775523092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 20:47:03.787892 containerd[1545]: time="2024-11-12T20:47:03.787820837Z" level=info msg="CreateContainer within sandbox \"512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:47:03.883469 containerd[1545]: time="2024-11-12T20:47:03.883374432Z" level=info msg="CreateContainer within sandbox \"512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5092d6a6199d05292646f97cf2741b2ee62a4721e7bf054f83187c351dbafd1e\"" Nov 12 20:47:03.884307 containerd[1545]: time="2024-11-12T20:47:03.884290719Z" level=info msg="StartContainer for \"5092d6a6199d05292646f97cf2741b2ee62a4721e7bf054f83187c351dbafd1e\"" Nov 12 20:47:03.915676 systemd[1]: Started cri-containerd-5092d6a6199d05292646f97cf2741b2ee62a4721e7bf054f83187c351dbafd1e.scope - libcontainer container 5092d6a6199d05292646f97cf2741b2ee62a4721e7bf054f83187c351dbafd1e. Nov 12 20:47:03.923182 kubelet[2803]: I1112 20:47:03.923161 2803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-jx2wl" podStartSLOduration=79.923133016 podStartE2EDuration="1m19.923133016s" podCreationTimestamp="2024-11-12 20:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:47:02.869801 +0000 UTC m=+92.528069683" watchObservedRunningTime="2024-11-12 20:47:03.923133016 +0000 UTC m=+93.581401681" Nov 12 20:47:03.961123 containerd[1545]: time="2024-11-12T20:47:03.961045071Z" level=info msg="StartContainer for \"5092d6a6199d05292646f97cf2741b2ee62a4721e7bf054f83187c351dbafd1e\" returns successfully" Nov 12 20:47:04.476316 containerd[1545]: time="2024-11-12T20:47:04.476101449Z" level=info msg="StopPodSandbox for \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\"" Nov 12 20:47:04.556156 containerd[1545]: 2024-11-12 20:47:04.535 [INFO][5590] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Nov 12 20:47:04.556156 containerd[1545]: 2024-11-12 20:47:04.535 [INFO][5590] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" iface="eth0" netns="/var/run/netns/cni-9593b6d2-ac9e-240b-4a17-04e39502ed33" Nov 12 20:47:04.556156 containerd[1545]: 2024-11-12 20:47:04.535 [INFO][5590] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" iface="eth0" netns="/var/run/netns/cni-9593b6d2-ac9e-240b-4a17-04e39502ed33" Nov 12 20:47:04.556156 containerd[1545]: 2024-11-12 20:47:04.536 [INFO][5590] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" iface="eth0" netns="/var/run/netns/cni-9593b6d2-ac9e-240b-4a17-04e39502ed33" Nov 12 20:47:04.556156 containerd[1545]: 2024-11-12 20:47:04.536 [INFO][5590] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Nov 12 20:47:04.556156 containerd[1545]: 2024-11-12 20:47:04.536 [INFO][5590] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Nov 12 20:47:04.556156 containerd[1545]: 2024-11-12 20:47:04.549 [INFO][5596] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" HandleID="k8s-pod-network.a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Workload="localhost-k8s-coredns--76f75df574--q4n22-eth0" Nov 12 20:47:04.556156 containerd[1545]: 2024-11-12 20:47:04.549 [INFO][5596] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:04.556156 containerd[1545]: 2024-11-12 20:47:04.549 [INFO][5596] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:04.556156 containerd[1545]: 2024-11-12 20:47:04.552 [WARNING][5596] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" HandleID="k8s-pod-network.a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Workload="localhost-k8s-coredns--76f75df574--q4n22-eth0" Nov 12 20:47:04.556156 containerd[1545]: 2024-11-12 20:47:04.552 [INFO][5596] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" HandleID="k8s-pod-network.a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Workload="localhost-k8s-coredns--76f75df574--q4n22-eth0" Nov 12 20:47:04.556156 containerd[1545]: 2024-11-12 20:47:04.553 [INFO][5596] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:04.556156 containerd[1545]: 2024-11-12 20:47:04.554 [INFO][5590] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Nov 12 20:47:04.557386 containerd[1545]: time="2024-11-12T20:47:04.556494273Z" level=info msg="TearDown network for sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\" successfully" Nov 12 20:47:04.557386 containerd[1545]: time="2024-11-12T20:47:04.556513262Z" level=info msg="StopPodSandbox for \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\" returns successfully" Nov 12 20:47:04.557975 systemd[1]: run-netns-cni\x2d9593b6d2\x2dac9e\x2d240b\x2d4a17\x2d04e39502ed33.mount: Deactivated successfully. 
Nov 12 20:47:04.558786 containerd[1545]: time="2024-11-12T20:47:04.558747984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q4n22,Uid:92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35,Namespace:kube-system,Attempt:1,}" Nov 12 20:47:04.670911 systemd-networkd[1465]: cali4ff5eda038a: Link UP Nov 12 20:47:04.671544 systemd-networkd[1465]: cali4ff5eda038a: Gained carrier Nov 12 20:47:04.701430 containerd[1545]: 2024-11-12 20:47:04.600 [INFO][5603] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--q4n22-eth0 coredns-76f75df574- kube-system 92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35 1079 0 2024-11-12 20:45:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-q4n22 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4ff5eda038a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1" Namespace="kube-system" Pod="coredns-76f75df574-q4n22" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q4n22-" Nov 12 20:47:04.701430 containerd[1545]: 2024-11-12 20:47:04.600 [INFO][5603] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1" Namespace="kube-system" Pod="coredns-76f75df574-q4n22" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q4n22-eth0" Nov 12 20:47:04.701430 containerd[1545]: 2024-11-12 20:47:04.628 [INFO][5614] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1" HandleID="k8s-pod-network.1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1" Workload="localhost-k8s-coredns--76f75df574--q4n22-eth0" Nov 
12 20:47:04.701430 containerd[1545]: 2024-11-12 20:47:04.642 [INFO][5614] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1" HandleID="k8s-pod-network.1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1" Workload="localhost-k8s-coredns--76f75df574--q4n22-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031ada0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-q4n22", "timestamp":"2024-11-12 20:47:04.628076772 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:47:04.701430 containerd[1545]: 2024-11-12 20:47:04.642 [INFO][5614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:04.701430 containerd[1545]: 2024-11-12 20:47:04.642 [INFO][5614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:47:04.701430 containerd[1545]: 2024-11-12 20:47:04.642 [INFO][5614] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:47:04.701430 containerd[1545]: 2024-11-12 20:47:04.643 [INFO][5614] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1" host="localhost" Nov 12 20:47:04.701430 containerd[1545]: 2024-11-12 20:47:04.645 [INFO][5614] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:47:04.701430 containerd[1545]: 2024-11-12 20:47:04.650 [INFO][5614] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:47:04.701430 containerd[1545]: 2024-11-12 20:47:04.651 [INFO][5614] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:47:04.701430 containerd[1545]: 2024-11-12 20:47:04.652 [INFO][5614] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:47:04.701430 containerd[1545]: 2024-11-12 20:47:04.652 [INFO][5614] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1" host="localhost" Nov 12 20:47:04.701430 containerd[1545]: 2024-11-12 20:47:04.653 [INFO][5614] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1 Nov 12 20:47:04.701430 containerd[1545]: 2024-11-12 20:47:04.657 [INFO][5614] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1" host="localhost" Nov 12 20:47:04.701430 containerd[1545]: 2024-11-12 20:47:04.667 [INFO][5614] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1" host="localhost" Nov 12 20:47:04.701430 containerd[1545]: 2024-11-12 20:47:04.667 [INFO][5614] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1" host="localhost" Nov 12 20:47:04.701430 containerd[1545]: 2024-11-12 20:47:04.667 [INFO][5614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:04.701430 containerd[1545]: 2024-11-12 20:47:04.667 [INFO][5614] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1" HandleID="k8s-pod-network.1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1" Workload="localhost-k8s-coredns--76f75df574--q4n22-eth0" Nov 12 20:47:04.710176 containerd[1545]: 2024-11-12 20:47:04.668 [INFO][5603] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1" Namespace="kube-system" Pod="coredns-76f75df574-q4n22" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q4n22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--q4n22-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-q4n22", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ff5eda038a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:04.710176 containerd[1545]: 2024-11-12 20:47:04.668 [INFO][5603] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1" Namespace="kube-system" Pod="coredns-76f75df574-q4n22" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q4n22-eth0" Nov 12 20:47:04.710176 containerd[1545]: 2024-11-12 20:47:04.668 [INFO][5603] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ff5eda038a ContainerID="1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1" Namespace="kube-system" Pod="coredns-76f75df574-q4n22" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q4n22-eth0" Nov 12 20:47:04.710176 containerd[1545]: 2024-11-12 20:47:04.671 [INFO][5603] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1" Namespace="kube-system" Pod="coredns-76f75df574-q4n22" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q4n22-eth0" Nov 12 
20:47:04.710176 containerd[1545]: 2024-11-12 20:47:04.671 [INFO][5603] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1" Namespace="kube-system" Pod="coredns-76f75df574-q4n22" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q4n22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--q4n22-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1", Pod:"coredns-76f75df574-q4n22", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ff5eda038a", MAC:"3a:c2:89:7c:d2:9f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:04.710176 containerd[1545]: 2024-11-12 20:47:04.698 [INFO][5603] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1" Namespace="kube-system" Pod="coredns-76f75df574-q4n22" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q4n22-eth0" Nov 12 20:47:04.803087 containerd[1545]: time="2024-11-12T20:47:04.802889924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:47:04.803087 containerd[1545]: time="2024-11-12T20:47:04.802938276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:47:04.803087 containerd[1545]: time="2024-11-12T20:47:04.802952074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:47:04.814342 containerd[1545]: time="2024-11-12T20:47:04.803009569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:47:04.821880 systemd[1]: Started cri-containerd-1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1.scope - libcontainer container 1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1. 
Nov 12 20:47:04.833823 systemd-resolved[1417]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:47:04.858509 containerd[1545]: time="2024-11-12T20:47:04.858482816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q4n22,Uid:92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35,Namespace:kube-system,Attempt:1,} returns sandbox id \"1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1\"" Nov 12 20:47:04.875595 containerd[1545]: time="2024-11-12T20:47:04.875488978Z" level=info msg="CreateContainer within sandbox \"1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:47:04.917288 kubelet[2803]: I1112 20:47:04.917206 2803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55d5bcd669-xs25j" podStartSLOduration=70.190294923 podStartE2EDuration="1m14.916981405s" podCreationTimestamp="2024-11-12 20:45:50 +0000 UTC" firstStartedPulling="2024-11-12 20:46:59.048710163 +0000 UTC m=+88.706978818" lastFinishedPulling="2024-11-12 20:47:03.775396645 +0000 UTC m=+93.433665300" observedRunningTime="2024-11-12 20:47:04.882144756 +0000 UTC m=+94.540413423" watchObservedRunningTime="2024-11-12 20:47:04.916981405 +0000 UTC m=+94.575250064" Nov 12 20:47:04.920733 containerd[1545]: time="2024-11-12T20:47:04.920702177Z" level=info msg="CreateContainer within sandbox \"1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d8c7fb3f93605a0bfd8004f37fb28ad32e008759aaf7baa3cf92d83ba9e9ded0\"" Nov 12 20:47:04.921251 containerd[1545]: time="2024-11-12T20:47:04.921232771Z" level=info msg="StartContainer for \"d8c7fb3f93605a0bfd8004f37fb28ad32e008759aaf7baa3cf92d83ba9e9ded0\"" Nov 12 20:47:04.945688 systemd[1]: Started 
cri-containerd-d8c7fb3f93605a0bfd8004f37fb28ad32e008759aaf7baa3cf92d83ba9e9ded0.scope - libcontainer container d8c7fb3f93605a0bfd8004f37fb28ad32e008759aaf7baa3cf92d83ba9e9ded0. Nov 12 20:47:04.967668 containerd[1545]: time="2024-11-12T20:47:04.967641161Z" level=info msg="StartContainer for \"d8c7fb3f93605a0bfd8004f37fb28ad32e008759aaf7baa3cf92d83ba9e9ded0\" returns successfully" Nov 12 20:47:05.801170 containerd[1545]: time="2024-11-12T20:47:05.800745513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:05.805302 containerd[1545]: time="2024-11-12T20:47:05.805275827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080" Nov 12 20:47:05.805690 containerd[1545]: time="2024-11-12T20:47:05.805658379Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:05.806772 containerd[1545]: time="2024-11-12T20:47:05.806748278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:05.807386 containerd[1545]: time="2024-11-12T20:47:05.807138610Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 2.031597892s" Nov 12 20:47:05.807386 containerd[1545]: time="2024-11-12T20:47:05.807160091Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\"" Nov 12 20:47:05.808271 containerd[1545]: time="2024-11-12T20:47:05.808245708Z" level=info msg="CreateContainer within sandbox \"ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 12 20:47:05.823470 containerd[1545]: time="2024-11-12T20:47:05.823446382Z" level=info msg="CreateContainer within sandbox \"ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"be66b6d247feb0c2460703b75d9637e7712dc99c2d61e411e207f415ac1b210f\"" Nov 12 20:47:05.824390 containerd[1545]: time="2024-11-12T20:47:05.824340444Z" level=info msg="StartContainer for \"be66b6d247feb0c2460703b75d9637e7712dc99c2d61e411e207f415ac1b210f\"" Nov 12 20:47:05.847639 systemd[1]: Started cri-containerd-be66b6d247feb0c2460703b75d9637e7712dc99c2d61e411e207f415ac1b210f.scope - libcontainer container be66b6d247feb0c2460703b75d9637e7712dc99c2d61e411e207f415ac1b210f. 
Nov 12 20:47:05.871584 containerd[1545]: time="2024-11-12T20:47:05.871477324Z" level=info msg="StartContainer for \"be66b6d247feb0c2460703b75d9637e7712dc99c2d61e411e207f415ac1b210f\" returns successfully" Nov 12 20:47:05.903049 kubelet[2803]: I1112 20:47:05.902845 2803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-q4n22" podStartSLOduration=81.902816403 podStartE2EDuration="1m21.902816403s" podCreationTimestamp="2024-11-12 20:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:47:05.888140484 +0000 UTC m=+95.546409149" watchObservedRunningTime="2024-11-12 20:47:05.902816403 +0000 UTC m=+95.561085068" Nov 12 20:47:06.280791 systemd[1]: Started sshd@12-139.178.70.104:22-139.178.68.195:43168.service - OpenSSH per-connection server daemon (139.178.68.195:43168). Nov 12 20:47:06.347887 sshd[5770]: Accepted publickey for core from 139.178.68.195 port 43168 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:47:06.349539 sshd[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:06.352265 systemd-logind[1522]: New session 15 of user core. Nov 12 20:47:06.358825 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 20:47:06.476506 containerd[1545]: time="2024-11-12T20:47:06.476475760Z" level=info msg="StopPodSandbox for \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\"" Nov 12 20:47:06.560075 systemd-networkd[1465]: cali4ff5eda038a: Gained IPv6LL Nov 12 20:47:06.563694 containerd[1545]: 2024-11-12 20:47:06.525 [INFO][5790] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" Nov 12 20:47:06.563694 containerd[1545]: 2024-11-12 20:47:06.526 [INFO][5790] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" iface="eth0" netns="/var/run/netns/cni-06c5b582-9516-62ee-bf45-789afe60e661" Nov 12 20:47:06.563694 containerd[1545]: 2024-11-12 20:47:06.526 [INFO][5790] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" iface="eth0" netns="/var/run/netns/cni-06c5b582-9516-62ee-bf45-789afe60e661" Nov 12 20:47:06.563694 containerd[1545]: 2024-11-12 20:47:06.527 [INFO][5790] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" iface="eth0" netns="/var/run/netns/cni-06c5b582-9516-62ee-bf45-789afe60e661" Nov 12 20:47:06.563694 containerd[1545]: 2024-11-12 20:47:06.527 [INFO][5790] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" Nov 12 20:47:06.563694 containerd[1545]: 2024-11-12 20:47:06.527 [INFO][5790] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" Nov 12 20:47:06.563694 containerd[1545]: 2024-11-12 20:47:06.550 [INFO][5799] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" HandleID="k8s-pod-network.7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" Workload="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0" Nov 12 20:47:06.563694 containerd[1545]: 2024-11-12 20:47:06.550 [INFO][5799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:06.563694 containerd[1545]: 2024-11-12 20:47:06.550 [INFO][5799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:47:06.563694 containerd[1545]: 2024-11-12 20:47:06.556 [WARNING][5799] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" HandleID="k8s-pod-network.7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" Workload="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0" Nov 12 20:47:06.563694 containerd[1545]: 2024-11-12 20:47:06.556 [INFO][5799] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" HandleID="k8s-pod-network.7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" Workload="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0" Nov 12 20:47:06.563694 containerd[1545]: 2024-11-12 20:47:06.557 [INFO][5799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:06.563694 containerd[1545]: 2024-11-12 20:47:06.561 [INFO][5790] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" Nov 12 20:47:06.566642 containerd[1545]: time="2024-11-12T20:47:06.564281039Z" level=info msg="TearDown network for sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\" successfully" Nov 12 20:47:06.566642 containerd[1545]: time="2024-11-12T20:47:06.564298040Z" level=info msg="StopPodSandbox for \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\" returns successfully" Nov 12 20:47:06.566642 containerd[1545]: time="2024-11-12T20:47:06.565169994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6985898d-bw6xq,Uid:b1afbb6d-8e27-4966-8b72-7f067d947668,Namespace:calico-system,Attempt:1,}" Nov 12 20:47:06.566879 systemd[1]: run-netns-cni\x2d06c5b582\x2d9516\x2d62ee\x2dbf45\x2d789afe60e661.mount: Deactivated successfully. 
Nov 12 20:47:06.718258 systemd-networkd[1465]: cali3008767504b: Link UP Nov 12 20:47:06.720074 systemd-networkd[1465]: cali3008767504b: Gained carrier Nov 12 20:47:06.733873 containerd[1545]: 2024-11-12 20:47:06.614 [INFO][5805] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0 calico-kube-controllers-6d6985898d- calico-system b1afbb6d-8e27-4966-8b72-7f067d947668 1130 0 2024-11-12 20:45:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d6985898d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6d6985898d-bw6xq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3008767504b [] []}} ContainerID="7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b" Namespace="calico-system" Pod="calico-kube-controllers-6d6985898d-bw6xq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-" Nov 12 20:47:06.733873 containerd[1545]: 2024-11-12 20:47:06.614 [INFO][5805] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b" Namespace="calico-system" Pod="calico-kube-controllers-6d6985898d-bw6xq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0" Nov 12 20:47:06.733873 containerd[1545]: 2024-11-12 20:47:06.639 [INFO][5817] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b" HandleID="k8s-pod-network.7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b" Workload="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0" Nov 12 20:47:06.733873 containerd[1545]: 
2024-11-12 20:47:06.682 [INFO][5817] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b" HandleID="k8s-pod-network.7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b" Workload="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318df0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6d6985898d-bw6xq", "timestamp":"2024-11-12 20:47:06.639473346 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:47:06.733873 containerd[1545]: 2024-11-12 20:47:06.682 [INFO][5817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:06.733873 containerd[1545]: 2024-11-12 20:47:06.682 [INFO][5817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:47:06.733873 containerd[1545]: 2024-11-12 20:47:06.682 [INFO][5817] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:47:06.733873 containerd[1545]: 2024-11-12 20:47:06.684 [INFO][5817] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b" host="localhost" Nov 12 20:47:06.733873 containerd[1545]: 2024-11-12 20:47:06.686 [INFO][5817] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:47:06.733873 containerd[1545]: 2024-11-12 20:47:06.687 [INFO][5817] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:47:06.733873 containerd[1545]: 2024-11-12 20:47:06.689 [INFO][5817] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:47:06.733873 containerd[1545]: 2024-11-12 20:47:06.690 [INFO][5817] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:47:06.733873 containerd[1545]: 2024-11-12 20:47:06.690 [INFO][5817] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b" host="localhost" Nov 12 20:47:06.733873 containerd[1545]: 2024-11-12 20:47:06.691 [INFO][5817] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b Nov 12 20:47:06.733873 containerd[1545]: 2024-11-12 20:47:06.694 [INFO][5817] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b" host="localhost" Nov 12 20:47:06.733873 containerd[1545]: 2024-11-12 20:47:06.712 [INFO][5817] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b" host="localhost" Nov 12 20:47:06.733873 containerd[1545]: 2024-11-12 20:47:06.712 [INFO][5817] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b" host="localhost" Nov 12 20:47:06.733873 containerd[1545]: 2024-11-12 20:47:06.712 [INFO][5817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:06.733873 containerd[1545]: 2024-11-12 20:47:06.712 [INFO][5817] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b" HandleID="k8s-pod-network.7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b" Workload="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0" Nov 12 20:47:06.739282 containerd[1545]: 2024-11-12 20:47:06.714 [INFO][5805] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b" Namespace="calico-system" Pod="calico-kube-controllers-6d6985898d-bw6xq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0", GenerateName:"calico-kube-controllers-6d6985898d-", Namespace:"calico-system", SelfLink:"", UID:"b1afbb6d-8e27-4966-8b72-7f067d947668", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d6985898d", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6d6985898d-bw6xq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3008767504b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:06.739282 containerd[1545]: 2024-11-12 20:47:06.714 [INFO][5805] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b" Namespace="calico-system" Pod="calico-kube-controllers-6d6985898d-bw6xq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0" Nov 12 20:47:06.739282 containerd[1545]: 2024-11-12 20:47:06.714 [INFO][5805] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3008767504b ContainerID="7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b" Namespace="calico-system" Pod="calico-kube-controllers-6d6985898d-bw6xq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0" Nov 12 20:47:06.739282 containerd[1545]: 2024-11-12 20:47:06.718 [INFO][5805] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b" Namespace="calico-system" Pod="calico-kube-controllers-6d6985898d-bw6xq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0" Nov 12 20:47:06.739282 containerd[1545]: 2024-11-12 20:47:06.721 [INFO][5805] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b" Namespace="calico-system" Pod="calico-kube-controllers-6d6985898d-bw6xq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0", GenerateName:"calico-kube-controllers-6d6985898d-", Namespace:"calico-system", SelfLink:"", UID:"b1afbb6d-8e27-4966-8b72-7f067d947668", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d6985898d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b", Pod:"calico-kube-controllers-6d6985898d-bw6xq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3008767504b", MAC:"ee:b1:0c:27:20:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:06.739282 containerd[1545]: 2024-11-12 20:47:06.729 [INFO][5805] cni-plugin/k8s.go 500: Wrote 
updated endpoint to datastore ContainerID="7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b" Namespace="calico-system" Pod="calico-kube-controllers-6d6985898d-bw6xq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0" Nov 12 20:47:06.773342 containerd[1545]: time="2024-11-12T20:47:06.772812880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:47:06.773342 containerd[1545]: time="2024-11-12T20:47:06.772874649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:47:06.773342 containerd[1545]: time="2024-11-12T20:47:06.772896559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:47:06.773342 containerd[1545]: time="2024-11-12T20:47:06.772954223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:47:06.815643 systemd[1]: Started cri-containerd-7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b.scope - libcontainer container 7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b. 
Nov 12 20:47:06.837272 systemd-resolved[1417]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:47:06.889568 containerd[1545]: time="2024-11-12T20:47:06.889514269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6985898d-bw6xq,Uid:b1afbb6d-8e27-4966-8b72-7f067d947668,Namespace:calico-system,Attempt:1,} returns sandbox id \"7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b\"" Nov 12 20:47:06.892595 containerd[1545]: time="2024-11-12T20:47:06.892572850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 20:47:07.067325 sshd[5770]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:07.070925 systemd[1]: sshd@12-139.178.70.104:22-139.178.68.195:43168.service: Deactivated successfully. Nov 12 20:47:07.072271 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 20:47:07.073877 systemd-logind[1522]: Session 15 logged out. Waiting for processes to exit. Nov 12 20:47:07.077355 systemd-logind[1522]: Removed session 15. Nov 12 20:47:07.081386 kubelet[2803]: I1112 20:47:07.081341 2803 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 20:47:07.086230 kubelet[2803]: I1112 20:47:07.086192 2803 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 20:47:07.565917 systemd[1]: run-containerd-runc-k8s.io-7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b-runc.UwS07Z.mount: Deactivated successfully. 
Nov 12 20:47:08.031766 systemd-networkd[1465]: cali3008767504b: Gained IPv6LL Nov 12 20:47:08.615392 containerd[1545]: time="2024-11-12T20:47:08.615321955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:08.616003 containerd[1545]: time="2024-11-12T20:47:08.615969825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461" Nov 12 20:47:08.616649 containerd[1545]: time="2024-11-12T20:47:08.616318454Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:08.617287 containerd[1545]: time="2024-11-12T20:47:08.617262724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:08.617847 containerd[1545]: time="2024-11-12T20:47:08.617703327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 1.725108537s" Nov 12 20:47:08.617847 containerd[1545]: time="2024-11-12T20:47:08.617723209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\"" Nov 12 20:47:08.711102 containerd[1545]: time="2024-11-12T20:47:08.711045727Z" level=info msg="CreateContainer within sandbox \"7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b\" for 
container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 20:47:08.715896 containerd[1545]: time="2024-11-12T20:47:08.715840851Z" level=info msg="CreateContainer within sandbox \"7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"98a56af6aab853a1a17f3ed4290f1b151dbd7e82e082fdfc2cc8545cdbb63b86\"" Nov 12 20:47:08.716951 containerd[1545]: time="2024-11-12T20:47:08.716579356Z" level=info msg="StartContainer for \"98a56af6aab853a1a17f3ed4290f1b151dbd7e82e082fdfc2cc8545cdbb63b86\"" Nov 12 20:47:08.740637 systemd[1]: Started cri-containerd-98a56af6aab853a1a17f3ed4290f1b151dbd7e82e082fdfc2cc8545cdbb63b86.scope - libcontainer container 98a56af6aab853a1a17f3ed4290f1b151dbd7e82e082fdfc2cc8545cdbb63b86. Nov 12 20:47:08.774019 containerd[1545]: time="2024-11-12T20:47:08.773988206Z" level=info msg="StartContainer for \"98a56af6aab853a1a17f3ed4290f1b151dbd7e82e082fdfc2cc8545cdbb63b86\" returns successfully" Nov 12 20:47:09.012100 kubelet[2803]: I1112 20:47:09.012026 2803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d6985898d-bw6xq" podStartSLOduration=76.226548822 podStartE2EDuration="1m17.963596149s" podCreationTimestamp="2024-11-12 20:45:51 +0000 UTC" firstStartedPulling="2024-11-12 20:47:06.891039309 +0000 UTC m=+96.549307964" lastFinishedPulling="2024-11-12 20:47:08.628086637 +0000 UTC m=+98.286355291" observedRunningTime="2024-11-12 20:47:08.942787781 +0000 UTC m=+98.601056445" watchObservedRunningTime="2024-11-12 20:47:08.963596149 +0000 UTC m=+98.621864822" Nov 12 20:47:09.017003 kubelet[2803]: I1112 20:47:09.016982 2803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-mr2hw" podStartSLOduration=71.222793503 podStartE2EDuration="1m18.016929107s" podCreationTimestamp="2024-11-12 20:45:51 +0000 UTC" firstStartedPulling="2024-11-12 
20:46:59.013182728 +0000 UTC m=+88.671451382" lastFinishedPulling="2024-11-12 20:47:05.807318331 +0000 UTC m=+95.465586986" observedRunningTime="2024-11-12 20:47:06.900756268 +0000 UTC m=+96.559024932" watchObservedRunningTime="2024-11-12 20:47:09.016929107 +0000 UTC m=+98.675197772" Nov 12 20:47:09.693221 systemd[1]: run-containerd-runc-k8s.io-98a56af6aab853a1a17f3ed4290f1b151dbd7e82e082fdfc2cc8545cdbb63b86-runc.IYEhyq.mount: Deactivated successfully. Nov 12 20:47:12.081210 systemd[1]: Started sshd@13-139.178.70.104:22-139.178.68.195:43172.service - OpenSSH per-connection server daemon (139.178.68.195:43172). Nov 12 20:47:12.218344 sshd[5961]: Accepted publickey for core from 139.178.68.195 port 43172 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:47:12.219803 sshd[5961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:12.223838 systemd-logind[1522]: New session 16 of user core. Nov 12 20:47:12.228719 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 20:47:12.771917 sshd[5961]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:12.774350 systemd-logind[1522]: Session 16 logged out. Waiting for processes to exit. Nov 12 20:47:12.774773 systemd[1]: sshd@13-139.178.70.104:22-139.178.68.195:43172.service: Deactivated successfully. Nov 12 20:47:12.776219 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 20:47:12.776930 systemd-logind[1522]: Removed session 16. Nov 12 20:47:17.781006 systemd[1]: Started sshd@14-139.178.70.104:22-139.178.68.195:58232.service - OpenSSH per-connection server daemon (139.178.68.195:58232). Nov 12 20:47:17.835654 sshd[5980]: Accepted publickey for core from 139.178.68.195 port 58232 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:47:17.837160 sshd[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:17.839807 systemd-logind[1522]: New session 17 of user core. 
Nov 12 20:47:17.848682 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 20:47:18.025142 sshd[5980]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:18.027179 systemd[1]: sshd@14-139.178.70.104:22-139.178.68.195:58232.service: Deactivated successfully. Nov 12 20:47:18.028305 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 20:47:18.028772 systemd-logind[1522]: Session 17 logged out. Waiting for processes to exit. Nov 12 20:47:18.029306 systemd-logind[1522]: Removed session 17. Nov 12 20:47:23.033845 systemd[1]: Started sshd@15-139.178.70.104:22-139.178.68.195:58238.service - OpenSSH per-connection server daemon (139.178.68.195:58238). Nov 12 20:47:23.121346 sshd[6003]: Accepted publickey for core from 139.178.68.195 port 58238 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:47:23.122167 sshd[6003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:23.125274 systemd-logind[1522]: New session 18 of user core. Nov 12 20:47:23.131641 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 20:47:23.252738 sshd[6003]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:23.260375 systemd[1]: Started sshd@16-139.178.70.104:22-139.178.68.195:58246.service - OpenSSH per-connection server daemon (139.178.68.195:58246). Nov 12 20:47:23.261112 systemd[1]: sshd@15-139.178.70.104:22-139.178.68.195:58238.service: Deactivated successfully. Nov 12 20:47:23.264703 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 20:47:23.267602 systemd-logind[1522]: Session 18 logged out. Waiting for processes to exit. Nov 12 20:47:23.269391 systemd-logind[1522]: Removed session 18. 
Nov 12 20:47:23.304508 sshd[6014]: Accepted publickey for core from 139.178.68.195 port 58246 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:47:23.305140 sshd[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:23.309256 systemd-logind[1522]: New session 19 of user core. Nov 12 20:47:23.314896 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 20:47:23.917441 sshd[6014]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:23.928921 systemd[1]: Started sshd@17-139.178.70.104:22-139.178.68.195:58250.service - OpenSSH per-connection server daemon (139.178.68.195:58250). Nov 12 20:47:23.929216 systemd[1]: sshd@16-139.178.70.104:22-139.178.68.195:58246.service: Deactivated successfully. Nov 12 20:47:23.931883 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 20:47:23.934161 systemd-logind[1522]: Session 19 logged out. Waiting for processes to exit. Nov 12 20:47:23.937636 systemd-logind[1522]: Removed session 19. Nov 12 20:47:23.998470 sshd[6046]: Accepted publickey for core from 139.178.68.195 port 58250 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:47:24.000514 sshd[6046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:24.006468 systemd-logind[1522]: New session 20 of user core. Nov 12 20:47:24.011002 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 20:47:25.611140 sshd[6046]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:25.637663 systemd[1]: sshd@17-139.178.70.104:22-139.178.68.195:58250.service: Deactivated successfully. Nov 12 20:47:25.638934 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 20:47:25.641849 systemd-logind[1522]: Session 20 logged out. Waiting for processes to exit. 
Nov 12 20:47:25.652821 systemd[1]: Started sshd@18-139.178.70.104:22-139.178.68.195:57282.service - OpenSSH per-connection server daemon (139.178.68.195:57282). Nov 12 20:47:25.653896 systemd-logind[1522]: Removed session 20. Nov 12 20:47:25.740671 sshd[6067]: Accepted publickey for core from 139.178.68.195 port 57282 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:47:25.741962 sshd[6067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:25.747053 systemd-logind[1522]: New session 21 of user core. Nov 12 20:47:25.751770 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 20:47:26.549710 sshd[6067]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:26.556431 systemd[1]: sshd@18-139.178.70.104:22-139.178.68.195:57282.service: Deactivated successfully. Nov 12 20:47:26.557781 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 20:47:26.558809 systemd-logind[1522]: Session 21 logged out. Waiting for processes to exit. Nov 12 20:47:26.564183 systemd[1]: Started sshd@19-139.178.70.104:22-139.178.68.195:57290.service - OpenSSH per-connection server daemon (139.178.68.195:57290). Nov 12 20:47:26.566112 systemd-logind[1522]: Removed session 21. Nov 12 20:47:26.641319 sshd[6081]: Accepted publickey for core from 139.178.68.195 port 57290 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE Nov 12 20:47:26.642360 sshd[6081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:26.645592 systemd-logind[1522]: New session 22 of user core. Nov 12 20:47:26.654670 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 20:47:27.055247 sshd[6081]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:27.056886 systemd[1]: sshd@19-139.178.70.104:22-139.178.68.195:57290.service: Deactivated successfully. Nov 12 20:47:27.058223 systemd[1]: session-22.scope: Deactivated successfully. 
Nov 12 20:47:27.059226 systemd-logind[1522]: Session 22 logged out. Waiting for processes to exit. Nov 12 20:47:27.060395 systemd-logind[1522]: Removed session 22. Nov 12 20:47:30.604876 containerd[1545]: time="2024-11-12T20:47:30.604726908Z" level=info msg="StopPodSandbox for \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\"" Nov 12 20:47:31.092492 containerd[1545]: 2024-11-12 20:47:30.888 [WARNING][6112] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0", GenerateName:"calico-apiserver-55d5bcd669-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a262788-14a2-4491-b838-16c919aed65b", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d5bcd669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83", Pod:"calico-apiserver-55d5bcd669-xs25j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibea72b4365e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:31.092492 containerd[1545]: 2024-11-12 20:47:30.891 [INFO][6112] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Nov 12 20:47:31.092492 containerd[1545]: 2024-11-12 20:47:30.891 [INFO][6112] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" iface="eth0" netns="" Nov 12 20:47:31.092492 containerd[1545]: 2024-11-12 20:47:30.891 [INFO][6112] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Nov 12 20:47:31.092492 containerd[1545]: 2024-11-12 20:47:30.891 [INFO][6112] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Nov 12 20:47:31.092492 containerd[1545]: 2024-11-12 20:47:31.076 [INFO][6118] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" HandleID="k8s-pod-network.9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Workload="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0" Nov 12 20:47:31.092492 containerd[1545]: 2024-11-12 20:47:31.078 [INFO][6118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:31.092492 containerd[1545]: 2024-11-12 20:47:31.079 [INFO][6118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:31.092492 containerd[1545]: 2024-11-12 20:47:31.088 [WARNING][6118] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" HandleID="k8s-pod-network.9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Workload="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0" Nov 12 20:47:31.092492 containerd[1545]: 2024-11-12 20:47:31.088 [INFO][6118] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" HandleID="k8s-pod-network.9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Workload="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0" Nov 12 20:47:31.092492 containerd[1545]: 2024-11-12 20:47:31.089 [INFO][6118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:31.092492 containerd[1545]: 2024-11-12 20:47:31.091 [INFO][6112] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Nov 12 20:47:31.097719 containerd[1545]: time="2024-11-12T20:47:31.096146437Z" level=info msg="TearDown network for sandbox \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\" successfully" Nov 12 20:47:31.097719 containerd[1545]: time="2024-11-12T20:47:31.096169409Z" level=info msg="StopPodSandbox for \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\" returns successfully" Nov 12 20:47:31.187231 containerd[1545]: time="2024-11-12T20:47:31.187075214Z" level=info msg="RemovePodSandbox for \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\"" Nov 12 20:47:31.187231 containerd[1545]: time="2024-11-12T20:47:31.187103674Z" level=info msg="Forcibly stopping sandbox \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\"" Nov 12 20:47:31.237702 containerd[1545]: 2024-11-12 20:47:31.212 [WARNING][6136] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0", GenerateName:"calico-apiserver-55d5bcd669-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a262788-14a2-4491-b838-16c919aed65b", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d5bcd669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"512d912e665391348ef57996f66de05a9ce30e998f321b149df1996c3acdbc83", Pod:"calico-apiserver-55d5bcd669-xs25j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibea72b4365e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:31.237702 containerd[1545]: 2024-11-12 20:47:31.212 [INFO][6136] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Nov 12 20:47:31.237702 containerd[1545]: 2024-11-12 20:47:31.212 [INFO][6136] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" iface="eth0" netns="" Nov 12 20:47:31.237702 containerd[1545]: 2024-11-12 20:47:31.212 [INFO][6136] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Nov 12 20:47:31.237702 containerd[1545]: 2024-11-12 20:47:31.212 [INFO][6136] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Nov 12 20:47:31.237702 containerd[1545]: 2024-11-12 20:47:31.230 [INFO][6142] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" HandleID="k8s-pod-network.9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Workload="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0" Nov 12 20:47:31.237702 containerd[1545]: 2024-11-12 20:47:31.230 [INFO][6142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:31.237702 containerd[1545]: 2024-11-12 20:47:31.230 [INFO][6142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:31.237702 containerd[1545]: 2024-11-12 20:47:31.233 [WARNING][6142] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" HandleID="k8s-pod-network.9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Workload="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0" Nov 12 20:47:31.237702 containerd[1545]: 2024-11-12 20:47:31.233 [INFO][6142] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" HandleID="k8s-pod-network.9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Workload="localhost-k8s-calico--apiserver--55d5bcd669--xs25j-eth0" Nov 12 20:47:31.237702 containerd[1545]: 2024-11-12 20:47:31.234 [INFO][6142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:31.237702 containerd[1545]: 2024-11-12 20:47:31.236 [INFO][6136] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36" Nov 12 20:47:31.239006 containerd[1545]: time="2024-11-12T20:47:31.237723472Z" level=info msg="TearDown network for sandbox \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\" successfully" Nov 12 20:47:31.244263 containerd[1545]: time="2024-11-12T20:47:31.244243962Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:47:31.250023 containerd[1545]: time="2024-11-12T20:47:31.250004628Z" level=info msg="RemovePodSandbox \"9027cc83f9acfbae79fefdc9661c3cf6e11827c858710b4cb4972e4949470f36\" returns successfully" Nov 12 20:47:31.257578 containerd[1545]: time="2024-11-12T20:47:31.257545823Z" level=info msg="StopPodSandbox for \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\"" Nov 12 20:47:31.305820 containerd[1545]: 2024-11-12 20:47:31.283 [WARNING][6160] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--jx2wl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2cb5bf3f-a6f0-4cca-8e64-700b92fbd244", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e", Pod:"coredns-76f75df574-jx2wl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic78ff469fdb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:31.305820 containerd[1545]: 2024-11-12 20:47:31.283 [INFO][6160] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Nov 12 20:47:31.305820 containerd[1545]: 2024-11-12 20:47:31.283 [INFO][6160] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" iface="eth0" netns="" Nov 12 20:47:31.305820 containerd[1545]: 2024-11-12 20:47:31.284 [INFO][6160] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Nov 12 20:47:31.305820 containerd[1545]: 2024-11-12 20:47:31.284 [INFO][6160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Nov 12 20:47:31.305820 containerd[1545]: 2024-11-12 20:47:31.297 [INFO][6166] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" HandleID="k8s-pod-network.7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Workload="localhost-k8s-coredns--76f75df574--jx2wl-eth0" Nov 12 20:47:31.305820 containerd[1545]: 2024-11-12 20:47:31.297 [INFO][6166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 20:47:31.305820 containerd[1545]: 2024-11-12 20:47:31.297 [INFO][6166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:31.305820 containerd[1545]: 2024-11-12 20:47:31.301 [WARNING][6166] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" HandleID="k8s-pod-network.7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Workload="localhost-k8s-coredns--76f75df574--jx2wl-eth0" Nov 12 20:47:31.305820 containerd[1545]: 2024-11-12 20:47:31.301 [INFO][6166] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" HandleID="k8s-pod-network.7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Workload="localhost-k8s-coredns--76f75df574--jx2wl-eth0" Nov 12 20:47:31.305820 containerd[1545]: 2024-11-12 20:47:31.303 [INFO][6166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:31.305820 containerd[1545]: 2024-11-12 20:47:31.304 [INFO][6160] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Nov 12 20:47:31.307743 containerd[1545]: time="2024-11-12T20:47:31.305825741Z" level=info msg="TearDown network for sandbox \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\" successfully" Nov 12 20:47:31.307743 containerd[1545]: time="2024-11-12T20:47:31.305840332Z" level=info msg="StopPodSandbox for \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\" returns successfully" Nov 12 20:47:31.307743 containerd[1545]: time="2024-11-12T20:47:31.306105723Z" level=info msg="RemovePodSandbox for \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\"" Nov 12 20:47:31.307743 containerd[1545]: time="2024-11-12T20:47:31.306121297Z" level=info msg="Forcibly stopping sandbox \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\"" Nov 12 20:47:31.350775 containerd[1545]: 2024-11-12 20:47:31.329 [WARNING][6185] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--jx2wl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2cb5bf3f-a6f0-4cca-8e64-700b92fbd244", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2c60dc5d68ca03a4fa2adbdc6fd255c61fcaae09a7cd38dcea601dc36e24a7e", Pod:"coredns-76f75df574-jx2wl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic78ff469fdb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:31.350775 containerd[1545]: 2024-11-12 20:47:31.329 [INFO][6185] cni-plugin/k8s.go 608: Cleaning up 
netns ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Nov 12 20:47:31.350775 containerd[1545]: 2024-11-12 20:47:31.329 [INFO][6185] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" iface="eth0" netns="" Nov 12 20:47:31.350775 containerd[1545]: 2024-11-12 20:47:31.329 [INFO][6185] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Nov 12 20:47:31.350775 containerd[1545]: 2024-11-12 20:47:31.329 [INFO][6185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Nov 12 20:47:31.350775 containerd[1545]: 2024-11-12 20:47:31.344 [INFO][6191] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" HandleID="k8s-pod-network.7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Workload="localhost-k8s-coredns--76f75df574--jx2wl-eth0" Nov 12 20:47:31.350775 containerd[1545]: 2024-11-12 20:47:31.344 [INFO][6191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:31.350775 containerd[1545]: 2024-11-12 20:47:31.344 [INFO][6191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:31.350775 containerd[1545]: 2024-11-12 20:47:31.347 [WARNING][6191] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" HandleID="k8s-pod-network.7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Workload="localhost-k8s-coredns--76f75df574--jx2wl-eth0" Nov 12 20:47:31.350775 containerd[1545]: 2024-11-12 20:47:31.347 [INFO][6191] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" HandleID="k8s-pod-network.7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Workload="localhost-k8s-coredns--76f75df574--jx2wl-eth0" Nov 12 20:47:31.350775 containerd[1545]: 2024-11-12 20:47:31.348 [INFO][6191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:31.350775 containerd[1545]: 2024-11-12 20:47:31.349 [INFO][6185] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537" Nov 12 20:47:31.350775 containerd[1545]: time="2024-11-12T20:47:31.350651017Z" level=info msg="TearDown network for sandbox \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\" successfully" Nov 12 20:47:31.353027 containerd[1545]: time="2024-11-12T20:47:31.352345889Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:47:31.353027 containerd[1545]: time="2024-11-12T20:47:31.352397619Z" level=info msg="RemovePodSandbox \"7b317cad3560380735695c7fd95c8bdc301446619da2617ff0f078f27a15a537\" returns successfully" Nov 12 20:47:31.353027 containerd[1545]: time="2024-11-12T20:47:31.352824185Z" level=info msg="StopPodSandbox for \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\"" Nov 12 20:47:31.404391 containerd[1545]: 2024-11-12 20:47:31.378 [WARNING][6209] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mr2hw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5475df33-a25f-4c6b-acfc-caf320cb59b1", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100", Pod:"csi-node-driver-mr2hw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3ee0f2f150a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:31.404391 containerd[1545]: 2024-11-12 20:47:31.379 [INFO][6209] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Nov 12 20:47:31.404391 containerd[1545]: 2024-11-12 20:47:31.379 [INFO][6209] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" iface="eth0" netns="" Nov 12 20:47:31.404391 containerd[1545]: 2024-11-12 20:47:31.379 [INFO][6209] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Nov 12 20:47:31.404391 containerd[1545]: 2024-11-12 20:47:31.379 [INFO][6209] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Nov 12 20:47:31.404391 containerd[1545]: 2024-11-12 20:47:31.393 [INFO][6216] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" HandleID="k8s-pod-network.72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Workload="localhost-k8s-csi--node--driver--mr2hw-eth0" Nov 12 20:47:31.404391 containerd[1545]: 2024-11-12 20:47:31.394 [INFO][6216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:31.404391 containerd[1545]: 2024-11-12 20:47:31.394 [INFO][6216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:31.404391 containerd[1545]: 2024-11-12 20:47:31.398 [WARNING][6216] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" HandleID="k8s-pod-network.72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Workload="localhost-k8s-csi--node--driver--mr2hw-eth0" Nov 12 20:47:31.404391 containerd[1545]: 2024-11-12 20:47:31.399 [INFO][6216] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" HandleID="k8s-pod-network.72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Workload="localhost-k8s-csi--node--driver--mr2hw-eth0" Nov 12 20:47:31.404391 containerd[1545]: 2024-11-12 20:47:31.400 [INFO][6216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:31.404391 containerd[1545]: 2024-11-12 20:47:31.402 [INFO][6209] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Nov 12 20:47:31.404391 containerd[1545]: time="2024-11-12T20:47:31.404277993Z" level=info msg="TearDown network for sandbox \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\" successfully" Nov 12 20:47:31.404391 containerd[1545]: time="2024-11-12T20:47:31.404293723Z" level=info msg="StopPodSandbox for \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\" returns successfully" Nov 12 20:47:31.406934 containerd[1545]: time="2024-11-12T20:47:31.406889750Z" level=info msg="RemovePodSandbox for \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\"" Nov 12 20:47:31.407028 containerd[1545]: time="2024-11-12T20:47:31.407013599Z" level=info msg="Forcibly stopping sandbox \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\"" Nov 12 20:47:31.450764 containerd[1545]: 2024-11-12 20:47:31.428 [WARNING][6234] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mr2hw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5475df33-a25f-4c6b-acfc-caf320cb59b1", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab729ee86cedc296b3f86a769c080aa16056b071c49adfbfb7173392f85c8100", Pod:"csi-node-driver-mr2hw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3ee0f2f150a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:31.450764 containerd[1545]: 2024-11-12 20:47:31.428 [INFO][6234] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Nov 12 20:47:31.450764 containerd[1545]: 2024-11-12 20:47:31.428 [INFO][6234] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" iface="eth0" netns="" Nov 12 20:47:31.450764 containerd[1545]: 2024-11-12 20:47:31.428 [INFO][6234] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Nov 12 20:47:31.450764 containerd[1545]: 2024-11-12 20:47:31.428 [INFO][6234] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Nov 12 20:47:31.450764 containerd[1545]: 2024-11-12 20:47:31.443 [INFO][6240] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" HandleID="k8s-pod-network.72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Workload="localhost-k8s-csi--node--driver--mr2hw-eth0" Nov 12 20:47:31.450764 containerd[1545]: 2024-11-12 20:47:31.443 [INFO][6240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:31.450764 containerd[1545]: 2024-11-12 20:47:31.443 [INFO][6240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:31.450764 containerd[1545]: 2024-11-12 20:47:31.447 [WARNING][6240] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" HandleID="k8s-pod-network.72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Workload="localhost-k8s-csi--node--driver--mr2hw-eth0" Nov 12 20:47:31.450764 containerd[1545]: 2024-11-12 20:47:31.447 [INFO][6240] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" HandleID="k8s-pod-network.72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Workload="localhost-k8s-csi--node--driver--mr2hw-eth0" Nov 12 20:47:31.450764 containerd[1545]: 2024-11-12 20:47:31.448 [INFO][6240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:31.450764 containerd[1545]: 2024-11-12 20:47:31.449 [INFO][6234] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185" Nov 12 20:47:31.451074 containerd[1545]: time="2024-11-12T20:47:31.450804172Z" level=info msg="TearDown network for sandbox \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\" successfully" Nov 12 20:47:31.468582 containerd[1545]: time="2024-11-12T20:47:31.468526090Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:47:31.468680 containerd[1545]: time="2024-11-12T20:47:31.468598491Z" level=info msg="RemovePodSandbox \"72c64765a146dec34fc7d1311cf764ab121bba16ba3095bedd3fd3712a9d2185\" returns successfully" Nov 12 20:47:31.468983 containerd[1545]: time="2024-11-12T20:47:31.468967349Z" level=info msg="StopPodSandbox for \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\"" Nov 12 20:47:31.511983 containerd[1545]: 2024-11-12 20:47:31.490 [WARNING][6258] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0", GenerateName:"calico-apiserver-55d5bcd669-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d5bcd669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3", Pod:"calico-apiserver-55d5bcd669-b7s55", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali608b09d8ff9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:31.511983 containerd[1545]: 2024-11-12 20:47:31.491 [INFO][6258] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Nov 12 20:47:31.511983 containerd[1545]: 2024-11-12 20:47:31.491 [INFO][6258] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" iface="eth0" netns="" Nov 12 20:47:31.511983 containerd[1545]: 2024-11-12 20:47:31.491 [INFO][6258] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Nov 12 20:47:31.511983 containerd[1545]: 2024-11-12 20:47:31.491 [INFO][6258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Nov 12 20:47:31.511983 containerd[1545]: 2024-11-12 20:47:31.505 [INFO][6264] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" HandleID="k8s-pod-network.754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Workload="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0" Nov 12 20:47:31.511983 containerd[1545]: 2024-11-12 20:47:31.505 [INFO][6264] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:31.511983 containerd[1545]: 2024-11-12 20:47:31.505 [INFO][6264] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:31.511983 containerd[1545]: 2024-11-12 20:47:31.509 [WARNING][6264] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" HandleID="k8s-pod-network.754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Workload="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0" Nov 12 20:47:31.511983 containerd[1545]: 2024-11-12 20:47:31.509 [INFO][6264] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" HandleID="k8s-pod-network.754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Workload="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0" Nov 12 20:47:31.511983 containerd[1545]: 2024-11-12 20:47:31.509 [INFO][6264] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:31.511983 containerd[1545]: 2024-11-12 20:47:31.510 [INFO][6258] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Nov 12 20:47:31.513380 containerd[1545]: time="2024-11-12T20:47:31.512055347Z" level=info msg="TearDown network for sandbox \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\" successfully" Nov 12 20:47:31.513380 containerd[1545]: time="2024-11-12T20:47:31.512071216Z" level=info msg="StopPodSandbox for \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\" returns successfully" Nov 12 20:47:31.513380 containerd[1545]: time="2024-11-12T20:47:31.512579590Z" level=info msg="RemovePodSandbox for \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\"" Nov 12 20:47:31.513380 containerd[1545]: time="2024-11-12T20:47:31.512595728Z" level=info msg="Forcibly stopping sandbox \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\"" Nov 12 20:47:31.557582 containerd[1545]: 2024-11-12 20:47:31.533 [WARNING][6282] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0", GenerateName:"calico-apiserver-55d5bcd669-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b0d351b-c7ca-4b0e-9343-66e8cb4acd5c", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d5bcd669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a9c33f2196c46b46f900bf369312a32557c64398ccc05ab20e8f3631bdc76c3", Pod:"calico-apiserver-55d5bcd669-b7s55", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali608b09d8ff9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:31.557582 containerd[1545]: 2024-11-12 20:47:31.533 [INFO][6282] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Nov 12 20:47:31.557582 containerd[1545]: 2024-11-12 20:47:31.533 [INFO][6282] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" iface="eth0" netns="" Nov 12 20:47:31.557582 containerd[1545]: 2024-11-12 20:47:31.533 [INFO][6282] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Nov 12 20:47:31.557582 containerd[1545]: 2024-11-12 20:47:31.533 [INFO][6282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Nov 12 20:47:31.557582 containerd[1545]: 2024-11-12 20:47:31.548 [INFO][6288] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" HandleID="k8s-pod-network.754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Workload="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0" Nov 12 20:47:31.557582 containerd[1545]: 2024-11-12 20:47:31.549 [INFO][6288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:31.557582 containerd[1545]: 2024-11-12 20:47:31.549 [INFO][6288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:31.557582 containerd[1545]: 2024-11-12 20:47:31.554 [WARNING][6288] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" HandleID="k8s-pod-network.754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Workload="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0" Nov 12 20:47:31.557582 containerd[1545]: 2024-11-12 20:47:31.554 [INFO][6288] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" HandleID="k8s-pod-network.754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Workload="localhost-k8s-calico--apiserver--55d5bcd669--b7s55-eth0" Nov 12 20:47:31.557582 containerd[1545]: 2024-11-12 20:47:31.555 [INFO][6288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:31.557582 containerd[1545]: 2024-11-12 20:47:31.556 [INFO][6282] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1" Nov 12 20:47:31.557582 containerd[1545]: time="2024-11-12T20:47:31.557415604Z" level=info msg="TearDown network for sandbox \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\" successfully" Nov 12 20:47:31.559642 containerd[1545]: time="2024-11-12T20:47:31.559524999Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:47:31.559642 containerd[1545]: time="2024-11-12T20:47:31.559569698Z" level=info msg="RemovePodSandbox \"754067543d23668234a76379e706ce67b69222a71241e8a12ed199bc12712bc1\" returns successfully" Nov 12 20:47:31.560002 containerd[1545]: time="2024-11-12T20:47:31.559867212Z" level=info msg="StopPodSandbox for \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\"" Nov 12 20:47:31.603762 containerd[1545]: 2024-11-12 20:47:31.582 [WARNING][6306] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--q4n22-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1", Pod:"coredns-76f75df574-q4n22", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ff5eda038a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:31.603762 containerd[1545]: 2024-11-12 20:47:31.582 [INFO][6306] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Nov 12 20:47:31.603762 containerd[1545]: 2024-11-12 20:47:31.582 [INFO][6306] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" iface="eth0" netns="" Nov 12 20:47:31.603762 containerd[1545]: 2024-11-12 20:47:31.582 [INFO][6306] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Nov 12 20:47:31.603762 containerd[1545]: 2024-11-12 20:47:31.582 [INFO][6306] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Nov 12 20:47:31.603762 containerd[1545]: 2024-11-12 20:47:31.596 [INFO][6312] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" HandleID="k8s-pod-network.a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Workload="localhost-k8s-coredns--76f75df574--q4n22-eth0" Nov 12 20:47:31.603762 containerd[1545]: 2024-11-12 20:47:31.596 [INFO][6312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 20:47:31.603762 containerd[1545]: 2024-11-12 20:47:31.596 [INFO][6312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:31.603762 containerd[1545]: 2024-11-12 20:47:31.600 [WARNING][6312] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" HandleID="k8s-pod-network.a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Workload="localhost-k8s-coredns--76f75df574--q4n22-eth0" Nov 12 20:47:31.603762 containerd[1545]: 2024-11-12 20:47:31.600 [INFO][6312] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" HandleID="k8s-pod-network.a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Workload="localhost-k8s-coredns--76f75df574--q4n22-eth0" Nov 12 20:47:31.603762 containerd[1545]: 2024-11-12 20:47:31.601 [INFO][6312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:31.603762 containerd[1545]: 2024-11-12 20:47:31.602 [INFO][6306] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Nov 12 20:47:31.603762 containerd[1545]: time="2024-11-12T20:47:31.603743274Z" level=info msg="TearDown network for sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\" successfully" Nov 12 20:47:31.603762 containerd[1545]: time="2024-11-12T20:47:31.603759491Z" level=info msg="StopPodSandbox for \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\" returns successfully" Nov 12 20:47:31.606235 containerd[1545]: time="2024-11-12T20:47:31.604033826Z" level=info msg="RemovePodSandbox for \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\"" Nov 12 20:47:31.606235 containerd[1545]: time="2024-11-12T20:47:31.604055171Z" level=info msg="Forcibly stopping sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\"" Nov 12 20:47:31.652436 containerd[1545]: 2024-11-12 20:47:31.627 [WARNING][6330] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--q4n22-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"92ebb2b6-a9b5-4cb2-9d42-b7d671fa5e35", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1933d9b7771906c0a055c0d07ab1b7203b7890ac037d364bd0ba222fe6e4cfb1", Pod:"coredns-76f75df574-q4n22", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ff5eda038a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:31.652436 containerd[1545]: 2024-11-12 20:47:31.627 [INFO][6330] cni-plugin/k8s.go 608: Cleaning up 
netns ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Nov 12 20:47:31.652436 containerd[1545]: 2024-11-12 20:47:31.627 [INFO][6330] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" iface="eth0" netns="" Nov 12 20:47:31.652436 containerd[1545]: 2024-11-12 20:47:31.627 [INFO][6330] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Nov 12 20:47:31.652436 containerd[1545]: 2024-11-12 20:47:31.627 [INFO][6330] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Nov 12 20:47:31.652436 containerd[1545]: 2024-11-12 20:47:31.645 [INFO][6336] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" HandleID="k8s-pod-network.a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Workload="localhost-k8s-coredns--76f75df574--q4n22-eth0" Nov 12 20:47:31.652436 containerd[1545]: 2024-11-12 20:47:31.645 [INFO][6336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:31.652436 containerd[1545]: 2024-11-12 20:47:31.645 [INFO][6336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:31.652436 containerd[1545]: 2024-11-12 20:47:31.649 [WARNING][6336] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" HandleID="k8s-pod-network.a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Workload="localhost-k8s-coredns--76f75df574--q4n22-eth0" Nov 12 20:47:31.652436 containerd[1545]: 2024-11-12 20:47:31.649 [INFO][6336] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" HandleID="k8s-pod-network.a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Workload="localhost-k8s-coredns--76f75df574--q4n22-eth0" Nov 12 20:47:31.652436 containerd[1545]: 2024-11-12 20:47:31.650 [INFO][6336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:31.652436 containerd[1545]: 2024-11-12 20:47:31.651 [INFO][6330] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3" Nov 12 20:47:31.652861 containerd[1545]: time="2024-11-12T20:47:31.652449673Z" level=info msg="TearDown network for sandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\" successfully" Nov 12 20:47:31.653799 containerd[1545]: time="2024-11-12T20:47:31.653776972Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:47:31.653831 containerd[1545]: time="2024-11-12T20:47:31.653811640Z" level=info msg="RemovePodSandbox \"a42ab51fbf5c026dc255f6696f96d9448de542271d28beb17e4a001adbf9fbf3\" returns successfully"
Nov 12 20:47:31.654209 containerd[1545]: time="2024-11-12T20:47:31.654153189Z" level=info msg="StopPodSandbox for \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\""
Nov 12 20:47:31.711005 containerd[1545]: 2024-11-12 20:47:31.690 [WARNING][6355] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0", GenerateName:"calico-kube-controllers-6d6985898d-", Namespace:"calico-system", SelfLink:"", UID:"b1afbb6d-8e27-4966-8b72-7f067d947668", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d6985898d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b", Pod:"calico-kube-controllers-6d6985898d-bw6xq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3008767504b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:47:31.711005 containerd[1545]: 2024-11-12 20:47:31.690 [INFO][6355] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c"
Nov 12 20:47:31.711005 containerd[1545]: 2024-11-12 20:47:31.690 [INFO][6355] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" iface="eth0" netns=""
Nov 12 20:47:31.711005 containerd[1545]: 2024-11-12 20:47:31.690 [INFO][6355] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c"
Nov 12 20:47:31.711005 containerd[1545]: 2024-11-12 20:47:31.690 [INFO][6355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c"
Nov 12 20:47:31.711005 containerd[1545]: 2024-11-12 20:47:31.704 [INFO][6362] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" HandleID="k8s-pod-network.7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" Workload="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0"
Nov 12 20:47:31.711005 containerd[1545]: 2024-11-12 20:47:31.704 [INFO][6362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:47:31.711005 containerd[1545]: 2024-11-12 20:47:31.704 [INFO][6362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:47:31.711005 containerd[1545]: 2024-11-12 20:47:31.708 [WARNING][6362] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" HandleID="k8s-pod-network.7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" Workload="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0"
Nov 12 20:47:31.711005 containerd[1545]: 2024-11-12 20:47:31.708 [INFO][6362] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" HandleID="k8s-pod-network.7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" Workload="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0"
Nov 12 20:47:31.711005 containerd[1545]: 2024-11-12 20:47:31.708 [INFO][6362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:47:31.711005 containerd[1545]: 2024-11-12 20:47:31.709 [INFO][6355] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c"
Nov 12 20:47:31.711839 containerd[1545]: time="2024-11-12T20:47:31.711019911Z" level=info msg="TearDown network for sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\" successfully"
Nov 12 20:47:31.711839 containerd[1545]: time="2024-11-12T20:47:31.711034585Z" level=info msg="StopPodSandbox for \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\" returns successfully"
Nov 12 20:47:31.711839 containerd[1545]: time="2024-11-12T20:47:31.711567240Z" level=info msg="RemovePodSandbox for \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\""
Nov 12 20:47:31.711839 containerd[1545]: time="2024-11-12T20:47:31.711583846Z" level=info msg="Forcibly stopping sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\""
Nov 12 20:47:31.754573 containerd[1545]: 2024-11-12 20:47:31.733 [WARNING][6380] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0", GenerateName:"calico-kube-controllers-6d6985898d-", Namespace:"calico-system", SelfLink:"", UID:"b1afbb6d-8e27-4966-8b72-7f067d947668", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d6985898d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e45d61eb87090b076cb81a9bc0e0487438dc52cf0c01c02d1cf905bb7ae7d1b", Pod:"calico-kube-controllers-6d6985898d-bw6xq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3008767504b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:47:31.754573 containerd[1545]: 2024-11-12 20:47:31.733 [INFO][6380] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c"
Nov 12 20:47:31.754573 containerd[1545]: 2024-11-12 20:47:31.733 [INFO][6380] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" iface="eth0" netns=""
Nov 12 20:47:31.754573 containerd[1545]: 2024-11-12 20:47:31.733 [INFO][6380] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c"
Nov 12 20:47:31.754573 containerd[1545]: 2024-11-12 20:47:31.733 [INFO][6380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c"
Nov 12 20:47:31.754573 containerd[1545]: 2024-11-12 20:47:31.747 [INFO][6386] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" HandleID="k8s-pod-network.7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" Workload="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0"
Nov 12 20:47:31.754573 containerd[1545]: 2024-11-12 20:47:31.747 [INFO][6386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:47:31.754573 containerd[1545]: 2024-11-12 20:47:31.747 [INFO][6386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:47:31.754573 containerd[1545]: 2024-11-12 20:47:31.751 [WARNING][6386] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" HandleID="k8s-pod-network.7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" Workload="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0"
Nov 12 20:47:31.754573 containerd[1545]: 2024-11-12 20:47:31.751 [INFO][6386] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" HandleID="k8s-pod-network.7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c" Workload="localhost-k8s-calico--kube--controllers--6d6985898d--bw6xq-eth0"
Nov 12 20:47:31.754573 containerd[1545]: 2024-11-12 20:47:31.752 [INFO][6386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:47:31.754573 containerd[1545]: 2024-11-12 20:47:31.753 [INFO][6380] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c"
Nov 12 20:47:31.754573 containerd[1545]: time="2024-11-12T20:47:31.754482641Z" level=info msg="TearDown network for sandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\" successfully"
Nov 12 20:47:31.756126 containerd[1545]: time="2024-11-12T20:47:31.756108457Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 20:47:31.756179 containerd[1545]: time="2024-11-12T20:47:31.756164167Z" level=info msg="RemovePodSandbox \"7831634092c2eee60c5763ae2c16d1eb7528460e5d4a0093a7d5199e8ef6f84c\" returns successfully"
Nov 12 20:47:31.756529 containerd[1545]: time="2024-11-12T20:47:31.756472565Z" level=info msg="StopPodSandbox for \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\""
Nov 12 20:47:31.756625 containerd[1545]: time="2024-11-12T20:47:31.756610701Z" level=info msg="TearDown network for sandbox \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\" successfully"
Nov 12 20:47:31.756625 containerd[1545]: time="2024-11-12T20:47:31.756622410Z" level=info msg="StopPodSandbox for \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\" returns successfully"
Nov 12 20:47:31.760926 containerd[1545]: time="2024-11-12T20:47:31.760881607Z" level=info msg="RemovePodSandbox for \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\""
Nov 12 20:47:31.760926 containerd[1545]: time="2024-11-12T20:47:31.760896977Z" level=info msg="Forcibly stopping sandbox \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\""
Nov 12 20:47:31.761000 containerd[1545]: time="2024-11-12T20:47:31.760938750Z" level=info msg="TearDown network for sandbox \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\" successfully"
Nov 12 20:47:31.762499 containerd[1545]: time="2024-11-12T20:47:31.762484496Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 20:47:31.762538 containerd[1545]: time="2024-11-12T20:47:31.762520211Z" level=info msg="RemovePodSandbox \"c419678050f6a4a4f1dc960586bd674d1e2a760bc5a95cc86016d4e5083c9c04\" returns successfully"
Nov 12 20:47:32.065341 systemd[1]: Started sshd@20-139.178.70.104:22-139.178.68.195:57302.service - OpenSSH per-connection server daemon (139.178.68.195:57302).
Nov 12 20:47:32.164422 sshd[6393]: Accepted publickey for core from 139.178.68.195 port 57302 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE
Nov 12 20:47:32.165883 sshd[6393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:47:32.169309 systemd-logind[1522]: New session 23 of user core.
Nov 12 20:47:32.178781 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 12 20:47:32.617705 sshd[6393]: pam_unix(sshd:session): session closed for user core
Nov 12 20:47:32.619378 systemd[1]: sshd@20-139.178.70.104:22-139.178.68.195:57302.service: Deactivated successfully.
Nov 12 20:47:32.620651 systemd[1]: session-23.scope: Deactivated successfully.
Nov 12 20:47:32.621158 systemd-logind[1522]: Session 23 logged out. Waiting for processes to exit.
Nov 12 20:47:32.622054 systemd-logind[1522]: Removed session 23.
Nov 12 20:47:37.630259 systemd[1]: Started sshd@21-139.178.70.104:22-139.178.68.195:38092.service - OpenSSH per-connection server daemon (139.178.68.195:38092).
Nov 12 20:47:37.682771 sshd[6426]: Accepted publickey for core from 139.178.68.195 port 38092 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE
Nov 12 20:47:37.683762 sshd[6426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:47:37.687247 systemd-logind[1522]: New session 24 of user core.
Nov 12 20:47:37.695707 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 12 20:47:38.012525 sshd[6426]: pam_unix(sshd:session): session closed for user core
Nov 12 20:47:38.016375 systemd[1]: sshd@21-139.178.70.104:22-139.178.68.195:38092.service: Deactivated successfully.
Nov 12 20:47:38.019485 systemd[1]: session-24.scope: Deactivated successfully.
Nov 12 20:47:38.020736 systemd-logind[1522]: Session 24 logged out. Waiting for processes to exit.
Nov 12 20:47:38.022930 systemd-logind[1522]: Removed session 24.
Nov 12 20:47:43.019444 systemd[1]: Started sshd@22-139.178.70.104:22-139.178.68.195:38100.service - OpenSSH per-connection server daemon (139.178.68.195:38100).
Nov 12 20:47:43.069604 sshd[6445]: Accepted publickey for core from 139.178.68.195 port 38100 ssh2: RSA SHA256:eW+66Zcd2Hcqsdn9w7YOca9/FmdLw/8eMbZ4A5lBUuE
Nov 12 20:47:43.070358 sshd[6445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:47:43.073102 systemd-logind[1522]: New session 25 of user core.
Nov 12 20:47:43.076640 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 12 20:47:43.177066 sshd[6445]: pam_unix(sshd:session): session closed for user core
Nov 12 20:47:43.179415 systemd[1]: sshd@22-139.178.70.104:22-139.178.68.195:38100.service: Deactivated successfully.
Nov 12 20:47:43.180468 systemd[1]: session-25.scope: Deactivated successfully.
Nov 12 20:47:43.180890 systemd-logind[1522]: Session 25 logged out. Waiting for processes to exit.
Nov 12 20:47:43.181469 systemd-logind[1522]: Removed session 25.