Jan 29 11:15:07.738484 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025 Jan 29 11:15:07.738502 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 29 11:15:07.738508 kernel: Disabled fast string operations Jan 29 11:15:07.738512 kernel: BIOS-provided physical RAM map: Jan 29 11:15:07.738516 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jan 29 11:15:07.738520 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jan 29 11:15:07.738526 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jan 29 11:15:07.738531 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jan 29 11:15:07.738535 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jan 29 11:15:07.738539 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jan 29 11:15:07.738543 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jan 29 11:15:07.738547 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jan 29 11:15:07.738552 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jan 29 11:15:07.738556 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jan 29 11:15:07.738562 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jan 29 11:15:07.738567 kernel: NX (Execute Disable) protection: active Jan 29 11:15:07.738572 kernel: APIC: Static calls initialized Jan 29 11:15:07.738576 kernel: SMBIOS 2.7 present. Jan 29 11:15:07.738581 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jan 29 11:15:07.738586 kernel: vmware: hypercall mode: 0x00 Jan 29 11:15:07.738591 kernel: Hypervisor detected: VMware Jan 29 11:15:07.738596 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jan 29 11:15:07.738602 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jan 29 11:15:07.738607 kernel: vmware: using clock offset of 4204245664 ns Jan 29 11:15:07.738612 kernel: tsc: Detected 3408.000 MHz processor Jan 29 11:15:07.738617 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 11:15:07.738622 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 11:15:07.738627 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jan 29 11:15:07.738632 kernel: total RAM covered: 3072M Jan 29 11:15:07.738636 kernel: Found optimal setting for mtrr clean up Jan 29 11:15:07.738642 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jan 29 11:15:07.738647 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Jan 29 11:15:07.738653 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 11:15:07.738657 kernel: Using GB pages for direct mapping Jan 29 11:15:07.738662 kernel: ACPI: Early table checksum verification disabled Jan 29 11:15:07.738667 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jan 29 11:15:07.738672 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jan 29 11:15:07.738677 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jan 29 11:15:07.738682 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jan 29 11:15:07.738687 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jan 29 11:15:07.738694 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jan 29 11:15:07.738699 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jan 29 11:15:07.738704 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Jan 29 11:15:07.738710 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jan 29 11:15:07.738715 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jan 29 11:15:07.738720 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jan 29 11:15:07.738726 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jan 29 11:15:07.738731 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jan 29 11:15:07.738736 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jan 29 11:15:07.738741 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jan 29 11:15:07.738746 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jan 29 11:15:07.738751 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jan 29 11:15:07.738757 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jan 29 11:15:07.738762 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jan 29 11:15:07.738767 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jan 29 11:15:07.738773 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jan 29 11:15:07.738778 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jan 29 11:15:07.738783 kernel: system APIC only can use physical flat Jan 29 11:15:07.738788 kernel: APIC: Switched APIC routing to: physical flat Jan 29 11:15:07.738793 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 29 11:15:07.738798 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jan 29 11:15:07.738803 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jan 29 11:15:07.738808 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jan 29 11:15:07.738813 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jan 29 11:15:07.738818 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jan 29 11:15:07.738824 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jan 29 11:15:07.738829 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jan 29 11:15:07.738834 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Jan 29 11:15:07.738839 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Jan 29 11:15:07.738844 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Jan 29 11:15:07.738849 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Jan 29 11:15:07.738854 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Jan 29 11:15:07.738859 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Jan 29 11:15:07.738864 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Jan 29 11:15:07.738869 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Jan 29 11:15:07.738875 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Jan 29 11:15:07.738880 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Jan 29 11:15:07.738884 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Jan 29 11:15:07.738889 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Jan 29 11:15:07.738894 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Jan 29 11:15:07.738899 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Jan 29 11:15:07.738904 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Jan 29 11:15:07.738909 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Jan 29 11:15:07.738914 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Jan 29 11:15:07.738919 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Jan 29 11:15:07.738925 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Jan 29 11:15:07.738930 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Jan 29 11:15:07.738935 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Jan 29 11:15:07.738940 kernel: SRAT: PXM 0 -> APIC 0x3a -> 
Node 0 Jan 29 11:15:07.738945 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Jan 29 11:15:07.738950 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Jan 29 11:15:07.738955 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Jan 29 11:15:07.738960 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Jan 29 11:15:07.738965 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Jan 29 11:15:07.738970 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Jan 29 11:15:07.738976 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Jan 29 11:15:07.738981 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Jan 29 11:15:07.738986 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Jan 29 11:15:07.738991 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Jan 29 11:15:07.738996 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Jan 29 11:15:07.739001 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Jan 29 11:15:07.739006 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Jan 29 11:15:07.739011 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Jan 29 11:15:07.739016 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Jan 29 11:15:07.739021 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Jan 29 11:15:07.739027 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Jan 29 11:15:07.739032 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Jan 29 11:15:07.739037 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Jan 29 11:15:07.739041 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Jan 29 11:15:07.739046 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Jan 29 11:15:07.739051 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Jan 29 11:15:07.739056 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Jan 29 11:15:07.739061 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Jan 29 11:15:07.739066 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Jan 29 11:15:07.739071 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Jan 29 11:15:07.739077 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Jan 29 11:15:07.739082 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Jan 29 11:15:07.739087 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Jan 29 11:15:07.739095 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Jan 29 11:15:07.739101 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Jan 29 11:15:07.739106 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Jan 29 11:15:07.739112 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Jan 29 11:15:07.739117 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Jan 29 11:15:07.739122 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Jan 29 11:15:07.739128 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Jan 29 11:15:07.739134 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Jan 29 11:15:07.739139 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Jan 29 11:15:07.739144 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Jan 29 11:15:07.739149 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Jan 29 11:15:07.739155 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Jan 29 11:15:07.739160 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Jan 29 11:15:07.739165 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Jan 29 11:15:07.739177 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Jan 29 11:15:07.739182 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Jan 29 11:15:07.739189 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Jan 29 11:15:07.739194 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Jan 29 11:15:07.739199 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Jan 29 11:15:07.739204 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Jan 29 11:15:07.739210 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Jan 29 11:15:07.739215 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Jan 29 11:15:07.739220 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Jan 29 11:15:07.739226 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Jan 29 11:15:07.739231 kernel: SRAT: PXM 0 -> 
APIC 0xa6 -> Node 0 Jan 29 11:15:07.739236 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Jan 29 11:15:07.739241 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Jan 29 11:15:07.739247 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Jan 29 11:15:07.739253 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Jan 29 11:15:07.739258 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Jan 29 11:15:07.739264 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Jan 29 11:15:07.739269 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Jan 29 11:15:07.739274 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Jan 29 11:15:07.739280 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Jan 29 11:15:07.739285 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Jan 29 11:15:07.739290 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Jan 29 11:15:07.739295 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Jan 29 11:15:07.739302 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Jan 29 11:15:07.739307 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Jan 29 11:15:07.739312 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Jan 29 11:15:07.739317 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Jan 29 11:15:07.739332 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Jan 29 11:15:07.739338 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Jan 29 11:15:07.739344 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Jan 29 11:15:07.739349 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Jan 29 11:15:07.739354 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Jan 29 11:15:07.739359 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Jan 29 11:15:07.739367 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Jan 29 11:15:07.739372 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Jan 29 11:15:07.739377 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Jan 29 11:15:07.739382 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Jan 29 11:15:07.739388 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Jan 29 11:15:07.739393 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Jan 29 11:15:07.739398 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Jan 29 11:15:07.739403 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Jan 29 11:15:07.739409 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Jan 29 11:15:07.739414 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Jan 29 11:15:07.739420 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Jan 29 11:15:07.739426 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Jan 29 11:15:07.739431 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Jan 29 11:15:07.739436 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Jan 29 11:15:07.739441 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Jan 29 11:15:07.739447 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Jan 29 11:15:07.739452 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Jan 29 11:15:07.739458 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Jan 29 11:15:07.739463 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Jan 29 11:15:07.739468 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Jan 29 11:15:07.739474 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Jan 29 11:15:07.739480 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Jan 29 11:15:07.739485 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 29 11:15:07.739491 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 29 11:15:07.739496 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jan 29 11:15:07.739502 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Jan 29 11:15:07.739507 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Jan 29 11:15:07.739513 kernel: Zone ranges: Jan 29 11:15:07.739518 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 11:15:07.739524 kernel: 
DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jan 29 11:15:07.739530 kernel: Normal empty Jan 29 11:15:07.739536 kernel: Movable zone start for each node Jan 29 11:15:07.739541 kernel: Early memory node ranges Jan 29 11:15:07.739547 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jan 29 11:15:07.739552 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jan 29 11:15:07.739557 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jan 29 11:15:07.739563 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jan 29 11:15:07.739568 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:15:07.739574 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jan 29 11:15:07.739580 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jan 29 11:15:07.739586 kernel: ACPI: PM-Timer IO Port: 0x1008 Jan 29 11:15:07.739591 kernel: system APIC only can use physical flat Jan 29 11:15:07.739596 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jan 29 11:15:07.739602 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jan 29 11:15:07.739607 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jan 29 11:15:07.739613 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jan 29 11:15:07.739618 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jan 29 11:15:07.739623 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jan 29 11:15:07.739630 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jan 29 11:15:07.739635 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jan 29 11:15:07.739640 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jan 29 11:15:07.739646 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jan 29 11:15:07.739651 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jan 29 11:15:07.739656 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jan 29 11:15:07.739662 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jan 29 11:15:07.739667 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jan 29 11:15:07.739672 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jan 29 11:15:07.739678 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jan 29 11:15:07.739684 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jan 29 11:15:07.739690 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jan 29 11:15:07.739695 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jan 29 11:15:07.739700 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jan 29 11:15:07.739705 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jan 29 11:15:07.739711 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jan 29 11:15:07.739716 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jan 29 11:15:07.739721 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jan 29 11:15:07.739727 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jan 29 11:15:07.739732 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jan 29 11:15:07.739739 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jan 29 11:15:07.739744 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jan 29 11:15:07.739749 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jan 29 11:15:07.739755 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jan 29 11:15:07.739760 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jan 29 11:15:07.739765 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jan 29 11:15:07.739771 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jan 29 11:15:07.739776 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jan 29 11:15:07.739781 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jan 29 11:15:07.739787 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jan 29 11:15:07.739793 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jan 29 11:15:07.739799 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jan 29 11:15:07.739804 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jan 29 11:15:07.739809 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jan 29 11:15:07.739815 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jan 29 11:15:07.739820 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jan 29 11:15:07.739825 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jan 29 11:15:07.739831 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jan 29 11:15:07.739836 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jan 29 11:15:07.739841 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jan 29 11:15:07.739848 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jan 29 11:15:07.739853 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jan 29 11:15:07.739858 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jan 29 11:15:07.739864 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jan 29 11:15:07.739869 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jan 29 11:15:07.739875 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jan 29 11:15:07.739880 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jan 29 11:15:07.739885 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jan 29 11:15:07.739891 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jan 29 11:15:07.739897 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jan 29 11:15:07.739902 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jan 29 11:15:07.739908 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jan 29 11:15:07.739913 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jan 29 11:15:07.739918 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jan 29 11:15:07.739924 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jan 29 11:15:07.739929 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jan 29 11:15:07.739935 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jan 29 11:15:07.739940 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jan 29 11:15:07.739945 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jan 29 11:15:07.739951 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jan 29 11:15:07.739957 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jan 29 11:15:07.739962 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jan 29 11:15:07.739967 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jan 29 11:15:07.739973 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jan 29 11:15:07.739978 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jan 29 11:15:07.739984 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jan 29 11:15:07.739989 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jan 29 11:15:07.739994 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jan 29 11:15:07.740000 
kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jan 29 11:15:07.740013 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jan 29 11:15:07.740019 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jan 29 11:15:07.740024 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jan 29 11:15:07.740030 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jan 29 11:15:07.740035 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jan 29 11:15:07.740040 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jan 29 11:15:07.740046 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jan 29 11:15:07.740051 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jan 29 11:15:07.740056 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jan 29 11:15:07.740062 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jan 29 11:15:07.740068 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jan 29 11:15:07.740073 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jan 29 11:15:07.740079 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jan 29 11:15:07.740084 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jan 29 11:15:07.740090 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jan 29 11:15:07.740095 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jan 29 11:15:07.740100 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jan 29 11:15:07.740106 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jan 29 11:15:07.740111 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jan 29 11:15:07.740118 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jan 29 11:15:07.740124 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jan 29 11:15:07.740129 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jan 29 11:15:07.740135 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jan 29 11:15:07.740140 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jan 29 11:15:07.740145 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jan 29 11:15:07.740151 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jan 29 11:15:07.740156 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jan 29 11:15:07.740161 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jan 29 11:15:07.740208 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jan 29 11:15:07.740217 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jan 29 11:15:07.740223 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jan 29 11:15:07.740228 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jan 29 11:15:07.740234 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jan 29 11:15:07.740239 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jan 29 11:15:07.740244 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jan 29 11:15:07.740249 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jan 29 11:15:07.740255 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jan 29 11:15:07.740260 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jan 29 11:15:07.740265 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jan 29 11:15:07.740272 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jan 29 11:15:07.740277 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jan 29 11:15:07.740282 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jan 29 
11:15:07.740288 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jan 29 11:15:07.740293 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jan 29 11:15:07.740298 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jan 29 11:15:07.740304 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jan 29 11:15:07.740309 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jan 29 11:15:07.740315 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jan 29 11:15:07.740320 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jan 29 11:15:07.740326 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jan 29 11:15:07.740332 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jan 29 11:15:07.740337 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jan 29 11:15:07.740342 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jan 29 11:15:07.740348 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jan 29 11:15:07.740354 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jan 29 11:15:07.740359 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 11:15:07.740364 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jan 29 11:15:07.740370 kernel: TSC deadline timer available Jan 29 11:15:07.740376 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jan 29 11:15:07.740382 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jan 29 11:15:07.740387 kernel: Booting paravirtualized kernel on VMware hypervisor Jan 29 11:15:07.740393 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 11:15:07.740398 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jan 29 11:15:07.740404 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 29 11:15:07.740409 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 29 11:15:07.740415 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jan 29 11:15:07.740420 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jan 29 11:15:07.740427 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jan 29 11:15:07.740432 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jan 29 11:15:07.740437 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jan 29 11:15:07.740450 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jan 29 11:15:07.740456 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jan 29 11:15:07.740462 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jan 29 11:15:07.740468 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jan 29 11:15:07.740473 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jan 29 11:15:07.740480 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jan 29 11:15:07.740486 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jan 29 11:15:07.740491 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jan 29 11:15:07.740497 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jan 29 11:15:07.740503 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jan 29 11:15:07.740508 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jan 29 11:15:07.740515 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 
flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 29 11:15:07.740521 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 11:15:07.740528 kernel: random: crng init done Jan 29 11:15:07.740533 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jan 29 11:15:07.740539 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jan 29 11:15:07.740545 kernel: printk: log_buf_len min size: 262144 bytes Jan 29 11:15:07.740550 kernel: printk: log_buf_len: 1048576 bytes Jan 29 11:15:07.740556 kernel: printk: early log buf free: 239648(91%) Jan 29 11:15:07.740562 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:15:07.740568 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 29 11:15:07.740573 kernel: Fallback order for Node 0: 0 Jan 29 11:15:07.740579 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Jan 29 11:15:07.740586 kernel: Policy zone: DMA32 Jan 29 11:15:07.740592 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:15:07.740598 kernel: Memory: 1934292K/2096628K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 162076K reserved, 0K cma-reserved) Jan 29 11:15:07.740605 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jan 29 11:15:07.740611 kernel: ftrace: allocating 37893 entries in 149 pages Jan 29 11:15:07.740617 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 11:15:07.740623 kernel: Dynamic Preempt: voluntary Jan 29 11:15:07.740629 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:15:07.740635 kernel: rcu: RCU event tracing is enabled. Jan 29 11:15:07.740641 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jan 29 11:15:07.740647 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:15:07.740653 kernel: Rude variant of Tasks RCU enabled. Jan 29 11:15:07.740659 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:15:07.740664 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 11:15:07.740670 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jan 29 11:15:07.740677 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jan 29 11:15:07.740683 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Jan 29 11:15:07.740689 kernel: Console: colour VGA+ 80x25 Jan 29 11:15:07.740694 kernel: printk: console [tty0] enabled Jan 29 11:15:07.740700 kernel: printk: console [ttyS0] enabled Jan 29 11:15:07.740706 kernel: ACPI: Core revision 20230628 Jan 29 11:15:07.740712 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jan 29 11:15:07.740717 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 11:15:07.740723 kernel: x2apic enabled Jan 29 11:15:07.740730 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 11:15:07.740736 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 11:15:07.740743 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 29 11:15:07.740749 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Jan 29 11:15:07.740754 kernel: Disabled fast string operations Jan 29 11:15:07.740760 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 29 11:15:07.740766 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 29 11:15:07.740772 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 11:15:07.740778 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 29 11:15:07.740785 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 29 11:15:07.740790 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 29 11:15:07.740796 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 11:15:07.740802 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 29 11:15:07.740808 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 29 11:15:07.740814 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 11:15:07.740820 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 11:15:07.740826 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 11:15:07.740832 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 29 11:15:07.740838 kernel: GDS: Unknown: Dependent on hypervisor status Jan 29 11:15:07.740844 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 11:15:07.740850 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 11:15:07.740856 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 11:15:07.740862 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 11:15:07.740867 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 29 11:15:07.740873 kernel: Freeing SMP alternatives memory: 32K Jan 29 11:15:07.740879 kernel: pid_max: default: 131072 minimum: 1024 Jan 29 11:15:07.740885 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:15:07.740892 kernel: landlock: Up and running. Jan 29 11:15:07.740897 kernel: SELinux: Initializing. Jan 29 11:15:07.740903 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 11:15:07.740909 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 11:15:07.740915 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 29 11:15:07.740921 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 29 11:15:07.740926 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 29 11:15:07.740932 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 29 11:15:07.740939 kernel: Performance Events: Skylake events, core PMU driver. 
Jan 29 11:15:07.740945 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jan 29 11:15:07.740951 kernel: core: CPUID marked event: 'instructions' unavailable Jan 29 11:15:07.740957 kernel: core: CPUID marked event: 'bus cycles' unavailable Jan 29 11:15:07.740962 kernel: core: CPUID marked event: 'cache references' unavailable Jan 29 11:15:07.740968 kernel: core: CPUID marked event: 'cache misses' unavailable Jan 29 11:15:07.740973 kernel: core: CPUID marked event: 'branch instructions' unavailable Jan 29 11:15:07.740979 kernel: core: CPUID marked event: 'branch misses' unavailable Jan 29 11:15:07.740985 kernel: ... version: 1 Jan 29 11:15:07.740992 kernel: ... bit width: 48 Jan 29 11:15:07.740997 kernel: ... generic registers: 4 Jan 29 11:15:07.741008 kernel: ... value mask: 0000ffffffffffff Jan 29 11:15:07.741013 kernel: ... max period: 000000007fffffff Jan 29 11:15:07.741019 kernel: ... fixed-purpose events: 0 Jan 29 11:15:07.741025 kernel: ... event mask: 000000000000000f Jan 29 11:15:07.741031 kernel: signal: max sigframe size: 1776 Jan 29 11:15:07.741037 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:15:07.741043 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:15:07.741050 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 11:15:07.741056 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:15:07.741061 kernel: smpboot: x86: Booting SMP configuration: Jan 29 11:15:07.741067 kernel: .... node #0, CPUs: #1 Jan 29 11:15:07.741073 kernel: Disabled fast string operations Jan 29 11:15:07.741079 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jan 29 11:15:07.741085 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 29 11:15:07.741090 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 11:15:07.741096 kernel: smpboot: Max logical packages: 128 Jan 29 11:15:07.741102 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jan 29 11:15:07.741109 kernel: devtmpfs: initialized Jan 29 11:15:07.741115 kernel: x86/mm: Memory block size: 128MB Jan 29 11:15:07.741120 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jan 29 11:15:07.741126 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:15:07.741132 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jan 29 11:15:07.741138 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:15:07.741145 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:15:07.741151 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:15:07.741156 kernel: audit: type=2000 audit(1738149306.068:1): state=initialized audit_enabled=0 res=1 Jan 29 11:15:07.741163 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:15:07.741174 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 11:15:07.741186 kernel: cpuidle: using governor menu Jan 29 11:15:07.741192 kernel: Simple Boot Flag at 0x36 set to 0x80 Jan 29 11:15:07.741198 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:15:07.741204 kernel: dca service started, version 1.12.1 Jan 29 11:15:07.741210 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jan 29 11:15:07.741215 kernel: PCI: Using configuration type 1 for base access Jan 29 11:15:07.741221 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 11:15:07.741229 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 11:15:07.741235 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 11:15:07.741240 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:15:07.741246 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:15:07.741252 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:15:07.741258 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:15:07.741264 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:15:07.741269 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:15:07.741275 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 11:15:07.741282 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jan 29 11:15:07.741288 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 11:15:07.741294 kernel: ACPI: Interpreter enabled Jan 29 11:15:07.741300 kernel: ACPI: PM: (supports S0 S1 S5) Jan 29 11:15:07.741305 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 11:15:07.741311 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 11:15:07.741317 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 11:15:07.741323 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jan 29 11:15:07.741329 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jan 29 11:15:07.741408 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:15:07.741466 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jan 29 11:15:07.741516 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jan 29 11:15:07.741524 kernel: PCI host bridge to bus 0000:00 Jan 29 11:15:07.741574 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 11:15:07.741619 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jan 29 11:15:07.741667 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 11:15:07.741711 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 11:15:07.741755 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jan 29 11:15:07.741798 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jan 29 11:15:07.741856 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jan 29 11:15:07.741913 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jan 29 11:15:07.741971 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jan 29 11:15:07.742028 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jan 29 11:15:07.742079 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jan 29 11:15:07.742130 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 29 11:15:07.742194 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 29 11:15:07.742247 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 29 11:15:07.742297 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 29 11:15:07.742354 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jan 29 11:15:07.742407 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jan 29 11:15:07.742458 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jan 29 11:15:07.742513 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jan 29 11:15:07.742564 
kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jan 29 11:15:07.742614 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jan 29 11:15:07.742671 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jan 29 11:15:07.742721 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jan 29 11:15:07.742771 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jan 29 11:15:07.742821 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jan 29 11:15:07.742873 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jan 29 11:15:07.742924 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 11:15:07.742977 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jan 29 11:15:07.743039 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.743090 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.743145 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.743220 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.743278 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.743329 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.743389 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.743441 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.743498 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.743550 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.743605 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.743656 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.743712 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.743765 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.743820 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.743871 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.743927 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.743979 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.744041 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.744093 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.744149 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.744224 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.744280 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.744331 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.744391 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.744442 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.744496 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.744547 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.744602 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.744652 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.744709 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.744761 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.744816 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 
0x060400 Jan 29 11:15:07.744868 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.744923 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.744975 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.745037 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.745094 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.745149 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.745222 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.745282 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.745334 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.745389 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.745445 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.745499 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.745550 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.745605 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.745657 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.745713 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.745768 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.745823 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.745875 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.745931 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.745983 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.746039 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.746093 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.746148 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.746231 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.746290 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.746341 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.746405 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.746460 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.746514 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.746565 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.746618 kernel: pci_bus 0000:01: extended config space not accessible Jan 29 11:15:07.746670 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 29 11:15:07.746721 kernel: pci_bus 0000:02: extended config space not accessible Jan 29 11:15:07.746730 kernel: acpiphp: Slot [32] registered Jan 29 11:15:07.746738 kernel: acpiphp: Slot [33] registered Jan 29 11:15:07.746744 kernel: acpiphp: Slot [34] registered Jan 29 11:15:07.746750 kernel: acpiphp: Slot [35] registered Jan 29 11:15:07.746756 kernel: acpiphp: Slot [36] registered Jan 29 11:15:07.746762 kernel: acpiphp: Slot [37] registered Jan 29 11:15:07.746768 kernel: acpiphp: Slot [38] registered Jan 29 11:15:07.746773 kernel: acpiphp: Slot [39] registered Jan 29 11:15:07.746779 kernel: acpiphp: Slot [40] registered Jan 29 11:15:07.746785 kernel: acpiphp: Slot [41] registered Jan 29 11:15:07.746791 kernel: acpiphp: Slot [42] registered Jan 29 
11:15:07.746797 kernel: acpiphp: Slot [43] registered Jan 29 11:15:07.746803 kernel: acpiphp: Slot [44] registered Jan 29 11:15:07.746809 kernel: acpiphp: Slot [45] registered Jan 29 11:15:07.746814 kernel: acpiphp: Slot [46] registered Jan 29 11:15:07.746820 kernel: acpiphp: Slot [47] registered Jan 29 11:15:07.746826 kernel: acpiphp: Slot [48] registered Jan 29 11:15:07.746832 kernel: acpiphp: Slot [49] registered Jan 29 11:15:07.746837 kernel: acpiphp: Slot [50] registered Jan 29 11:15:07.746843 kernel: acpiphp: Slot [51] registered Jan 29 11:15:07.746850 kernel: acpiphp: Slot [52] registered Jan 29 11:15:07.746856 kernel: acpiphp: Slot [53] registered Jan 29 11:15:07.746861 kernel: acpiphp: Slot [54] registered Jan 29 11:15:07.746867 kernel: acpiphp: Slot [55] registered Jan 29 11:15:07.746872 kernel: acpiphp: Slot [56] registered Jan 29 11:15:07.746878 kernel: acpiphp: Slot [57] registered Jan 29 11:15:07.746884 kernel: acpiphp: Slot [58] registered Jan 29 11:15:07.746890 kernel: acpiphp: Slot [59] registered Jan 29 11:15:07.746895 kernel: acpiphp: Slot [60] registered Jan 29 11:15:07.746902 kernel: acpiphp: Slot [61] registered Jan 29 11:15:07.746908 kernel: acpiphp: Slot [62] registered Jan 29 11:15:07.746914 kernel: acpiphp: Slot [63] registered Jan 29 11:15:07.746963 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jan 29 11:15:07.747022 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 29 11:15:07.747075 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 29 11:15:07.747125 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 29 11:15:07.747562 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jan 29 11:15:07.747626 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jan 29 11:15:07.747678 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jan 29 11:15:07.747729 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jan 29 11:15:07.747780 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jan 29 11:15:07.747837 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jan 29 11:15:07.747890 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jan 29 11:15:07.747941 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jan 29 11:15:07.747995 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 29 11:15:07.748046 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.748097 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jan 29 11:15:07.748149 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 29 11:15:07.748219 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 29 11:15:07.748271 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 29 11:15:07.748321 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 29 11:15:07.748370 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 29 11:15:07.748424 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 29 11:15:07.748474 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 29 11:15:07.748526 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 29 11:15:07.748576 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 29 11:15:07.748627 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 29 11:15:07.748748 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 29 11:15:07.748804 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 29 11:15:07.748859 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 29 11:15:07.748910 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 29 11:15:07.748960 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 29 11:15:07.749011 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 29 11:15:07.749061 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 29 11:15:07.749115 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 29 11:15:07.749165 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 29 11:15:07.750391 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 29 11:15:07.750449 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 29 11:15:07.750502 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jan 29 11:15:07.750554 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 29 11:15:07.750606 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 29 11:15:07.750657 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 29 11:15:07.750711 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 29 11:15:07.750768 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jan 29 11:15:07.750822 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jan 29 11:15:07.750874 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jan 29 11:15:07.750926 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jan 29 11:15:07.750977 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jan 29 11:15:07.751051 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 29 11:15:07.751108 kernel: pci 0000:0b:00.0: supports D1 D2 Jan 29 11:15:07.751160 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 11:15:07.751231 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jan 29 11:15:07.751339 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 29 11:15:07.751394 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 29 11:15:07.751445 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jan 29 11:15:07.751495 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 29 11:15:07.751546 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 29 11:15:07.751600 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 29 11:15:07.751650 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 29 11:15:07.751700 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 29 11:15:07.751750 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 29 11:15:07.751800 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 29 11:15:07.751850 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 29 11:15:07.751901 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 29 11:15:07.751950 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jan 29 11:15:07.752059 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 29 11:15:07.753087 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 29 11:15:07.753151 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 29 11:15:07.753252 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 29 11:15:07.753309 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 29 11:15:07.753363 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 29 11:15:07.753416 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 29 11:15:07.753471 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 29 11:15:07.753528 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 29 11:15:07.753581 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 29 11:15:07.753635 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 29 11:15:07.753689 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jan 29 11:15:07.753741 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 29 11:15:07.753794 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 29 11:15:07.753847 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 29 11:15:07.753899 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 29 11:15:07.753955 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 29 11:15:07.754013 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 29 11:15:07.754066 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 29 11:15:07.754119 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 29 11:15:07.754245 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 29 11:15:07.754332 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 29 11:15:07.754386 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 29 11:15:07.754439 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 29 11:15:07.754488 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 29 11:15:07.754539 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 29 11:15:07.754589 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 29 11:15:07.754640 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 29 11:15:07.754691 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 29 11:15:07.754740 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 29 11:15:07.754790 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 29 11:15:07.754843 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 29 11:15:07.754893 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 29 11:15:07.754943 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 29 11:15:07.755007 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 29 11:15:07.755086 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 29 11:15:07.755140 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 29 11:15:07.755205 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 29 11:15:07.755258 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 29 11:15:07.755313 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 29 11:15:07.755365 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 29 11:15:07.755415 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 29 11:15:07.755466 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 29 11:15:07.755517 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 29 11:15:07.755568 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 29 11:15:07.755619 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 29 11:15:07.755669 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 29 11:15:07.755723 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 29 11:15:07.755774 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 29 11:15:07.755826 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 29 11:15:07.755877 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 29 11:15:07.755928 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 29 11:15:07.755979 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 29 11:15:07.756029 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 29 11:15:07.756083 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 29 11:15:07.756134 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 29 11:15:07.756279 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jan 29 11:15:07.756334 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 29 11:15:07.756386 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 29 11:15:07.756437 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 29 11:15:07.756488 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 29 11:15:07.756538 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 29 11:15:07.756619 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 29 11:15:07.756678 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 29 11:15:07.756729 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 29 11:15:07.756779 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 29 11:15:07.756788 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jan 29 11:15:07.756794 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 Jan 29 11:15:07.756800 kernel: ACPI: PCI: Interrupt link LNKB disabled Jan 29 11:15:07.756807 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 11:15:07.756812 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jan 29 11:15:07.756821 kernel: iommu: Default domain type: Translated Jan 29 11:15:07.756826 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 11:15:07.756832 kernel: PCI: Using ACPI for IRQ routing Jan 29 11:15:07.756838 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 11:15:07.756844 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jan 29 11:15:07.756850 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jan 29 11:15:07.756898 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jan 29 11:15:07.756948 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jan 29 11:15:07.756999 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 11:15:07.757015 kernel: vgaarb: loaded Jan 29 11:15:07.757022 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jan 29 11:15:07.757028 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jan 29 11:15:07.757033 kernel: clocksource: Switched to clocksource tsc-early Jan 29 11:15:07.757039 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 11:15:07.757045 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:15:07.757051 kernel: pnp: PnP ACPI init Jan 29 11:15:07.757112 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jan 29 11:15:07.757164 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jan 29 11:15:07.757228 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jan 29 11:15:07.757279 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jan 29 11:15:07.757328 kernel: pnp 00:06: [dma 2] Jan 29 11:15:07.757378 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jan 29 11:15:07.757425 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jan 29 11:15:07.757471 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jan 29 11:15:07.757481 kernel: pnp: PnP ACPI: found 8 devices Jan 29 11:15:07.757488 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 11:15:07.757494 kernel: NET: Registered PF_INET protocol family Jan 29 11:15:07.757500 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 11:15:07.757506 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 29 11:15:07.757512 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:15:07.757518 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 11:15:07.757524 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 11:15:07.757531 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 29 11:15:07.757537 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 11:15:07.757543 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 11:15:07.757549 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:15:07.757555 kernel: NET: Registered PF_XDP protocol family Jan 29 11:15:07.757606 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jan 29 11:15:07.757658 kernel: pci 
0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 29 11:15:07.757710 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 29 11:15:07.757765 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 29 11:15:07.757815 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 29 11:15:07.757921 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 29 11:15:07.757976 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jan 29 11:15:07.758028 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 29 11:15:07.758080 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 29 11:15:07.758136 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 29 11:15:07.758201 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 29 11:15:07.758255 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 29 11:15:07.758307 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 29 11:15:07.758358 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 29 11:15:07.758414 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 29 11:15:07.758466 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 29 11:15:07.758517 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 29 11:15:07.758569 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 29 11:15:07.758620 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 29 11:15:07.758671 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jan 29 11:15:07.758726 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jan 29 11:15:07.758777 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jan 29 11:15:07.758829 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jan 29 11:15:07.758879 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jan 29 11:15:07.758979 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jan 29 11:15:07.759036 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.759087 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.759142 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.759209 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.759261 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.759312 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.759362 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.759414 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.759465 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.759516 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.759628 kernel: pci 0000:00:16.3: BAR 13: no space for [io 
size 0x1000] Jan 29 11:15:07.759685 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.759737 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.759788 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.759838 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.759889 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.759939 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.759990 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.760044 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.760113 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.760165 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.760224 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.760275 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.760327 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.760378 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.760429 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.760484 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.760535 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.760585 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.760636 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.760687 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.760744 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.760796 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.760846 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.760900 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.760951 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.761026 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.761087 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.761138 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.761203 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.761257 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.761351 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.761408 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.761459 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.761510 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.761560 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.761612 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.761662 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.761713 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.761763 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.761814 kernel: pci 0000:00:18.3: BAR 13: no space 
for [io size 0x1000] Jan 29 11:15:07.761865 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.761919 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.761969 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.762020 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.762071 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.762122 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.762229 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.762284 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.762334 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.762438 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.762495 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.762546 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.762596 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.762647 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.762697 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.762748 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.762798 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.762849 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.762899 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.762949 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.763002 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.763052 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.763103 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.763153 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.763253 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.763304 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.763355 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.763404 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.763454 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.763508 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.763558 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.763608 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.763658 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.763709 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 29 11:15:07.763760 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jan 29 11:15:07.763810 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 29 11:15:07.763859 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 29 11:15:07.763909 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 29 11:15:07.763965 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jan 29 11:15:07.764017 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 
29 11:15:07.764068 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 29 11:15:07.764119 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 29 11:15:07.764183 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jan 29 11:15:07.764237 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 29 11:15:07.764288 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 29 11:15:07.764338 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 29 11:15:07.764389 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 29 11:15:07.764444 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 29 11:15:07.764494 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 29 11:15:07.764544 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 29 11:15:07.764594 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 29 11:15:07.764645 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 29 11:15:07.764695 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 29 11:15:07.764745 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 29 11:15:07.764796 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 29 11:15:07.764847 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 29 11:15:07.764900 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 29 11:15:07.764953 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 29 11:15:07.765007 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 29 11:15:07.765059 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 29 11:15:07.765109 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 29 11:15:07.765158 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jan 29 11:15:07.765281 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 29 11:15:07.765333 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 29 11:15:07.765382 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 29 11:15:07.765433 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 29 11:15:07.765484 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jan 29 11:15:07.765534 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 29 11:15:07.765583 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 29 11:15:07.765635 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jan 29 11:15:07.765685 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jan 29 11:15:07.765739 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 29 11:15:07.765790 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 29 11:15:07.765840 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 29 11:15:07.765891 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 29 11:15:07.765941 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 29 11:15:07.765991 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 29 11:15:07.766042 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 29 11:15:07.766093 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 29 11:15:07.766143 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 29 11:15:07.766209 kernel: pci 0000:00:16.3: 
bridge window [mem 0xfc800000-0xfc8fffff] Jan 29 11:15:07.766262 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 29 11:15:07.766313 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 29 11:15:07.766363 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 29 11:15:07.766414 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 29 11:15:07.766465 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 29 11:15:07.766516 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 29 11:15:07.766566 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 29 11:15:07.766616 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 29 11:15:07.766666 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 29 11:15:07.766721 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 29 11:15:07.766772 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 29 11:15:07.766822 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jan 29 11:15:07.766872 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 29 11:15:07.766922 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 29 11:15:07.766972 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 29 11:15:07.767022 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 29 11:15:07.767071 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 29 11:15:07.767122 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 29 11:15:07.767192 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 29 11:15:07.767247 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 29 11:15:07.767297 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 29 11:15:07.767348 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 29 11:15:07.767399 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 29 11:15:07.767449 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 29 11:15:07.767499 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 29 11:15:07.767550 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 29 11:15:07.767600 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 29 11:15:07.767651 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 29 11:15:07.767705 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 29 11:15:07.767755 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 29 11:15:07.767805 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 29 11:15:07.767855 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 29 11:15:07.767906 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 29 11:15:07.767956 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 29 11:15:07.768010 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 29 11:15:07.768062 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 29 11:15:07.768113 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 29 11:15:07.768201 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 29 11:15:07.768260 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 29 11:15:07.768310 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Jan 29 11:15:07.768359 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 29 11:15:07.768409 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 29 11:15:07.768459 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 29 11:15:07.768509 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 29 11:15:07.768559 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 29 11:15:07.768610 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 29 11:15:07.768660 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 29 11:15:07.768713 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 29 11:15:07.768763 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 29 11:15:07.768813 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 29 11:15:07.768863 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 29 11:15:07.768913 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 29 11:15:07.768963 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 29 11:15:07.769013 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 29 11:15:07.769063 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 29 11:15:07.769112 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 29 11:15:07.769188 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jan 29 11:15:07.769243 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 29 11:15:07.769294 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 29 11:15:07.769344 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 29 11:15:07.769395 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 29 11:15:07.769446 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 29 11:15:07.769496 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 29 11:15:07.769547 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 29 11:15:07.769597 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 29 11:15:07.769648 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 29 11:15:07.769700 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jan 29 11:15:07.769756 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 29 11:15:07.769801 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 29 11:15:07.769846 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jan 29 11:15:07.769891 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jan 29 11:15:07.769940 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jan 29 11:15:07.770428 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jan 29 11:15:07.770487 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 29 11:15:07.770536 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jan 29 11:15:07.770582 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 29 11:15:07.770628 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 29 11:15:07.770674 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jan 29 11:15:07.770728 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jan 29 11:15:07.770781 kernel: pci_bus 0000:03: resource 0 [io 
0x4000-0x4fff] Jan 29 11:15:07.770828 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jan 29 11:15:07.770878 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jan 29 11:15:07.770928 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jan 29 11:15:07.770975 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jan 29 11:15:07.771045 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jan 29 11:15:07.771098 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jan 29 11:15:07.771145 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jan 29 11:15:07.771217 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jan 29 11:15:07.771280 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jan 29 11:15:07.771334 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jan 29 11:15:07.771383 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jan 29 11:15:07.771431 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 29 11:15:07.771481 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jan 29 11:15:07.771527 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jan 29 11:15:07.771580 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jan 29 11:15:07.771627 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jan 29 11:15:07.771681 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jan 29 11:15:07.771738 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jan 29 11:15:07.771791 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jan 29 11:15:07.771877 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jan 29 11:15:07.771942 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jan 29 11:15:07.771993 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jan 29 11:15:07.772089 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jan 29 11:15:07.772136 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jan 29 11:15:07.772228 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jan 29 11:15:07.772278 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jan 29 11:15:07.772332 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jan 29 11:15:07.772385 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jan 29 11:15:07.772433 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 29 11:15:07.772484 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jan 29 11:15:07.772531 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 29 11:15:07.772581 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jan 29 11:15:07.772631 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jan 29 11:15:07.772681 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jan 29 11:15:07.772732 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jan 29 11:15:07.772782 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jan 29 11:15:07.772829 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 29 11:15:07.772879 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jan 29 11:15:07.772927 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jan 29 11:15:07.772976 
kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 29 11:15:07.773027 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jan 29 11:15:07.773133 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jan 29 11:15:07.773198 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jan 29 11:15:07.773252 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jan 29 11:15:07.773300 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jan 29 11:15:07.773351 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jan 29 11:15:07.773403 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jan 29 11:15:07.773451 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 29 11:15:07.773504 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jan 29 11:15:07.773552 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 29 11:15:07.773603 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jan 29 11:15:07.773651 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jan 29 11:15:07.773705 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jan 29 11:15:07.773752 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jan 29 11:15:07.773805 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jan 29 11:15:07.773853 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 29 11:15:07.773907 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jan 29 11:15:07.773958 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jan 29 11:15:07.774005 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jan 29 11:15:07.774056 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jan 29 11:15:07.774103 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jan 29 11:15:07.774150 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jan 29 11:15:07.774214 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jan 29 11:15:07.774263 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jan 29 11:15:07.774317 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jan 29 11:15:07.774365 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 29 11:15:07.774417 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jan 29 11:15:07.774573 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jan 29 11:15:07.774800 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jan 29 11:15:07.774852 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jan 29 11:15:07.774917 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jan 29 11:15:07.774965 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jan 29 11:15:07.775016 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jan 29 11:15:07.775064 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 29 11:15:07.775120 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 11:15:07.775130 kernel: PCI: CLS 32 bytes, default 64 Jan 29 11:15:07.775137 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 11:15:07.775146 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 29 
11:15:07.775152 kernel: clocksource: Switched to clocksource tsc Jan 29 11:15:07.775158 kernel: Initialise system trusted keyrings Jan 29 11:15:07.775164 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 11:15:07.775176 kernel: Key type asymmetric registered Jan 29 11:15:07.775183 kernel: Asymmetric key parser 'x509' registered Jan 29 11:15:07.775189 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 11:15:07.775195 kernel: io scheduler mq-deadline registered Jan 29 11:15:07.775201 kernel: io scheduler kyber registered Jan 29 11:15:07.775209 kernel: io scheduler bfq registered Jan 29 11:15:07.775263 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jan 29 11:15:07.775318 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.775371 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jan 29 11:15:07.775423 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.775475 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jan 29 11:15:07.775528 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.775580 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jan 29 11:15:07.775635 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.775690 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jan 29 11:15:07.775743 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.775794 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jan 29 11:15:07.775846 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.775901 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jan 29 11:15:07.775953 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776008 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jan 29 11:15:07.776062 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776114 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jan 29 11:15:07.776171 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776234 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jan 29 11:15:07.776286 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776337 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jan 29 11:15:07.776388 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776451 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jan 29 11:15:07.776509 kernel: pcieport 
0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776565 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jan 29 11:15:07.776617 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776669 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jan 29 11:15:07.776721 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776773 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jan 29 11:15:07.776828 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776879 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jan 29 11:15:07.776931 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776982 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jan 29 11:15:07.777033 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.777084 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jan 29 11:15:07.777136 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.777204 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jan 29 11:15:07.777256 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.777307 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jan 29 11:15:07.777358 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.777410 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jan 29 11:15:07.777461 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.777516 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jan 29 11:15:07.777591 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.777649 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jan 29 11:15:07.777701 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.777752 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jan 29 11:15:07.777808 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.777859 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jan 29 11:15:07.777911 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.777962 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jan 29 11:15:07.778013 kernel: pcieport 
0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.778064 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jan 29 11:15:07.778118 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.780200 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jan 29 11:15:07.780264 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.780320 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jan 29 11:15:07.780373 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.780426 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jan 29 11:15:07.780482 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.780534 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jan 29 11:15:07.780586 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.780637 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jan 29 11:15:07.780689 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.780700 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 11:15:07.780707 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:15:07.780714 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 11:15:07.780720 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jan 29 11:15:07.780727 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 11:15:07.780734 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 11:15:07.780785 kernel: rtc_cmos 00:01: registered as rtc0 Jan 29 11:15:07.780833 kernel: rtc_cmos 00:01: setting system clock to 2025-01-29T11:15:07 UTC (1738149307) Jan 29 11:15:07.780844 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 11:15:07.780889 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jan 29 11:15:07.780898 kernel: intel_pstate: CPU model not supported Jan 29 11:15:07.780905 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:15:07.780911 kernel: Segment Routing with IPv6 Jan 29 11:15:07.780917 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:15:07.780924 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:15:07.780930 kernel: Key type dns_resolver registered Jan 29 11:15:07.780936 kernel: IPI shorthand broadcast: enabled Jan 29 11:15:07.780944 kernel: sched_clock: Marking stable (932210695, 239295427)->(1237141746, -65635624) Jan 29 11:15:07.780951 kernel: registered taskstats version 1 Jan 29 11:15:07.780957 kernel: Loading compiled-in X.509 certificates Jan 29 11:15:07.780963 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4' Jan 29 11:15:07.780969 kernel: Key type .fscrypt registered Jan 29 11:15:07.780975 kernel: Key type fscrypt-provisioning registered Jan 29 11:15:07.780982 
kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 11:15:07.780988 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:15:07.780996 kernel: ima: No architecture policies found Jan 29 11:15:07.781021 kernel: clk: Disabling unused clocks Jan 29 11:15:07.781029 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 29 11:15:07.781035 kernel: Write protecting the kernel read-only data: 38912k Jan 29 11:15:07.781042 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 29 11:15:07.781048 kernel: Run /init as init process Jan 29 11:15:07.781054 kernel: with arguments: Jan 29 11:15:07.781060 kernel: /init Jan 29 11:15:07.781066 kernel: with environment: Jan 29 11:15:07.781072 kernel: HOME=/ Jan 29 11:15:07.781080 kernel: TERM=linux Jan 29 11:15:07.781086 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:15:07.781094 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:15:07.781102 systemd[1]: Detected virtualization vmware. Jan 29 11:15:07.781109 systemd[1]: Detected architecture x86-64. Jan 29 11:15:07.781115 systemd[1]: Running in initrd. Jan 29 11:15:07.781122 systemd[1]: No hostname configured, using default hostname. Jan 29 11:15:07.781129 systemd[1]: Hostname set to . Jan 29 11:15:07.781136 systemd[1]: Initializing machine ID from random generator. Jan 29 11:15:07.781142 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:15:07.781149 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:15:07.781155 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:15:07.781162 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:15:07.781178 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:15:07.781185 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:15:07.781193 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:15:07.781201 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:15:07.781208 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:15:07.781214 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:15:07.781220 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:15:07.781227 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:15:07.781233 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:15:07.781242 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:15:07.781249 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:15:07.781255 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:15:07.781262 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:15:07.781268 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 29 11:15:07.781275 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:15:07.781281 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:15:07.781288 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:15:07.781295 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:15:07.781302 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:15:07.781308 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:15:07.781315 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:15:07.781321 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:15:07.781329 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:15:07.781340 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:15:07.781352 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:15:07.781360 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:15:07.781382 systemd-journald[217]: Collecting audit messages is disabled. Jan 29 11:15:07.781398 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:15:07.781405 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:15:07.781411 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:15:07.781420 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:15:07.781427 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:15:07.781433 kernel: Bridge firewalling registered Jan 29 11:15:07.781439 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:15:07.781446 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:15:07.781455 systemd-journald[217]: Journal started Jan 29 11:15:07.781469 systemd-journald[217]: Runtime Journal (/run/log/journal/4dff5ff9cb044d32bffebd0e7ed16a62) is 4.8M, max 38.6M, 33.8M free. Jan 29 11:15:07.754319 systemd-modules-load[218]: Inserted module 'overlay' Jan 29 11:15:07.778151 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 29 11:15:07.784179 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:15:07.786179 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:15:07.787179 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:15:07.787376 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:15:07.791364 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:15:07.792257 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:15:07.792908 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:15:07.798820 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:15:07.799293 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:15:07.803262 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 29 11:15:07.803431 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:15:07.806263 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:15:07.809590 dracut-cmdline[249]: dracut-dracut-053 Jan 29 11:15:07.812006 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 29 11:15:07.824821 systemd-resolved[255]: Positive Trust Anchors: Jan 29 11:15:07.824832 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:15:07.824855 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:15:07.826811 systemd-resolved[255]: Defaulting to hostname 'linux'. Jan 29 11:15:07.827695 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:15:07.828128 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:15:07.856187 kernel: SCSI subsystem initialized Jan 29 11:15:07.863181 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:15:07.870180 kernel: iscsi: registered transport (tcp) Jan 29 11:15:07.884219 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:15:07.884264 kernel: QLogic iSCSI HBA Driver Jan 29 11:15:07.905256 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:15:07.910257 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:15:07.924384 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:15:07.925442 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:15:07.925455 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:15:07.958180 kernel: raid6: avx2x4 gen() 47258 MB/s Jan 29 11:15:07.973182 kernel: raid6: avx2x2 gen() 53972 MB/s Jan 29 11:15:07.990522 kernel: raid6: avx2x1 gen() 45713 MB/s Jan 29 11:15:07.990541 kernel: raid6: using algorithm avx2x2 gen() 53972 MB/s Jan 29 11:15:08.008526 kernel: raid6: .... xor() 27168 MB/s, rmw enabled Jan 29 11:15:08.008585 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:15:08.023193 kernel: xor: automatically using best checksumming function avx Jan 29 11:15:08.117184 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:15:08.122867 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:15:08.127267 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:15:08.134201 systemd-udevd[436]: Using default interface naming scheme 'v255'. 
Jan 29 11:15:08.136676 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:15:08.142240 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:15:08.148733 dracut-pre-trigger[441]: rd.md=0: removing MD RAID activation Jan 29 11:15:08.164080 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:15:08.168252 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:15:08.240365 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:15:08.244592 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:15:08.263176 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:15:08.263669 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:15:08.264118 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:15:08.264373 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:15:08.268285 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:15:08.279940 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:15:08.319180 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jan 29 11:15:08.324159 kernel: vmw_pvscsi: using 64bit dma Jan 29 11:15:08.324203 kernel: vmw_pvscsi: max_id: 16 Jan 29 11:15:08.324217 kernel: vmw_pvscsi: setting ring_pages to 8 Jan 29 11:15:08.332176 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jan 29 11:15:08.334435 kernel: vmw_pvscsi: enabling reqCallThreshold Jan 29 11:15:08.334453 kernel: vmw_pvscsi: driver-based request coalescing enabled Jan 29 11:15:08.334461 kernel: vmw_pvscsi: using MSI-X Jan 29 11:15:08.335734 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jan 29 11:15:08.342624 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jan 29 11:15:08.345817 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jan 29 11:15:08.346015 kernel: libata version 3.00 loaded. Jan 29 11:15:08.351188 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jan 29 11:15:08.355451 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:15:08.355462 kernel: ata_piix 0000:00:07.1: version 2.13 Jan 29 11:15:08.360798 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jan 29 11:15:08.360877 kernel: scsi host1: ata_piix Jan 29 11:15:08.360939 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jan 29 11:15:08.361011 kernel: scsi host2: ata_piix Jan 29 11:15:08.361077 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jan 29 11:15:08.361086 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jan 29 11:15:08.362305 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:15:08.362381 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:15:08.362680 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:15:08.362796 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:15:08.362862 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:15:08.363024 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 29 11:15:08.366295 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:15:08.372901 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:15:08.372922 kernel: AES CTR mode by8 optimization enabled Jan 29 11:15:08.379055 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:15:08.383248 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:15:08.394340 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:15:08.528194 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jan 29 11:15:08.534196 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jan 29 11:15:08.551277 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jan 29 11:15:08.600190 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 29 11:15:08.600305 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jan 29 11:15:08.600392 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jan 29 11:15:08.600479 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jan 29 11:15:08.600563 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jan 29 11:15:08.600648 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 11:15:08.600660 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 29 11:15:08.600740 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:15:08.600752 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 29 11:15:08.657193 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (486) Jan 29 11:15:08.661184 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (498) Jan 29 11:15:08.663627 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jan 29 11:15:08.667087 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jan 29 11:15:08.669692 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jan 29 11:15:08.669993 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jan 29 11:15:08.673182 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jan 29 11:15:08.681279 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:15:08.707194 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:15:08.712181 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:15:09.720314 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:15:09.720354 disk-uuid[597]: The operation has completed successfully. Jan 29 11:15:09.799663 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:15:09.799720 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:15:09.804248 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:15:09.805956 sh[613]: Success Jan 29 11:15:09.814228 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 11:15:09.924754 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:15:09.934926 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:15:09.935299 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 11:15:09.999808 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 29 11:15:09.999841 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:15:09.999851 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:15:10.000911 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:15:10.001714 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:15:10.009180 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 11:15:10.011224 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:15:10.020253 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jan 29 11:15:10.021507 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:15:10.038555 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:15:10.038586 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:15:10.038595 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:15:10.076179 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:15:10.084689 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:15:10.088180 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:15:10.094751 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:15:10.098264 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:15:10.140294 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jan 29 11:15:10.145389 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:15:10.203265 ignition[674]: Ignition 2.20.0 Jan 29 11:15:10.203272 ignition[674]: Stage: fetch-offline Jan 29 11:15:10.203290 ignition[674]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:15:10.203296 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:15:10.203573 ignition[674]: parsed url from cmdline: "" Jan 29 11:15:10.203575 ignition[674]: no config URL provided Jan 29 11:15:10.203583 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:15:10.203600 ignition[674]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:15:10.204628 ignition[674]: config successfully fetched Jan 29 11:15:10.204646 ignition[674]: parsing config with SHA512: a75c139a1940bba0780c18ae4683dffe2515ce72d9a2a9bec7df0472feba101d616f2245c4b89fa0b279aa1c94e40094414ef52398828b9e196ce601e873e8b8 Jan 29 11:15:10.207524 unknown[674]: fetched base config from "system" Jan 29 11:15:10.207636 unknown[674]: fetched user config from "vmware" Jan 29 11:15:10.208008 ignition[674]: fetch-offline: fetch-offline passed Jan 29 11:15:10.208396 ignition[674]: Ignition finished successfully Jan 29 11:15:10.209104 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:15:10.210074 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:15:10.218276 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 29 11:15:10.229776 systemd-networkd[808]: lo: Link UP Jan 29 11:15:10.229782 systemd-networkd[808]: lo: Gained carrier Jan 29 11:15:10.230488 systemd-networkd[808]: Enumeration completed Jan 29 11:15:10.230736 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:15:10.230752 systemd-networkd[808]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jan 29 11:15:10.231067 systemd[1]: Reached target network.target - Network. Jan 29 11:15:10.231302 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 11:15:10.234178 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jan 29 11:15:10.234283 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jan 29 11:15:10.234020 systemd-networkd[808]: ens192: Link UP Jan 29 11:15:10.234022 systemd-networkd[808]: ens192: Gained carrier Jan 29 11:15:10.241283 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:15:10.248470 ignition[810]: Ignition 2.20.0 Jan 29 11:15:10.248477 ignition[810]: Stage: kargs Jan 29 11:15:10.248611 ignition[810]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:15:10.248617 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:15:10.249162 ignition[810]: kargs: kargs passed Jan 29 11:15:10.249201 ignition[810]: Ignition finished successfully Jan 29 11:15:10.250333 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:15:10.254286 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:15:10.261017 ignition[817]: Ignition 2.20.0 Jan 29 11:15:10.261025 ignition[817]: Stage: disks Jan 29 11:15:10.261138 ignition[817]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:15:10.261144 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:15:10.261748 ignition[817]: disks: disks passed Jan 29 11:15:10.261777 ignition[817]: Ignition finished successfully Jan 29 11:15:10.262687 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:15:10.262968 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:15:10.263079 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:15:10.263201 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:15:10.263296 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:15:10.263391 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:15:10.267273 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:15:10.277226 systemd-fsck[825]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 11:15:10.278180 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:15:10.282234 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:15:10.337286 kernel: EXT4-fs (sda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 29 11:15:10.337718 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:15:10.338098 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:15:10.341233 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:15:10.343005 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jan 29 11:15:10.343501 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 11:15:10.343821 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:15:10.344077 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:15:10.346114 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:15:10.347194 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:15:10.351898 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (833) Jan 29 11:15:10.351930 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:15:10.351940 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:15:10.352818 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:15:10.357180 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:15:10.357916 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:15:10.377530 initrd-setup-root[857]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:15:10.379974 initrd-setup-root[864]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:15:10.382119 initrd-setup-root[871]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:15:10.384182 initrd-setup-root[878]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:15:10.442192 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:15:10.447250 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:15:10.449685 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:15:10.454201 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:15:10.464727 ignition[945]: INFO : Ignition 2.20.0 Jan 29 11:15:10.465222 ignition[945]: INFO : Stage: mount Jan 29 11:15:10.465222 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:15:10.465222 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:15:10.465964 ignition[945]: INFO : mount: mount passed Jan 29 11:15:10.466093 ignition[945]: INFO : Ignition finished successfully Jan 29 11:15:10.466683 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:15:10.470280 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:15:10.470484 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:15:10.998200 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:15:11.003337 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:15:11.011208 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (957) Jan 29 11:15:11.013360 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:15:11.013378 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:15:11.013386 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:15:11.017183 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:15:11.017788 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:15:11.033265 ignition[974]: INFO : Ignition 2.20.0 Jan 29 11:15:11.033265 ignition[974]: INFO : Stage: files Jan 29 11:15:11.033756 ignition[974]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:15:11.033756 ignition[974]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:15:11.033985 ignition[974]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:15:11.034624 ignition[974]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:15:11.034624 ignition[974]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:15:11.036788 ignition[974]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:15:11.036930 ignition[974]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:15:11.037075 ignition[974]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:15:11.037043 unknown[974]: wrote ssh authorized keys file for user: core Jan 29 11:15:11.038430 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 11:15:11.038600 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 29 11:15:11.181470 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:15:11.298067 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 11:15:11.298067 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:15:11.298478 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:15:11.298478 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:15:11.298478 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:15:11.298478 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:15:11.298478 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:15:11.298478 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:15:11.298478 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:15:11.299511 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:15:11.299511 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:15:11.299511 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:15:11.299511 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:15:11.299511 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:15:11.299511 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 29 11:15:11.630853 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 11:15:11.886480 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:15:11.886727 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 29 11:15:11.886727 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 29 11:15:11.886727 ignition[974]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 11:15:11.887560 ignition[974]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:15:11.887560 ignition[974]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:15:11.887560 ignition[974]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 11:15:11.887560 ignition[974]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 29 11:15:11.887560 ignition[974]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:15:11.887560 ignition[974]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:15:11.887560 ignition[974]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 29 11:15:11.887560 ignition[974]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 11:15:11.925783 ignition[974]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:15:11.927894 ignition[974]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:15:11.927894 ignition[974]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 11:15:11.927894 ignition[974]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:15:11.927894 ignition[974]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:15:11.929613 ignition[974]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:15:11.929613 ignition[974]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:15:11.929613 ignition[974]: INFO : files: files passed Jan 29 11:15:11.929613 ignition[974]: INFO : Ignition finished successfully Jan 29 11:15:11.929228 systemd[1]: Finished ignition-files.service - Ignition 
(files). Jan 29 11:15:11.934255 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:15:11.935653 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:15:11.936051 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:15:11.936105 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:15:11.942333 initrd-setup-root-after-ignition[1004]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:15:11.942599 initrd-setup-root-after-ignition[1004]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:15:11.943049 initrd-setup-root-after-ignition[1008]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:15:11.943951 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:15:11.944299 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:15:11.947240 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:15:11.959439 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:15:11.959489 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:15:11.959785 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:15:11.959889 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:15:11.960011 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:15:11.961241 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:15:11.969399 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:15:11.975247 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:15:11.980394 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:15:11.980549 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:15:11.980761 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:15:11.980943 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:15:11.981030 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:15:11.981299 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:15:11.981529 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:15:11.981705 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:15:11.981880 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:15:11.982112 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:15:11.982314 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:15:11.982647 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:15:11.982850 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:15:11.983090 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:15:11.983301 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:15:11.983464 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jan 29 11:15:11.983525 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:15:11.983767 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:15:11.983913 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:15:11.984091 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:15:11.984135 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:15:11.984324 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:15:11.984379 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:15:11.984613 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:15:11.984671 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:15:11.984934 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:15:11.985074 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:15:11.988186 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:15:11.988344 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:15:11.988539 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:15:11.988717 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:15:11.988782 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:15:11.988988 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:15:11.989032 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:15:11.989271 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:15:11.989328 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:15:11.989571 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:15:11.989624 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:15:11.994256 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:15:11.994404 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:15:11.994467 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:15:11.996299 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:15:11.996468 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:15:11.996549 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:15:11.996722 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:15:11.996800 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:15:12.000236 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:15:12.000314 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:15:12.004518 ignition[1028]: INFO : Ignition 2.20.0 Jan 29 11:15:12.004518 ignition[1028]: INFO : Stage: umount Jan 29 11:15:12.005116 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:15:12.005116 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:15:12.005116 ignition[1028]: INFO : umount: umount passed Jan 29 11:15:12.005705 ignition[1028]: INFO : Ignition finished successfully Jan 29 11:15:12.005617 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 29 11:15:12.005680 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:15:12.006039 systemd[1]: Stopped target network.target - Network. Jan 29 11:15:12.006134 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:15:12.006281 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:15:12.006393 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:15:12.006415 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:15:12.006515 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:15:12.006535 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:15:12.006647 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:15:12.006668 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:15:12.006880 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:15:12.007312 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:15:12.008429 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:15:12.012027 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:15:12.012087 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:15:12.012568 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:15:12.012592 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:15:12.017231 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:15:12.017331 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:15:12.017358 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:15:12.017492 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jan 29 11:15:12.017514 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jan 29 11:15:12.017740 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:15:12.019321 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:15:12.019372 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:15:12.022093 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:15:12.022135 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:15:12.022681 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:15:12.022706 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:15:12.022943 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:15:12.022966 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:15:12.024717 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:15:12.024932 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:15:12.025349 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:15:12.025505 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:15:12.025988 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:15:12.026130 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:15:12.026392 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 29 11:15:12.026409 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:15:12.026512 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:15:12.026535 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:15:12.026697 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:15:12.026718 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:15:12.026850 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:15:12.026872 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:15:12.028244 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:15:12.028363 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:15:12.028389 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:15:12.028577 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:15:12.028599 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:15:12.033292 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:15:12.033354 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:15:12.068755 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:15:12.068811 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:15:12.069164 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:15:12.069295 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:15:12.069330 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:15:12.072259 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:15:12.076040 systemd[1]: Switching root. 
Jan 29 11:15:12.117501 systemd-journald[217]: Journal stopped Jan 29 11:15:07.738484 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025 Jan 29 11:15:07.738502 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 29 11:15:07.738508 kernel: Disabled fast string operations Jan 29 11:15:07.738512 kernel: BIOS-provided physical RAM map: Jan 29 11:15:07.738516 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jan 29 11:15:07.738520 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jan 29 11:15:07.738526 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jan 29 11:15:07.738531 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jan 29 11:15:07.738535 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jan 29 11:15:07.738539 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jan 29 11:15:07.738543 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jan 29 11:15:07.738547 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jan 29 11:15:07.738552 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jan 29 11:15:07.738556 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jan 29 11:15:07.738562 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jan 29 11:15:07.738567 kernel: NX (Execute Disable) protection: active Jan 29 11:15:07.738572 kernel: APIC: Static calls initialized Jan 29 11:15:07.738576 kernel: SMBIOS 2.7 present. Jan 29 11:15:07.738581 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jan 29 11:15:07.738586 kernel: vmware: hypercall mode: 0x00 Jan 29 11:15:07.738591 kernel: Hypervisor detected: VMware Jan 29 11:15:07.738596 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jan 29 11:15:07.738602 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jan 29 11:15:07.738607 kernel: vmware: using clock offset of 4204245664 ns Jan 29 11:15:07.738612 kernel: tsc: Detected 3408.000 MHz processor Jan 29 11:15:07.738617 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 11:15:07.738622 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 11:15:07.738627 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jan 29 11:15:07.738632 kernel: total RAM covered: 3072M Jan 29 11:15:07.738636 kernel: Found optimal setting for mtrr clean up Jan 29 11:15:07.738642 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jan 29 11:15:07.738647 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Jan 29 11:15:07.738653 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 11:15:07.738657 kernel: Using GB pages for direct mapping Jan 29 11:15:07.738662 kernel: ACPI: Early table checksum verification disabled Jan 29 11:15:07.738667 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jan 29 11:15:07.738672 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jan 29 11:15:07.738677 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jan 29 11:15:07.738682 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jan 29 11:15:07.738687 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jan 29 11:15:07.738694 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jan 29 11:15:07.738699 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jan 29 11:15:07.738704 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Jan 29 11:15:07.738710 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jan 29 11:15:07.738715 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jan 29 11:15:07.738720 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jan 29 11:15:07.738726 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jan 29 11:15:07.738731 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jan 29 11:15:07.738736 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jan 29 11:15:07.738741 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jan 29 11:15:07.738746 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jan 29 11:15:07.738751 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jan 29 11:15:07.738757 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jan 29 11:15:07.738762 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jan 29 11:15:07.738767 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jan 29 11:15:07.738773 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jan 29 11:15:07.738778 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jan 29 11:15:07.738783 kernel: system APIC only can use physical flat Jan 29 11:15:07.738788 kernel: APIC: Switched APIC routing to: physical flat Jan 29 11:15:07.738793 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 29 11:15:07.738798 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jan 29 11:15:07.738803 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jan 29 11:15:07.738808 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jan 29 11:15:07.738813 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jan 29 11:15:07.738818 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jan 29 11:15:07.738824 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jan 29 11:15:07.738829 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jan 29 11:15:07.738834 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Jan 29 11:15:07.738839 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Jan 29 11:15:07.738844 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Jan 29 11:15:07.738849 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Jan 29 11:15:07.738854 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Jan 29 11:15:07.738859 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Jan 29 11:15:07.738864 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Jan 29 11:15:07.738869 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Jan 29 11:15:07.738875 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Jan 29 11:15:07.738880 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Jan 29 11:15:07.738884 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Jan 29 11:15:07.738889 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Jan 29 11:15:07.738894 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Jan 29 11:15:07.738899 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Jan 29 11:15:07.738904 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Jan 29 11:15:07.738909 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Jan 29 11:15:07.738914 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Jan 29 11:15:07.738919 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Jan 29 11:15:07.738925 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Jan 29 11:15:07.738930 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Jan 29 11:15:07.738935 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Jan 29 11:15:07.738940 kernel: SRAT: PXM 0 -> APIC 0x3a -> 
Node 0 Jan 29 11:15:07.738945 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Jan 29 11:15:07.738950 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Jan 29 11:15:07.738955 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Jan 29 11:15:07.738960 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Jan 29 11:15:07.738965 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Jan 29 11:15:07.738970 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Jan 29 11:15:07.738976 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Jan 29 11:15:07.738981 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Jan 29 11:15:07.738986 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Jan 29 11:15:07.738991 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Jan 29 11:15:07.738996 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Jan 29 11:15:07.739001 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Jan 29 11:15:07.739006 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Jan 29 11:15:07.739011 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Jan 29 11:15:07.739016 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Jan 29 11:15:07.739021 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Jan 29 11:15:07.739027 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Jan 29 11:15:07.739032 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Jan 29 11:15:07.739037 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Jan 29 11:15:07.739041 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Jan 29 11:15:07.739046 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Jan 29 11:15:07.739051 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Jan 29 11:15:07.739056 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Jan 29 11:15:07.739061 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Jan 29 11:15:07.739066 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Jan 29 11:15:07.739071 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Jan 29 11:15:07.739077 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Jan 29 11:15:07.739082 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Jan 29 11:15:07.739087 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Jan 29 11:15:07.739095 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Jan 29 11:15:07.739101 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Jan 29 11:15:07.739106 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Jan 29 11:15:07.739112 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Jan 29 11:15:07.739117 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Jan 29 11:15:07.739122 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Jan 29 11:15:07.739128 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Jan 29 11:15:07.739134 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Jan 29 11:15:07.739139 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Jan 29 11:15:07.739144 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Jan 29 11:15:07.739149 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Jan 29 11:15:07.739155 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Jan 29 11:15:07.739160 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Jan 29 11:15:07.739165 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Jan 29 11:15:07.739177 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Jan 29 11:15:07.739182 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Jan 29 11:15:07.739189 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Jan 29 11:15:07.739194 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Jan 29 11:15:07.739199 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Jan 29 11:15:07.739204 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Jan 29 11:15:07.739210 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Jan 29 11:15:07.739215 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Jan 29 11:15:07.739220 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Jan 29 11:15:07.739226 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Jan 29 11:15:07.739231 kernel: SRAT: PXM 0 -> 
APIC 0xa6 -> Node 0 Jan 29 11:15:07.739236 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Jan 29 11:15:07.739241 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Jan 29 11:15:07.739247 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Jan 29 11:15:07.739253 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Jan 29 11:15:07.739258 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Jan 29 11:15:07.739264 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Jan 29 11:15:07.739269 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Jan 29 11:15:07.739274 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Jan 29 11:15:07.739280 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Jan 29 11:15:07.739285 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Jan 29 11:15:07.739290 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Jan 29 11:15:07.739295 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Jan 29 11:15:07.739302 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Jan 29 11:15:07.739307 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Jan 29 11:15:07.739312 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Jan 29 11:15:07.739317 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Jan 29 11:15:07.739332 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Jan 29 11:15:07.739338 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Jan 29 11:15:07.739344 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Jan 29 11:15:07.739349 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Jan 29 11:15:07.739354 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Jan 29 11:15:07.739359 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Jan 29 11:15:07.739367 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Jan 29 11:15:07.739372 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Jan 29 11:15:07.739377 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Jan 29 11:15:07.739382 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Jan 29 11:15:07.739388 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Jan 29 11:15:07.739393 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Jan 29 11:15:07.739398 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Jan 29 11:15:07.739403 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Jan 29 11:15:07.739409 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Jan 29 11:15:07.739414 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Jan 29 11:15:07.739420 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Jan 29 11:15:07.739426 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Jan 29 11:15:07.739431 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Jan 29 11:15:07.739436 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Jan 29 11:15:07.739441 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Jan 29 11:15:07.739447 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Jan 29 11:15:07.739452 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Jan 29 11:15:07.739458 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Jan 29 11:15:07.739463 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Jan 29 11:15:07.739468 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Jan 29 11:15:07.739474 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Jan 29 11:15:07.739480 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Jan 29 11:15:07.739485 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 29 11:15:07.739491 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 29 11:15:07.739496 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jan 29 11:15:07.739502 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Jan 29 11:15:07.739507 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Jan 29 11:15:07.739513 kernel: Zone ranges: Jan 29 11:15:07.739518 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 11:15:07.739524 kernel: 
DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jan 29 11:15:07.739530 kernel: Normal empty Jan 29 11:15:07.739536 kernel: Movable zone start for each node Jan 29 11:15:07.739541 kernel: Early memory node ranges Jan 29 11:15:07.739547 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jan 29 11:15:07.739552 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jan 29 11:15:07.739557 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jan 29 11:15:07.739563 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jan 29 11:15:07.739568 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:15:07.739574 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jan 29 11:15:07.739580 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jan 29 11:15:07.739586 kernel: ACPI: PM-Timer IO Port: 0x1008 Jan 29 11:15:07.739591 kernel: system APIC only can use physical flat Jan 29 11:15:07.739596 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jan 29 11:15:07.739602 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jan 29 11:15:07.739607 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jan 29 11:15:07.739613 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jan 29 11:15:07.739618 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jan 29 11:15:07.739623 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jan 29 11:15:07.739630 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jan 29 11:15:07.739635 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jan 29 11:15:07.739640 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jan 29 11:15:07.739646 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jan 29 11:15:07.739651 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jan 29 11:15:07.739656 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jan 29 11:15:07.739662 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jan 29 11:15:07.739667 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jan 29 11:15:07.739672 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jan 29 11:15:07.739678 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jan 29 11:15:07.739684 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jan 29 11:15:07.739690 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jan 29 11:15:07.739695 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jan 29 11:15:07.739700 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jan 29 11:15:07.739705 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jan 29 11:15:07.739711 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jan 29 11:15:07.739716 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jan 29 11:15:07.739721 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jan 29 11:15:07.739727 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jan 29 11:15:07.739732 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jan 29 11:15:07.739739 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jan 29 11:15:07.739744 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jan 29 11:15:07.739749 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jan 29 11:15:07.739755 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jan 29 11:15:07.739760 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jan 29 11:15:07.739765 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jan 29 11:15:07.739771 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jan 29 11:15:07.739776 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jan 29 11:15:07.739781 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jan 29 11:15:07.739787 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jan 29 11:15:07.739793 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jan 29 11:15:07.739799 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jan 29 11:15:07.739804 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jan 29 11:15:07.739809 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jan 29 11:15:07.739815 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jan 29 11:15:07.739820 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jan 29 11:15:07.739825 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jan 29 11:15:07.739831 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jan 29 11:15:07.739836 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jan 29 11:15:07.739841 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jan 29 11:15:07.739848 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jan 29 11:15:07.739853 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jan 29 11:15:07.739858 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jan 29 11:15:07.739864 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jan 29 11:15:07.739869 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jan 29 11:15:07.739875 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jan 29 11:15:07.739880 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jan 29 11:15:07.739885 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jan 29 11:15:07.739891 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jan 29 11:15:07.739897 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jan 29 11:15:07.739902 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jan 29 11:15:07.739908 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jan 29 11:15:07.739913 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jan 29 11:15:07.739918 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jan 29 11:15:07.739924 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jan 29 11:15:07.739929 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jan 29 11:15:07.739935 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jan 29 11:15:07.739940 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jan 29 11:15:07.739945 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jan 29 11:15:07.739951 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jan 29 11:15:07.739957 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jan 29 11:15:07.739962 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jan 29 11:15:07.739967 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jan 29 11:15:07.739973 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jan 29 11:15:07.739978 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jan 29 11:15:07.739984 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jan 29 11:15:07.739989 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jan 29 11:15:07.739994 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jan 29 11:15:07.740000 
kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jan 29 11:15:07.740013 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jan 29 11:15:07.740019 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jan 29 11:15:07.740024 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jan 29 11:15:07.740030 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jan 29 11:15:07.740035 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jan 29 11:15:07.740040 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jan 29 11:15:07.740046 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jan 29 11:15:07.740051 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jan 29 11:15:07.740056 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jan 29 11:15:07.740062 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jan 29 11:15:07.740068 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jan 29 11:15:07.740073 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jan 29 11:15:07.740079 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jan 29 11:15:07.740084 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jan 29 11:15:07.740090 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jan 29 11:15:07.740095 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jan 29 11:15:07.740100 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jan 29 11:15:07.740106 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jan 29 11:15:07.740111 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jan 29 11:15:07.740118 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jan 29 11:15:07.740124 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jan 29 11:15:07.740129 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jan 29 11:15:07.740135 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jan 29 11:15:07.740140 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jan 29 11:15:07.740145 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jan 29 11:15:07.740151 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jan 29 11:15:07.740156 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jan 29 11:15:07.740161 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jan 29 11:15:07.740208 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jan 29 11:15:07.740217 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jan 29 11:15:07.740223 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jan 29 11:15:07.740228 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jan 29 11:15:07.740234 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jan 29 11:15:07.740239 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jan 29 11:15:07.740244 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jan 29 11:15:07.740249 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jan 29 11:15:07.740255 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jan 29 11:15:07.740260 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jan 29 11:15:07.740265 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jan 29 11:15:07.740272 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jan 29 11:15:07.740277 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jan 29 11:15:07.740282 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jan 29 
11:15:07.740288 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jan 29 11:15:07.740293 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jan 29 11:15:07.740298 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jan 29 11:15:07.740304 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jan 29 11:15:07.740309 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jan 29 11:15:07.740315 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jan 29 11:15:07.740320 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jan 29 11:15:07.740326 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jan 29 11:15:07.740332 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jan 29 11:15:07.740337 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jan 29 11:15:07.740342 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jan 29 11:15:07.740348 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jan 29 11:15:07.740354 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jan 29 11:15:07.740359 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 11:15:07.740364 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jan 29 11:15:07.740370 kernel: TSC deadline timer available Jan 29 11:15:07.740376 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jan 29 11:15:07.740382 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jan 29 11:15:07.740387 kernel: Booting paravirtualized kernel on VMware hypervisor Jan 29 11:15:07.740393 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 11:15:07.740398 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jan 29 11:15:07.740404 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 29 11:15:07.740409 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 29 11:15:07.740415 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jan 29 11:15:07.740420 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jan 29 11:15:07.740427 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jan 29 11:15:07.740432 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jan 29 11:15:07.740437 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jan 29 11:15:07.740450 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jan 29 11:15:07.740456 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jan 29 11:15:07.740462 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jan 29 11:15:07.740468 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jan 29 11:15:07.740473 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jan 29 11:15:07.740480 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jan 29 11:15:07.740486 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jan 29 11:15:07.740491 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jan 29 11:15:07.740497 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jan 29 11:15:07.740503 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jan 29 11:15:07.740508 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jan 29 11:15:07.740515 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 
flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 29 11:15:07.740521 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 11:15:07.740528 kernel: random: crng init done Jan 29 11:15:07.740533 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jan 29 11:15:07.740539 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jan 29 11:15:07.740545 kernel: printk: log_buf_len min size: 262144 bytes Jan 29 11:15:07.740550 kernel: printk: log_buf_len: 1048576 bytes Jan 29 11:15:07.740556 kernel: printk: early log buf free: 239648(91%) Jan 29 11:15:07.740562 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:15:07.740568 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 29 11:15:07.740573 kernel: Fallback order for Node 0: 0 Jan 29 11:15:07.740579 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Jan 29 11:15:07.740586 kernel: Policy zone: DMA32 Jan 29 11:15:07.740592 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:15:07.740598 kernel: Memory: 1934292K/2096628K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 162076K reserved, 0K cma-reserved) Jan 29 11:15:07.740605 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jan 29 11:15:07.740611 kernel: ftrace: allocating 37893 entries in 149 pages Jan 29 11:15:07.740617 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 11:15:07.740623 kernel: Dynamic Preempt: voluntary Jan 29 11:15:07.740629 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:15:07.740635 kernel: rcu: RCU event tracing is enabled. Jan 29 11:15:07.740641 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jan 29 11:15:07.740647 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:15:07.740653 kernel: Rude variant of Tasks RCU enabled. Jan 29 11:15:07.740659 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:15:07.740664 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 11:15:07.740670 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jan 29 11:15:07.740677 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jan 29 11:15:07.740683 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Jan 29 11:15:07.740689 kernel: Console: colour VGA+ 80x25 Jan 29 11:15:07.740694 kernel: printk: console [tty0] enabled Jan 29 11:15:07.740700 kernel: printk: console [ttyS0] enabled Jan 29 11:15:07.740706 kernel: ACPI: Core revision 20230628 Jan 29 11:15:07.740712 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jan 29 11:15:07.740717 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 11:15:07.740723 kernel: x2apic enabled Jan 29 11:15:07.740730 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 11:15:07.740736 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 11:15:07.740743 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 29 11:15:07.740749 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Jan 29 11:15:07.740754 kernel: Disabled fast string operations Jan 29 11:15:07.740760 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 29 11:15:07.740766 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 29 11:15:07.740772 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 11:15:07.740778 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 29 11:15:07.740785 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 29 11:15:07.740790 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 29 11:15:07.740796 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 11:15:07.740802 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 29 11:15:07.740808 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 29 11:15:07.740814 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 11:15:07.740820 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 11:15:07.740826 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 11:15:07.740832 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 29 11:15:07.740838 kernel: GDS: Unknown: Dependent on hypervisor status Jan 29 11:15:07.740844 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 11:15:07.740850 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 11:15:07.740856 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 11:15:07.740862 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 11:15:07.740867 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 29 11:15:07.740873 kernel: Freeing SMP alternatives memory: 32K Jan 29 11:15:07.740879 kernel: pid_max: default: 131072 minimum: 1024 Jan 29 11:15:07.740885 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:15:07.740892 kernel: landlock: Up and running. Jan 29 11:15:07.740897 kernel: SELinux: Initializing. Jan 29 11:15:07.740903 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 11:15:07.740909 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 11:15:07.740915 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 29 11:15:07.740921 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 29 11:15:07.740926 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 29 11:15:07.740932 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 29 11:15:07.740939 kernel: Performance Events: Skylake events, core PMU driver. 
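(Editorial aside, not part of the captured log: the Spectre V1/V2, RETBleed, MMIO Stale Data, SRBDS and GDS lines above are the kernel's own mitigation summary at boot. The same information is exported at runtime under /sys/devices/system/cpu/vulnerabilities/, so the guest can cross-check what this boot reported. A minimal sketch in Python; the sysfs directory is standard on modern kernels, but the exact set of files varies with kernel version and CPU.)

    #!/usr/bin/env python3
    # Print the kernel's per-vulnerability mitigation status; the output should
    # correspond to the "Spectre V1/V2", "RETBleed", "MMIO Stale Data", "SRBDS"
    # and "GDS" lines seen in the boot log above.
    from pathlib import Path

    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    def mitigation_report() -> dict:
        # One file per known CPU vulnerability, e.g. spectre_v2, retbleed.
        return {f.name: f.read_text().strip() for f in sorted(VULN_DIR.iterdir())}

    if __name__ == "__main__":
        for name, status in mitigation_report().items():
            print(f"{name}: {status}")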
Jan 29 11:15:07.740945 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jan 29 11:15:07.740951 kernel: core: CPUID marked event: 'instructions' unavailable Jan 29 11:15:07.740957 kernel: core: CPUID marked event: 'bus cycles' unavailable Jan 29 11:15:07.740962 kernel: core: CPUID marked event: 'cache references' unavailable Jan 29 11:15:07.740968 kernel: core: CPUID marked event: 'cache misses' unavailable Jan 29 11:15:07.740973 kernel: core: CPUID marked event: 'branch instructions' unavailable Jan 29 11:15:07.740979 kernel: core: CPUID marked event: 'branch misses' unavailable Jan 29 11:15:07.740985 kernel: ... version: 1 Jan 29 11:15:07.740992 kernel: ... bit width: 48 Jan 29 11:15:07.740997 kernel: ... generic registers: 4 Jan 29 11:15:07.741008 kernel: ... value mask: 0000ffffffffffff Jan 29 11:15:07.741013 kernel: ... max period: 000000007fffffff Jan 29 11:15:07.741019 kernel: ... fixed-purpose events: 0 Jan 29 11:15:07.741025 kernel: ... event mask: 000000000000000f Jan 29 11:15:07.741031 kernel: signal: max sigframe size: 1776 Jan 29 11:15:07.741037 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:15:07.741043 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:15:07.741050 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 11:15:07.741056 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:15:07.741061 kernel: smpboot: x86: Booting SMP configuration: Jan 29 11:15:07.741067 kernel: .... node #0, CPUs: #1 Jan 29 11:15:07.741073 kernel: Disabled fast string operations Jan 29 11:15:07.741079 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jan 29 11:15:07.741085 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 29 11:15:07.741090 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 11:15:07.741096 kernel: smpboot: Max logical packages: 128 Jan 29 11:15:07.741102 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jan 29 11:15:07.741109 kernel: devtmpfs: initialized Jan 29 11:15:07.741115 kernel: x86/mm: Memory block size: 128MB Jan 29 11:15:07.741120 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jan 29 11:15:07.741126 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:15:07.741132 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jan 29 11:15:07.741138 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:15:07.741145 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:15:07.741151 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:15:07.741156 kernel: audit: type=2000 audit(1738149306.068:1): state=initialized audit_enabled=0 res=1 Jan 29 11:15:07.741163 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:15:07.741174 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 11:15:07.741186 kernel: cpuidle: using governor menu Jan 29 11:15:07.741192 kernel: Simple Boot Flag at 0x36 set to 0x80 Jan 29 11:15:07.741198 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:15:07.741204 kernel: dca service started, version 1.12.1 Jan 29 11:15:07.741210 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jan 29 11:15:07.741215 kernel: PCI: Using configuration type 1 for base access Jan 29 11:15:07.741221 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
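(Editorial aside, not part of the captured log: the BogoMIPS figures above are internally consistent. Because calibration was skipped, lpj is preset from the 3408.000 MHz TSC, and the kernel derives BogoMIPS as lpj * HZ / 500000; assuming the usual CONFIG_HZ=1000 for this kernel, which the excerpt itself does not state, that gives 6816.00 per CPU and 2 * 6816 = 13632.00 for the two activated processors, matching the log. A quick arithmetic check in Python:)

    # Sanity-check the BogoMIPS values printed during boot.
    # Assumption: CONFIG_HZ = 1000 (not stated in the log excerpt itself).
    HZ = 1000
    lpj = 3_408_000                      # "lpj=3408000" from the boot log
    bogomips = lpj * HZ / 500_000
    assert bogomips == 6816.0            # "6816.00 BogoMIPS (lpj=3408000)"
    assert 2 * bogomips == 13632.0       # "Total of 2 processors activated (13632.00 BogoMIPS)"
    print(bogomips, 2 * bogomips)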
Jan 29 11:15:07.741229 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 11:15:07.741235 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 11:15:07.741240 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:15:07.741246 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:15:07.741252 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:15:07.741258 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:15:07.741264 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:15:07.741269 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:15:07.741275 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 11:15:07.741282 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jan 29 11:15:07.741288 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 11:15:07.741294 kernel: ACPI: Interpreter enabled Jan 29 11:15:07.741300 kernel: ACPI: PM: (supports S0 S1 S5) Jan 29 11:15:07.741305 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 11:15:07.741311 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 11:15:07.741317 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 11:15:07.741323 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jan 29 11:15:07.741329 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jan 29 11:15:07.741408 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:15:07.741466 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jan 29 11:15:07.741516 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jan 29 11:15:07.741524 kernel: PCI host bridge to bus 0000:00 Jan 29 11:15:07.741574 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 11:15:07.741619 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jan 29 11:15:07.741667 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 11:15:07.741711 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 11:15:07.741755 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jan 29 11:15:07.741798 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jan 29 11:15:07.741856 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jan 29 11:15:07.741913 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jan 29 11:15:07.741971 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jan 29 11:15:07.742028 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jan 29 11:15:07.742079 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jan 29 11:15:07.742130 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 29 11:15:07.742194 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 29 11:15:07.742247 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 29 11:15:07.742297 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 29 11:15:07.742354 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jan 29 11:15:07.742407 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jan 29 11:15:07.742458 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jan 29 11:15:07.742513 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jan 29 11:15:07.742564 
kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jan 29 11:15:07.742614 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jan 29 11:15:07.742671 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jan 29 11:15:07.742721 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jan 29 11:15:07.742771 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jan 29 11:15:07.742821 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jan 29 11:15:07.742873 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jan 29 11:15:07.742924 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 11:15:07.742977 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jan 29 11:15:07.743039 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.743090 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.743145 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.743220 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.743278 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.743329 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.743389 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.743441 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.743498 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.743550 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.743605 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.743656 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.743712 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.743765 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.743820 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.743871 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.743927 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.743979 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.744041 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.744093 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.744149 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.744224 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.744280 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.744331 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.744391 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.744442 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.744496 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.744547 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.744602 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.744652 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.744709 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.744761 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.744816 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 
0x060400 Jan 29 11:15:07.744868 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.744923 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.744975 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.745037 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.745094 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.745149 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.745222 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.745282 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.745334 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.745389 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.745445 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.745499 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.745550 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.745605 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.745657 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.745713 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.745768 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.745823 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.745875 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.745931 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.745983 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.746039 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.746093 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.746148 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.746231 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.746290 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.746341 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.746405 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.746460 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.746514 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jan 29 11:15:07.746565 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.746618 kernel: pci_bus 0000:01: extended config space not accessible Jan 29 11:15:07.746670 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 29 11:15:07.746721 kernel: pci_bus 0000:02: extended config space not accessible Jan 29 11:15:07.746730 kernel: acpiphp: Slot [32] registered Jan 29 11:15:07.746738 kernel: acpiphp: Slot [33] registered Jan 29 11:15:07.746744 kernel: acpiphp: Slot [34] registered Jan 29 11:15:07.746750 kernel: acpiphp: Slot [35] registered Jan 29 11:15:07.746756 kernel: acpiphp: Slot [36] registered Jan 29 11:15:07.746762 kernel: acpiphp: Slot [37] registered Jan 29 11:15:07.746768 kernel: acpiphp: Slot [38] registered Jan 29 11:15:07.746773 kernel: acpiphp: Slot [39] registered Jan 29 11:15:07.746779 kernel: acpiphp: Slot [40] registered Jan 29 11:15:07.746785 kernel: acpiphp: Slot [41] registered Jan 29 11:15:07.746791 kernel: acpiphp: Slot [42] registered Jan 29 
11:15:07.746797 kernel: acpiphp: Slot [43] registered Jan 29 11:15:07.746803 kernel: acpiphp: Slot [44] registered Jan 29 11:15:07.746809 kernel: acpiphp: Slot [45] registered Jan 29 11:15:07.746814 kernel: acpiphp: Slot [46] registered Jan 29 11:15:07.746820 kernel: acpiphp: Slot [47] registered Jan 29 11:15:07.746826 kernel: acpiphp: Slot [48] registered Jan 29 11:15:07.746832 kernel: acpiphp: Slot [49] registered Jan 29 11:15:07.746837 kernel: acpiphp: Slot [50] registered Jan 29 11:15:07.746843 kernel: acpiphp: Slot [51] registered Jan 29 11:15:07.746850 kernel: acpiphp: Slot [52] registered Jan 29 11:15:07.746856 kernel: acpiphp: Slot [53] registered Jan 29 11:15:07.746861 kernel: acpiphp: Slot [54] registered Jan 29 11:15:07.746867 kernel: acpiphp: Slot [55] registered Jan 29 11:15:07.746872 kernel: acpiphp: Slot [56] registered Jan 29 11:15:07.746878 kernel: acpiphp: Slot [57] registered Jan 29 11:15:07.746884 kernel: acpiphp: Slot [58] registered Jan 29 11:15:07.746890 kernel: acpiphp: Slot [59] registered Jan 29 11:15:07.746895 kernel: acpiphp: Slot [60] registered Jan 29 11:15:07.746902 kernel: acpiphp: Slot [61] registered Jan 29 11:15:07.746908 kernel: acpiphp: Slot [62] registered Jan 29 11:15:07.746914 kernel: acpiphp: Slot [63] registered Jan 29 11:15:07.746963 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jan 29 11:15:07.747022 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 29 11:15:07.747075 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 29 11:15:07.747125 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 29 11:15:07.747562 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jan 29 11:15:07.747626 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jan 29 11:15:07.747678 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jan 29 11:15:07.747729 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jan 29 11:15:07.747780 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jan 29 11:15:07.747837 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jan 29 11:15:07.747890 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jan 29 11:15:07.747941 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jan 29 11:15:07.747995 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 29 11:15:07.748046 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 29 11:15:07.748097 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jan 29 11:15:07.748149 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 29 11:15:07.748219 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 29 11:15:07.748271 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 29 11:15:07.748321 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 29 11:15:07.748370 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 29 11:15:07.748424 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 29 11:15:07.748474 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 29 11:15:07.748526 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 29 11:15:07.748576 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 29 11:15:07.748627 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 29 11:15:07.748748 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 29 11:15:07.748804 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 29 11:15:07.748859 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 29 11:15:07.748910 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 29 11:15:07.748960 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 29 11:15:07.749011 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 29 11:15:07.749061 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 29 11:15:07.749115 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 29 11:15:07.749165 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 29 11:15:07.750391 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 29 11:15:07.750449 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 29 11:15:07.750502 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jan 29 11:15:07.750554 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 29 11:15:07.750606 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 29 11:15:07.750657 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 29 11:15:07.750711 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 29 11:15:07.750768 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jan 29 11:15:07.750822 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jan 29 11:15:07.750874 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jan 29 11:15:07.750926 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jan 29 11:15:07.750977 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jan 29 11:15:07.751051 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 29 11:15:07.751108 kernel: pci 0000:0b:00.0: supports D1 D2 Jan 29 11:15:07.751160 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 11:15:07.751231 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jan 29 11:15:07.751339 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 29 11:15:07.751394 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 29 11:15:07.751445 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jan 29 11:15:07.751495 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 29 11:15:07.751546 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 29 11:15:07.751600 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 29 11:15:07.751650 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 29 11:15:07.751700 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 29 11:15:07.751750 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 29 11:15:07.751800 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 29 11:15:07.751850 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 29 11:15:07.751901 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 29 11:15:07.751950 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jan 29 11:15:07.752059 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 29 11:15:07.753087 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 29 11:15:07.753151 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 29 11:15:07.753252 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 29 11:15:07.753309 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 29 11:15:07.753363 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 29 11:15:07.753416 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 29 11:15:07.753471 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 29 11:15:07.753528 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 29 11:15:07.753581 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 29 11:15:07.753635 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 29 11:15:07.753689 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jan 29 11:15:07.753741 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 29 11:15:07.753794 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 29 11:15:07.753847 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 29 11:15:07.753899 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 29 11:15:07.753955 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 29 11:15:07.754013 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 29 11:15:07.754066 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 29 11:15:07.754119 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 29 11:15:07.754245 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 29 11:15:07.754332 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 29 11:15:07.754386 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 29 11:15:07.754439 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 29 11:15:07.754488 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 29 11:15:07.754539 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 29 11:15:07.754589 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 29 11:15:07.754640 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 29 11:15:07.754691 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 29 11:15:07.754740 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 29 11:15:07.754790 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 29 11:15:07.754843 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 29 11:15:07.754893 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 29 11:15:07.754943 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 29 11:15:07.755007 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 29 11:15:07.755086 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 29 11:15:07.755140 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 29 11:15:07.755205 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 29 11:15:07.755258 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 29 11:15:07.755313 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 29 11:15:07.755365 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 29 11:15:07.755415 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 29 11:15:07.755466 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 29 11:15:07.755517 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 29 11:15:07.755568 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 29 11:15:07.755619 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 29 11:15:07.755669 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 29 11:15:07.755723 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 29 11:15:07.755774 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 29 11:15:07.755826 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 29 11:15:07.755877 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 29 11:15:07.755928 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 29 11:15:07.755979 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 29 11:15:07.756029 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 29 11:15:07.756083 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 29 11:15:07.756134 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 29 11:15:07.756279 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jan 29 11:15:07.756334 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 29 11:15:07.756386 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 29 11:15:07.756437 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 29 11:15:07.756488 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 29 11:15:07.756538 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 29 11:15:07.756619 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 29 11:15:07.756678 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 29 11:15:07.756729 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 29 11:15:07.756779 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 29 11:15:07.756788 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jan 29 11:15:07.756794 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 Jan 29 11:15:07.756800 kernel: ACPI: PCI: Interrupt link LNKB disabled Jan 29 11:15:07.756807 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 11:15:07.756812 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jan 29 11:15:07.756821 kernel: iommu: Default domain type: Translated Jan 29 11:15:07.756826 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 11:15:07.756832 kernel: PCI: Using ACPI for IRQ routing Jan 29 11:15:07.756838 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 11:15:07.756844 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jan 29 11:15:07.756850 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jan 29 11:15:07.756898 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jan 29 11:15:07.756948 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jan 29 11:15:07.756999 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 11:15:07.757015 kernel: vgaarb: loaded Jan 29 11:15:07.757022 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jan 29 11:15:07.757028 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jan 29 11:15:07.757033 kernel: clocksource: Switched to clocksource tsc-early Jan 29 11:15:07.757039 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 11:15:07.757045 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:15:07.757051 kernel: pnp: PnP ACPI init Jan 29 11:15:07.757112 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jan 29 11:15:07.757164 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jan 29 11:15:07.757228 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jan 29 11:15:07.757279 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jan 29 11:15:07.757328 kernel: pnp 00:06: [dma 2] Jan 29 11:15:07.757378 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jan 29 11:15:07.757425 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jan 29 11:15:07.757471 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jan 29 11:15:07.757481 kernel: pnp: PnP ACPI: found 8 devices Jan 29 11:15:07.757488 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 11:15:07.757494 kernel: NET: Registered PF_INET protocol family Jan 29 11:15:07.757500 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 11:15:07.757506 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 29 11:15:07.757512 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:15:07.757518 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 11:15:07.757524 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 11:15:07.757531 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 29 11:15:07.757537 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 11:15:07.757543 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 11:15:07.757549 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:15:07.757555 kernel: NET: Registered PF_XDP protocol family Jan 29 11:15:07.757606 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jan 29 11:15:07.757658 kernel: pci 
0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 29 11:15:07.757710 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 29 11:15:07.757765 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 29 11:15:07.757815 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 29 11:15:07.757921 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 29 11:15:07.757976 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jan 29 11:15:07.758028 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 29 11:15:07.758080 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 29 11:15:07.758136 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 29 11:15:07.758201 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 29 11:15:07.758255 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 29 11:15:07.758307 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 29 11:15:07.758358 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 29 11:15:07.758414 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 29 11:15:07.758466 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 29 11:15:07.758517 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 29 11:15:07.758569 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 29 11:15:07.758620 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 29 11:15:07.758671 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jan 29 11:15:07.758726 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jan 29 11:15:07.758777 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jan 29 11:15:07.758829 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jan 29 11:15:07.758879 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jan 29 11:15:07.758979 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jan 29 11:15:07.759036 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.759087 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.759142 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.759209 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.759261 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.759312 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.759362 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.759414 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.759465 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.759516 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.759628 kernel: pci 0000:00:16.3: BAR 13: no space for [io 
size 0x1000] Jan 29 11:15:07.759685 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.759737 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.759788 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.759838 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.759889 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.759939 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.759990 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.760044 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.760113 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.760165 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.760224 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.760275 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.760327 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.760378 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.760429 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.760484 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.760535 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.760585 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.760636 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.760687 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.760744 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.760796 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.760846 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.760900 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.760951 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.761026 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.761087 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.761138 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.761203 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.761257 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.761351 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.761408 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.761459 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.761510 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.761560 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.761612 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.761662 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.761713 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.761763 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.761814 kernel: pci 0000:00:18.3: BAR 13: no space 
for [io size 0x1000] Jan 29 11:15:07.761865 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.761919 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.761969 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.762020 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.762071 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.762122 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.762229 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.762284 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.762334 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.762438 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.762495 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.762546 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.762596 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.762647 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.762697 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.762748 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.762798 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.762849 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.762899 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.762949 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.763002 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.763052 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.763103 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.763153 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.763253 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.763304 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.763355 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.763404 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.763454 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.763508 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.763558 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.763608 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 29 11:15:07.763658 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:15:07.763709 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 29 11:15:07.763760 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jan 29 11:15:07.763810 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 29 11:15:07.763859 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 29 11:15:07.763909 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 29 11:15:07.763965 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jan 29 11:15:07.764017 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 
29 11:15:07.764068 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 29 11:15:07.764119 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 29 11:15:07.764183 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jan 29 11:15:07.764237 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 29 11:15:07.764288 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 29 11:15:07.764338 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 29 11:15:07.764389 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 29 11:15:07.764444 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 29 11:15:07.764494 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 29 11:15:07.764544 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 29 11:15:07.764594 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 29 11:15:07.764645 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 29 11:15:07.764695 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 29 11:15:07.764745 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 29 11:15:07.764796 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 29 11:15:07.764847 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 29 11:15:07.764900 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 29 11:15:07.764953 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 29 11:15:07.765007 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 29 11:15:07.765059 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 29 11:15:07.765109 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 29 11:15:07.765158 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jan 29 11:15:07.765281 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 29 11:15:07.765333 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 29 11:15:07.765382 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 29 11:15:07.765433 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 29 11:15:07.765484 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jan 29 11:15:07.765534 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 29 11:15:07.765583 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 29 11:15:07.765635 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jan 29 11:15:07.765685 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jan 29 11:15:07.765739 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 29 11:15:07.765790 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 29 11:15:07.765840 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 29 11:15:07.765891 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 29 11:15:07.765941 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 29 11:15:07.765991 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 29 11:15:07.766042 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 29 11:15:07.766093 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 29 11:15:07.766143 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 29 11:15:07.766209 kernel: pci 0000:00:16.3: 
bridge window [mem 0xfc800000-0xfc8fffff] Jan 29 11:15:07.766262 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 29 11:15:07.766313 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 29 11:15:07.766363 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 29 11:15:07.766414 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 29 11:15:07.766465 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 29 11:15:07.766516 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 29 11:15:07.766566 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 29 11:15:07.766616 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 29 11:15:07.766666 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 29 11:15:07.766721 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 29 11:15:07.766772 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 29 11:15:07.766822 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jan 29 11:15:07.766872 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 29 11:15:07.766922 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 29 11:15:07.766972 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 29 11:15:07.767022 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 29 11:15:07.767071 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 29 11:15:07.767122 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 29 11:15:07.767192 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 29 11:15:07.767247 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 29 11:15:07.767297 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 29 11:15:07.767348 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 29 11:15:07.767399 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 29 11:15:07.767449 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 29 11:15:07.767499 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 29 11:15:07.767550 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 29 11:15:07.767600 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 29 11:15:07.767651 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 29 11:15:07.767705 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 29 11:15:07.767755 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 29 11:15:07.767805 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 29 11:15:07.767855 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 29 11:15:07.767906 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 29 11:15:07.767956 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 29 11:15:07.768010 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 29 11:15:07.768062 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 29 11:15:07.768113 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 29 11:15:07.768201 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 29 11:15:07.768260 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 29 11:15:07.768310 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Jan 29 11:15:07.768359 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 29 11:15:07.768409 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 29 11:15:07.768459 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 29 11:15:07.768509 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 29 11:15:07.768559 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 29 11:15:07.768610 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 29 11:15:07.768660 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 29 11:15:07.768713 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 29 11:15:07.768763 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 29 11:15:07.768813 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 29 11:15:07.768863 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 29 11:15:07.768913 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 29 11:15:07.768963 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 29 11:15:07.769013 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 29 11:15:07.769063 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 29 11:15:07.769112 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 29 11:15:07.769188 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jan 29 11:15:07.769243 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 29 11:15:07.769294 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 29 11:15:07.769344 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 29 11:15:07.769395 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 29 11:15:07.769446 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 29 11:15:07.769496 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 29 11:15:07.769547 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 29 11:15:07.769597 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 29 11:15:07.769648 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 29 11:15:07.769700 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jan 29 11:15:07.769756 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 29 11:15:07.769801 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 29 11:15:07.769846 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jan 29 11:15:07.769891 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jan 29 11:15:07.769940 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jan 29 11:15:07.770428 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jan 29 11:15:07.770487 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 29 11:15:07.770536 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jan 29 11:15:07.770582 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 29 11:15:07.770628 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 29 11:15:07.770674 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jan 29 11:15:07.770728 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jan 29 11:15:07.770781 kernel: pci_bus 0000:03: resource 0 [io 
0x4000-0x4fff] Jan 29 11:15:07.770828 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jan 29 11:15:07.770878 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jan 29 11:15:07.770928 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jan 29 11:15:07.770975 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jan 29 11:15:07.771045 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jan 29 11:15:07.771098 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jan 29 11:15:07.771145 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jan 29 11:15:07.771217 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jan 29 11:15:07.771280 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jan 29 11:15:07.771334 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jan 29 11:15:07.771383 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jan 29 11:15:07.771431 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 29 11:15:07.771481 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jan 29 11:15:07.771527 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jan 29 11:15:07.771580 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jan 29 11:15:07.771627 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jan 29 11:15:07.771681 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jan 29 11:15:07.771738 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jan 29 11:15:07.771791 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jan 29 11:15:07.771877 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jan 29 11:15:07.771942 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jan 29 11:15:07.771993 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jan 29 11:15:07.772089 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jan 29 11:15:07.772136 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jan 29 11:15:07.772228 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jan 29 11:15:07.772278 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jan 29 11:15:07.772332 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jan 29 11:15:07.772385 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jan 29 11:15:07.772433 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 29 11:15:07.772484 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jan 29 11:15:07.772531 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 29 11:15:07.772581 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jan 29 11:15:07.772631 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jan 29 11:15:07.772681 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jan 29 11:15:07.772732 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jan 29 11:15:07.772782 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jan 29 11:15:07.772829 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 29 11:15:07.772879 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jan 29 11:15:07.772927 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jan 29 11:15:07.772976 
kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 29 11:15:07.773027 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jan 29 11:15:07.773133 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jan 29 11:15:07.773198 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jan 29 11:15:07.773252 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jan 29 11:15:07.773300 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jan 29 11:15:07.773351 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jan 29 11:15:07.773403 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jan 29 11:15:07.773451 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 29 11:15:07.773504 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jan 29 11:15:07.773552 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 29 11:15:07.773603 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jan 29 11:15:07.773651 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jan 29 11:15:07.773705 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jan 29 11:15:07.773752 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jan 29 11:15:07.773805 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jan 29 11:15:07.773853 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 29 11:15:07.773907 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jan 29 11:15:07.773958 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jan 29 11:15:07.774005 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jan 29 11:15:07.774056 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jan 29 11:15:07.774103 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jan 29 11:15:07.774150 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jan 29 11:15:07.774214 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jan 29 11:15:07.774263 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jan 29 11:15:07.774317 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jan 29 11:15:07.774365 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 29 11:15:07.774417 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jan 29 11:15:07.774573 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jan 29 11:15:07.774800 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jan 29 11:15:07.774852 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jan 29 11:15:07.774917 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jan 29 11:15:07.774965 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jan 29 11:15:07.775016 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jan 29 11:15:07.775064 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 29 11:15:07.775120 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 11:15:07.775130 kernel: PCI: CLS 32 bytes, default 64 Jan 29 11:15:07.775137 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 11:15:07.775146 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 29 
11:15:07.775152 kernel: clocksource: Switched to clocksource tsc Jan 29 11:15:07.775158 kernel: Initialise system trusted keyrings Jan 29 11:15:07.775164 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 11:15:07.775176 kernel: Key type asymmetric registered Jan 29 11:15:07.775183 kernel: Asymmetric key parser 'x509' registered Jan 29 11:15:07.775189 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 11:15:07.775195 kernel: io scheduler mq-deadline registered Jan 29 11:15:07.775201 kernel: io scheduler kyber registered Jan 29 11:15:07.775209 kernel: io scheduler bfq registered Jan 29 11:15:07.775263 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jan 29 11:15:07.775318 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.775371 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jan 29 11:15:07.775423 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.775475 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jan 29 11:15:07.775528 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.775580 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jan 29 11:15:07.775635 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.775690 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jan 29 11:15:07.775743 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.775794 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jan 29 11:15:07.775846 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.775901 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jan 29 11:15:07.775953 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776008 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jan 29 11:15:07.776062 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776114 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jan 29 11:15:07.776171 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776234 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jan 29 11:15:07.776286 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776337 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jan 29 11:15:07.776388 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776451 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jan 29 11:15:07.776509 kernel: pcieport 
0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776565 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jan 29 11:15:07.776617 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776669 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jan 29 11:15:07.776721 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776773 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jan 29 11:15:07.776828 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776879 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jan 29 11:15:07.776931 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.776982 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jan 29 11:15:07.777033 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.777084 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jan 29 11:15:07.777136 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.777204 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jan 29 11:15:07.777256 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.777307 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jan 29 11:15:07.777358 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.777410 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jan 29 11:15:07.777461 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.777516 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jan 29 11:15:07.777591 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.777649 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jan 29 11:15:07.777701 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.777752 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jan 29 11:15:07.777808 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.777859 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jan 29 11:15:07.777911 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.777962 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jan 29 11:15:07.778013 kernel: pcieport 
0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.778064 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jan 29 11:15:07.778118 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.780200 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jan 29 11:15:07.780264 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.780320 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jan 29 11:15:07.780373 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.780426 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jan 29 11:15:07.780482 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.780534 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jan 29 11:15:07.780586 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.780637 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jan 29 11:15:07.780689 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:15:07.780700 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 11:15:07.780707 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:15:07.780714 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 11:15:07.780720 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jan 29 11:15:07.780727 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 11:15:07.780734 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 11:15:07.780785 kernel: rtc_cmos 00:01: registered as rtc0 Jan 29 11:15:07.780833 kernel: rtc_cmos 00:01: setting system clock to 2025-01-29T11:15:07 UTC (1738149307) Jan 29 11:15:07.780844 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 11:15:07.780889 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jan 29 11:15:07.780898 kernel: intel_pstate: CPU model not supported Jan 29 11:15:07.780905 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:15:07.780911 kernel: Segment Routing with IPv6 Jan 29 11:15:07.780917 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:15:07.780924 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:15:07.780930 kernel: Key type dns_resolver registered Jan 29 11:15:07.780936 kernel: IPI shorthand broadcast: enabled Jan 29 11:15:07.780944 kernel: sched_clock: Marking stable (932210695, 239295427)->(1237141746, -65635624) Jan 29 11:15:07.780951 kernel: registered taskstats version 1 Jan 29 11:15:07.780957 kernel: Loading compiled-in X.509 certificates Jan 29 11:15:07.780963 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4' Jan 29 11:15:07.780969 kernel: Key type .fscrypt registered Jan 29 11:15:07.780975 kernel: Key type fscrypt-provisioning registered Jan 29 11:15:07.780982 
kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 11:15:07.780988 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:15:07.780996 kernel: ima: No architecture policies found Jan 29 11:15:07.781021 kernel: clk: Disabling unused clocks Jan 29 11:15:07.781029 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 29 11:15:07.781035 kernel: Write protecting the kernel read-only data: 38912k Jan 29 11:15:07.781042 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 29 11:15:07.781048 kernel: Run /init as init process Jan 29 11:15:07.781054 kernel: with arguments: Jan 29 11:15:07.781060 kernel: /init Jan 29 11:15:07.781066 kernel: with environment: Jan 29 11:15:07.781072 kernel: HOME=/ Jan 29 11:15:07.781080 kernel: TERM=linux Jan 29 11:15:07.781086 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:15:07.781094 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:15:07.781102 systemd[1]: Detected virtualization vmware. Jan 29 11:15:07.781109 systemd[1]: Detected architecture x86-64. Jan 29 11:15:07.781115 systemd[1]: Running in initrd. Jan 29 11:15:07.781122 systemd[1]: No hostname configured, using default hostname. Jan 29 11:15:07.781129 systemd[1]: Hostname set to . Jan 29 11:15:07.781136 systemd[1]: Initializing machine ID from random generator. Jan 29 11:15:07.781142 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:15:07.781149 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:15:07.781155 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:15:07.781162 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:15:07.781178 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:15:07.781185 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:15:07.781193 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:15:07.781201 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:15:07.781208 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:15:07.781214 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:15:07.781220 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:15:07.781227 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:15:07.781233 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:15:07.781242 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:15:07.781249 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:15:07.781255 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:15:07.781262 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:15:07.781268 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 29 11:15:07.781275 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:15:07.781281 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:15:07.781288 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:15:07.781295 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:15:07.781302 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:15:07.781308 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:15:07.781315 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:15:07.781321 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:15:07.781329 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:15:07.781340 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:15:07.781352 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:15:07.781360 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:15:07.781382 systemd-journald[217]: Collecting audit messages is disabled. Jan 29 11:15:07.781398 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:15:07.781405 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:15:07.781411 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:15:07.781420 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:15:07.781427 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:15:07.781433 kernel: Bridge firewalling registered Jan 29 11:15:07.781439 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:15:07.781446 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:15:07.781455 systemd-journald[217]: Journal started Jan 29 11:15:07.781469 systemd-journald[217]: Runtime Journal (/run/log/journal/4dff5ff9cb044d32bffebd0e7ed16a62) is 4.8M, max 38.6M, 33.8M free. Jan 29 11:15:07.754319 systemd-modules-load[218]: Inserted module 'overlay' Jan 29 11:15:07.778151 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 29 11:15:07.784179 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:15:07.786179 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:15:07.787179 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:15:07.787376 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:15:07.791364 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:15:07.792257 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:15:07.792908 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:15:07.798820 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:15:07.799293 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:15:07.803262 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 29 11:15:07.803431 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:15:07.806263 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:15:07.809590 dracut-cmdline[249]: dracut-dracut-053 Jan 29 11:15:07.812006 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 29 11:15:07.824821 systemd-resolved[255]: Positive Trust Anchors: Jan 29 11:15:07.824832 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:15:07.824855 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:15:07.826811 systemd-resolved[255]: Defaulting to hostname 'linux'. Jan 29 11:15:07.827695 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:15:07.828128 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:15:07.856187 kernel: SCSI subsystem initialized Jan 29 11:15:07.863181 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:15:07.870180 kernel: iscsi: registered transport (tcp) Jan 29 11:15:07.884219 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:15:07.884264 kernel: QLogic iSCSI HBA Driver Jan 29 11:15:07.905256 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:15:07.910257 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:15:07.924384 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:15:07.925442 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:15:07.925455 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:15:07.958180 kernel: raid6: avx2x4 gen() 47258 MB/s Jan 29 11:15:07.973182 kernel: raid6: avx2x2 gen() 53972 MB/s Jan 29 11:15:07.990522 kernel: raid6: avx2x1 gen() 45713 MB/s Jan 29 11:15:07.990541 kernel: raid6: using algorithm avx2x2 gen() 53972 MB/s Jan 29 11:15:08.008526 kernel: raid6: .... xor() 27168 MB/s, rmw enabled Jan 29 11:15:08.008585 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:15:08.023193 kernel: xor: automatically using best checksumming function avx Jan 29 11:15:08.117184 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:15:08.122867 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:15:08.127267 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:15:08.134201 systemd-udevd[436]: Using default interface naming scheme 'v255'. 
Jan 29 11:15:08.136676 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:15:08.142240 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:15:08.148733 dracut-pre-trigger[441]: rd.md=0: removing MD RAID activation Jan 29 11:15:08.164080 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:15:08.168252 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:15:08.240365 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:15:08.244592 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:15:08.263176 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:15:08.263669 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:15:08.264118 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:15:08.264373 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:15:08.268285 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:15:08.279940 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:15:08.319180 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jan 29 11:15:08.324159 kernel: vmw_pvscsi: using 64bit dma Jan 29 11:15:08.324203 kernel: vmw_pvscsi: max_id: 16 Jan 29 11:15:08.324217 kernel: vmw_pvscsi: setting ring_pages to 8 Jan 29 11:15:08.332176 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jan 29 11:15:08.334435 kernel: vmw_pvscsi: enabling reqCallThreshold Jan 29 11:15:08.334453 kernel: vmw_pvscsi: driver-based request coalescing enabled Jan 29 11:15:08.334461 kernel: vmw_pvscsi: using MSI-X Jan 29 11:15:08.335734 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jan 29 11:15:08.342624 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jan 29 11:15:08.345817 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jan 29 11:15:08.346015 kernel: libata version 3.00 loaded. Jan 29 11:15:08.351188 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jan 29 11:15:08.355451 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:15:08.355462 kernel: ata_piix 0000:00:07.1: version 2.13 Jan 29 11:15:08.360798 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jan 29 11:15:08.360877 kernel: scsi host1: ata_piix Jan 29 11:15:08.360939 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jan 29 11:15:08.361011 kernel: scsi host2: ata_piix Jan 29 11:15:08.361077 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jan 29 11:15:08.361086 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jan 29 11:15:08.362305 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:15:08.362381 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:15:08.362680 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:15:08.362796 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:15:08.362862 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:15:08.363024 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 29 11:15:08.366295 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:15:08.372901 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:15:08.372922 kernel: AES CTR mode by8 optimization enabled Jan 29 11:15:08.379055 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:15:08.383248 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:15:08.394340 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:15:08.528194 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jan 29 11:15:08.534196 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jan 29 11:15:08.551277 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jan 29 11:15:08.600190 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 29 11:15:08.600305 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jan 29 11:15:08.600392 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jan 29 11:15:08.600479 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jan 29 11:15:08.600563 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jan 29 11:15:08.600648 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 11:15:08.600660 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 29 11:15:08.600740 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:15:08.600752 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 29 11:15:08.657193 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (486) Jan 29 11:15:08.661184 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (498) Jan 29 11:15:08.663627 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jan 29 11:15:08.667087 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jan 29 11:15:08.669692 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jan 29 11:15:08.669993 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jan 29 11:15:08.673182 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jan 29 11:15:08.681279 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:15:08.707194 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:15:08.712181 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:15:09.720314 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:15:09.720354 disk-uuid[597]: The operation has completed successfully. Jan 29 11:15:09.799663 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:15:09.799720 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:15:09.804248 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:15:09.805956 sh[613]: Success Jan 29 11:15:09.814228 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 11:15:09.924754 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:15:09.934926 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:15:09.935299 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 11:15:09.999808 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 29 11:15:09.999841 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:15:09.999851 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:15:10.000911 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:15:10.001714 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:15:10.009180 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 11:15:10.011224 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:15:10.020253 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jan 29 11:15:10.021507 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:15:10.038555 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:15:10.038586 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:15:10.038595 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:15:10.076179 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:15:10.084689 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:15:10.088180 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:15:10.094751 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:15:10.098264 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:15:10.140294 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jan 29 11:15:10.145389 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:15:10.203265 ignition[674]: Ignition 2.20.0 Jan 29 11:15:10.203272 ignition[674]: Stage: fetch-offline Jan 29 11:15:10.203290 ignition[674]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:15:10.203296 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:15:10.203573 ignition[674]: parsed url from cmdline: "" Jan 29 11:15:10.203575 ignition[674]: no config URL provided Jan 29 11:15:10.203583 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:15:10.203600 ignition[674]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:15:10.204628 ignition[674]: config successfully fetched Jan 29 11:15:10.204646 ignition[674]: parsing config with SHA512: a75c139a1940bba0780c18ae4683dffe2515ce72d9a2a9bec7df0472feba101d616f2245c4b89fa0b279aa1c94e40094414ef52398828b9e196ce601e873e8b8 Jan 29 11:15:10.207524 unknown[674]: fetched base config from "system" Jan 29 11:15:10.207636 unknown[674]: fetched user config from "vmware" Jan 29 11:15:10.208008 ignition[674]: fetch-offline: fetch-offline passed Jan 29 11:15:10.208396 ignition[674]: Ignition finished successfully Jan 29 11:15:10.209104 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:15:10.210074 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:15:10.218276 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 29 11:15:10.229776 systemd-networkd[808]: lo: Link UP Jan 29 11:15:10.229782 systemd-networkd[808]: lo: Gained carrier Jan 29 11:15:10.230488 systemd-networkd[808]: Enumeration completed Jan 29 11:15:10.230736 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:15:10.230752 systemd-networkd[808]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jan 29 11:15:10.231067 systemd[1]: Reached target network.target - Network. Jan 29 11:15:10.231302 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 11:15:10.234178 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jan 29 11:15:10.234283 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jan 29 11:15:10.234020 systemd-networkd[808]: ens192: Link UP Jan 29 11:15:10.234022 systemd-networkd[808]: ens192: Gained carrier Jan 29 11:15:10.241283 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:15:10.248470 ignition[810]: Ignition 2.20.0 Jan 29 11:15:10.248477 ignition[810]: Stage: kargs Jan 29 11:15:10.248611 ignition[810]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:15:10.248617 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:15:10.249162 ignition[810]: kargs: kargs passed Jan 29 11:15:10.249201 ignition[810]: Ignition finished successfully Jan 29 11:15:10.250333 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:15:10.254286 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:15:10.261017 ignition[817]: Ignition 2.20.0 Jan 29 11:15:10.261025 ignition[817]: Stage: disks Jan 29 11:15:10.261138 ignition[817]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:15:10.261144 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:15:10.261748 ignition[817]: disks: disks passed Jan 29 11:15:10.261777 ignition[817]: Ignition finished successfully Jan 29 11:15:10.262687 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:15:10.262968 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:15:10.263079 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:15:10.263201 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:15:10.263296 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:15:10.263391 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:15:10.267273 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:15:10.277226 systemd-fsck[825]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 11:15:10.278180 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:15:10.282234 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:15:10.337286 kernel: EXT4-fs (sda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 29 11:15:10.337718 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:15:10.338098 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:15:10.341233 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:15:10.343005 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jan 29 11:15:10.343501 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 11:15:10.343821 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:15:10.344077 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:15:10.346114 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:15:10.347194 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:15:10.351898 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (833) Jan 29 11:15:10.351930 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:15:10.351940 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:15:10.352818 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:15:10.357180 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:15:10.357916 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:15:10.377530 initrd-setup-root[857]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:15:10.379974 initrd-setup-root[864]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:15:10.382119 initrd-setup-root[871]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:15:10.384182 initrd-setup-root[878]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:15:10.442192 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:15:10.447250 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:15:10.449685 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:15:10.454201 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:15:10.464727 ignition[945]: INFO : Ignition 2.20.0 Jan 29 11:15:10.465222 ignition[945]: INFO : Stage: mount Jan 29 11:15:10.465222 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:15:10.465222 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:15:10.465964 ignition[945]: INFO : mount: mount passed Jan 29 11:15:10.466093 ignition[945]: INFO : Ignition finished successfully Jan 29 11:15:10.466683 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:15:10.470280 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:15:10.470484 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:15:10.998200 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:15:11.003337 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:15:11.011208 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (957) Jan 29 11:15:11.013360 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:15:11.013378 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:15:11.013386 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:15:11.017183 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:15:11.017788 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:15:11.033265 ignition[974]: INFO : Ignition 2.20.0 Jan 29 11:15:11.033265 ignition[974]: INFO : Stage: files Jan 29 11:15:11.033756 ignition[974]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:15:11.033756 ignition[974]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:15:11.033985 ignition[974]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:15:11.034624 ignition[974]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:15:11.034624 ignition[974]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:15:11.036788 ignition[974]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:15:11.036930 ignition[974]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:15:11.037075 ignition[974]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:15:11.037043 unknown[974]: wrote ssh authorized keys file for user: core Jan 29 11:15:11.038430 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 11:15:11.038600 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 29 11:15:11.181470 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:15:11.298067 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 11:15:11.298067 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:15:11.298478 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:15:11.298478 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:15:11.298478 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:15:11.298478 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:15:11.298478 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:15:11.298478 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:15:11.298478 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:15:11.299511 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:15:11.299511 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:15:11.299511 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:15:11.299511 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:15:11.299511 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:15:11.299511 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 29 11:15:11.630853 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 11:15:11.886480 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:15:11.886727 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 29 11:15:11.886727 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 29 11:15:11.886727 ignition[974]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 11:15:11.887560 ignition[974]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:15:11.887560 ignition[974]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:15:11.887560 ignition[974]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 11:15:11.887560 ignition[974]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 29 11:15:11.887560 ignition[974]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:15:11.887560 ignition[974]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:15:11.887560 ignition[974]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 29 11:15:11.887560 ignition[974]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 11:15:11.925783 ignition[974]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:15:11.927894 ignition[974]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:15:11.927894 ignition[974]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 11:15:11.927894 ignition[974]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:15:11.927894 ignition[974]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:15:11.929613 ignition[974]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:15:11.929613 ignition[974]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:15:11.929613 ignition[974]: INFO : files: files passed Jan 29 11:15:11.929613 ignition[974]: INFO : Ignition finished successfully Jan 29 11:15:11.929228 systemd[1]: Finished ignition-files.service - Ignition 
(files). Jan 29 11:15:11.934255 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:15:11.935653 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:15:11.936051 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:15:11.936105 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:15:11.942333 initrd-setup-root-after-ignition[1004]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:15:11.942599 initrd-setup-root-after-ignition[1004]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:15:11.943049 initrd-setup-root-after-ignition[1008]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:15:11.943951 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:15:11.944299 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:15:11.947240 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:15:11.959439 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:15:11.959489 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:15:11.959785 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:15:11.959889 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:15:11.960011 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:15:11.961241 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:15:11.969399 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:15:11.975247 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:15:11.980394 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:15:11.980549 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:15:11.980761 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:15:11.980943 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:15:11.981030 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:15:11.981299 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:15:11.981529 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:15:11.981705 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:15:11.981880 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:15:11.982112 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:15:11.982314 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:15:11.982647 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:15:11.982850 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:15:11.983090 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:15:11.983301 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:15:11.983464 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jan 29 11:15:11.983525 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:15:11.983767 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:15:11.983913 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:15:11.984091 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:15:11.984135 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:15:11.984324 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:15:11.984379 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:15:11.984613 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:15:11.984671 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:15:11.984934 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:15:11.985074 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:15:11.988186 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:15:11.988344 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:15:11.988539 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:15:11.988717 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:15:11.988782 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:15:11.988988 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:15:11.989032 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:15:11.989271 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:15:11.989328 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:15:11.989571 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:15:11.989624 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:15:11.994256 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:15:11.994404 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:15:11.994467 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:15:11.996299 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:15:11.996468 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:15:11.996549 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:15:11.996722 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:15:11.996800 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:15:12.000236 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:15:12.000314 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:15:12.004518 ignition[1028]: INFO : Ignition 2.20.0 Jan 29 11:15:12.004518 ignition[1028]: INFO : Stage: umount Jan 29 11:15:12.005116 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:15:12.005116 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:15:12.005116 ignition[1028]: INFO : umount: umount passed Jan 29 11:15:12.005705 ignition[1028]: INFO : Ignition finished successfully Jan 29 11:15:12.005617 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 29 11:15:12.005680 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:15:12.006039 systemd[1]: Stopped target network.target - Network. Jan 29 11:15:12.006134 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:15:12.006281 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:15:12.006393 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:15:12.006415 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:15:12.006515 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:15:12.006535 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:15:12.006647 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:15:12.006668 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:15:12.006880 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:15:12.007312 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:15:12.008429 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:15:12.012027 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:15:12.012087 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:15:12.012568 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:15:12.012592 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:15:12.017231 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:15:12.017331 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:15:12.017358 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:15:12.017492 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jan 29 11:15:12.017514 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jan 29 11:15:12.017740 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:15:12.019321 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:15:12.019372 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:15:12.022093 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:15:12.022135 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:15:12.022681 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:15:12.022706 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:15:12.022943 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:15:12.022966 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:15:12.024717 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:15:12.024932 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:15:12.025349 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:15:12.025505 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:15:12.025988 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:15:12.026130 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:15:12.026392 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 29 11:15:12.026409 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:15:12.026512 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:15:12.026535 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:15:12.026697 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:15:12.026718 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:15:12.026850 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:15:12.026872 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:15:12.028244 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:15:12.028363 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:15:12.028389 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:15:12.028577 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:15:12.028599 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:15:12.033292 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:15:12.033354 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:15:12.068755 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:15:12.068811 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:15:12.069164 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:15:12.069295 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:15:12.069330 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:15:12.072259 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:15:12.076040 systemd[1]: Switching root. Jan 29 11:15:12.117501 systemd-journald[217]: Journal stopped Jan 29 11:15:13.144722 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jan 29 11:15:13.144751 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:15:13.144759 kernel: SELinux: policy capability open_perms=1 Jan 29 11:15:13.144765 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:15:13.144770 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:15:13.144776 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:15:13.144783 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:15:13.144789 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:15:13.144795 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:15:13.144801 kernel: audit: type=1403 audit(1738149312.617:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:15:13.144808 systemd[1]: Successfully loaded SELinux policy in 30.700ms. Jan 29 11:15:13.144815 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.288ms. Jan 29 11:15:13.144822 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:15:13.144829 systemd[1]: Detected virtualization vmware. Jan 29 11:15:13.144836 systemd[1]: Detected architecture x86-64. 
Jan 29 11:15:13.144842 systemd[1]: Detected first boot. Jan 29 11:15:13.144849 systemd[1]: Initializing machine ID from random generator. Jan 29 11:15:13.144857 zram_generator::config[1071]: No configuration found. Jan 29 11:15:13.144864 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:15:13.144871 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 29 11:15:13.144878 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Jan 29 11:15:13.144885 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:15:13.144891 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:15:13.144898 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:15:13.144904 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:15:13.144912 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:15:13.144919 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:15:13.144925 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:15:13.144932 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:15:13.144938 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:15:13.144945 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:15:13.144951 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:15:13.144959 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:15:13.144966 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:15:13.144973 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:15:13.144979 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:15:13.144986 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:15:13.144992 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:15:13.144999 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 11:15:13.145030 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:15:13.145041 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:15:13.145050 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:15:13.145057 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:15:13.145063 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:15:13.145071 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:15:13.145078 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:15:13.145085 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:15:13.145093 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:15:13.145100 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
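The "Ignoring unknown escape sequences" warnings above come from the coreos-metadata.service drop-in quoted in the same entries: inside a unit file, backslash sequences such as \K and \d are not recognized by systemd and are passed through with a warning. Run directly in a shell, the pipeline that unit appears to wrap would look roughly like this (interface name and the 10.x heuristic are taken from the log; treat this as a sketch, not the unit's exact contents):

    # Private (10.x) IPv4 on ens192: keep only the address that follows "inet "
    ip addr show ens192 | grep "inet 10." | grep -Po 'inet \K[\d.]+'
    # Any remaining "inet <digits>" match is treated as the public address
    ip addr show ens192 | grep -v "inet 10." | grep -Po 'inet \K[\d.]+'
    # Inside a unit file, doubling the backslashes (\\K, \\d) or moving the pipeline
    # into a separate script would typically avoid the escape-sequence warning.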
Jan 29 11:15:13.145107 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:15:13.145114 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:15:13.145121 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:15:13.145129 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:15:13.145136 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:15:13.145143 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:15:13.145150 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:15:13.145157 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:15:13.145164 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:15:13.146194 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:15:13.146204 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:15:13.146214 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:15:13.146221 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:15:13.146228 systemd[1]: Reached target machines.target - Containers. Jan 29 11:15:13.146235 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:15:13.146243 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Jan 29 11:15:13.146250 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:15:13.146257 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:15:13.146263 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:15:13.146270 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:15:13.146279 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:15:13.146286 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:15:13.146293 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:15:13.146300 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:15:13.146308 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:15:13.146315 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:15:13.146322 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:15:13.146329 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:15:13.146338 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:15:13.146344 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:15:13.146352 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:15:13.146359 kernel: fuse: init (API version 7.39) Jan 29 11:15:13.146367 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:15:13.146374 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 29 11:15:13.146381 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:15:13.146388 systemd[1]: Stopped verity-setup.service. Jan 29 11:15:13.146395 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:15:13.146404 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:15:13.146411 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:15:13.146418 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:15:13.146425 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:15:13.146432 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:15:13.146439 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:15:13.146446 kernel: ACPI: bus type drm_connector registered Jan 29 11:15:13.146453 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:15:13.146461 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:15:13.146468 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:15:13.146475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:15:13.146482 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:15:13.146488 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:15:13.146495 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:15:13.146502 kernel: loop: module loaded Jan 29 11:15:13.146508 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:15:13.146515 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:15:13.146523 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:15:13.146530 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:15:13.146538 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:15:13.146544 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:15:13.146551 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:15:13.146558 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:15:13.146565 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:15:13.146587 systemd-journald[1154]: Collecting audit messages is disabled. Jan 29 11:15:13.146605 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:15:13.146613 systemd-journald[1154]: Journal started Jan 29 11:15:13.146629 systemd-journald[1154]: Runtime Journal (/run/log/journal/da50ad2f11aa48b6bb8aaace16a0d596) is 4.8M, max 38.6M, 33.8M free. Jan 29 11:15:12.965022 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:15:12.979923 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 29 11:15:12.980141 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:15:13.147844 jq[1138]: true Jan 29 11:15:13.151337 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:15:13.151359 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 29 11:15:13.151517 jq[1163]: true Jan 29 11:15:13.152203 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:15:13.153184 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:15:13.157188 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:15:13.167826 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:15:13.178023 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:15:13.178061 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:15:13.183161 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:15:13.183230 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:15:13.188293 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:15:13.188323 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:15:13.192582 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:15:13.198586 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:15:13.198619 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:15:13.200916 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:15:13.201094 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:15:13.201276 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:15:13.201495 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:15:13.219538 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:15:13.220817 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:15:13.234323 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:15:13.234907 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:15:13.238598 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:15:13.240268 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:15:13.250633 kernel: loop0: detected capacity change from 0 to 2960 Jan 29 11:15:13.250719 systemd-journald[1154]: Time spent on flushing to /var/log/journal/da50ad2f11aa48b6bb8aaace16a0d596 is 68.259ms for 1837 entries. Jan 29 11:15:13.250719 systemd-journald[1154]: System Journal (/var/log/journal/da50ad2f11aa48b6bb8aaace16a0d596) is 8.0M, max 584.8M, 576.8M free. Jan 29 11:15:13.334221 systemd-journald[1154]: Received client request to flush runtime journal. Jan 29 11:15:13.334249 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:15:13.334261 kernel: loop1: detected capacity change from 0 to 218376 Jan 29 11:15:13.279760 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Jan 29 11:15:13.266343 ignition[1174]: Ignition 2.20.0 Jan 29 11:15:13.306735 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jan 29 11:15:13.266538 ignition[1174]: deleting config from guestinfo properties Jan 29 11:15:13.307607 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:15:13.278687 ignition[1174]: Successfully deleted config Jan 29 11:15:13.320580 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:15:13.329299 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:15:13.329630 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:15:13.333555 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:15:13.337711 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:15:13.351881 udevadm[1234]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 11:15:13.358206 kernel: loop2: detected capacity change from 0 to 141000 Jan 29 11:15:13.359512 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Jan 29 11:15:13.359522 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Jan 29 11:15:13.363568 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:15:13.404274 kernel: loop3: detected capacity change from 0 to 138184 Jan 29 11:15:13.454186 kernel: loop4: detected capacity change from 0 to 2960 Jan 29 11:15:13.472204 kernel: loop5: detected capacity change from 0 to 218376 Jan 29 11:15:13.491198 kernel: loop6: detected capacity change from 0 to 141000 Jan 29 11:15:13.519260 kernel: loop7: detected capacity change from 0 to 138184 Jan 29 11:15:13.543415 (sd-merge)[1241]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Jan 29 11:15:13.544224 (sd-merge)[1241]: Merged extensions into '/usr'. Jan 29 11:15:13.549292 systemd[1]: Reloading requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:15:13.549337 systemd[1]: Reloading... Jan 29 11:15:13.605198 zram_generator::config[1267]: No configuration found. Jan 29 11:15:13.728814 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 29 11:15:13.745803 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:15:13.776573 systemd[1]: Reloading finished in 226 ms. Jan 29 11:15:13.780530 ldconfig[1185]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:15:13.798769 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:15:13.799074 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:15:13.799421 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:15:13.806330 systemd[1]: Starting ensure-sysext.service... Jan 29 11:15:13.807369 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:15:13.809265 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
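The (sd-merge) entries above show systemd-sysext merging the containerd-flatcar, docker-flatcar, kubernetes, and oem-vmware extension images into /usr and /opt. On the running system the merge state can be inspected with the systemd-sysext tool; a brief sketch:

    # List the extension images systemd-sysext can see
    systemd-sysext list
    # Show what is currently merged into /usr and /opt
    systemd-sysext status
    # Re-evaluate the extension directories after adding or removing an image
    systemd-sysext refresh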
Jan 29 11:15:13.819343 systemd[1]: Reloading requested from client PID 1324 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:15:13.819356 systemd[1]: Reloading... Jan 29 11:15:13.835586 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:15:13.835766 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:15:13.836330 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:15:13.836509 systemd-tmpfiles[1325]: ACLs are not supported, ignoring. Jan 29 11:15:13.836547 systemd-tmpfiles[1325]: ACLs are not supported, ignoring. Jan 29 11:15:13.839477 systemd-tmpfiles[1325]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:15:13.839484 systemd-tmpfiles[1325]: Skipping /boot Jan 29 11:15:13.846366 systemd-tmpfiles[1325]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:15:13.846373 systemd-tmpfiles[1325]: Skipping /boot Jan 29 11:15:13.851654 systemd-udevd[1326]: Using default interface naming scheme 'v255'. Jan 29 11:15:13.864177 zram_generator::config[1349]: No configuration found. Jan 29 11:15:13.975184 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 11:15:13.977086 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 29 11:15:13.980193 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:15:14.006148 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:15:14.025193 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1359) Jan 29 11:15:14.049190 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Jan 29 11:15:14.055179 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jan 29 11:15:14.055350 kernel: Guest personality initialized and is active Jan 29 11:15:14.056309 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jan 29 11:15:14.056327 kernel: Initialized host personality Jan 29 11:15:14.059134 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 11:15:14.059228 systemd[1]: Reloading finished in 239 ms. Jan 29 11:15:14.068340 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:15:14.074126 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:15:14.092230 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 11:15:14.106134 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:15:14.109849 (udev-worker)[1371]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jan 29 11:15:14.111351 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:15:14.115450 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:15:14.121922 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 29 11:15:14.122196 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:15:14.124164 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:15:14.128165 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:15:14.129462 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:15:14.129645 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:15:14.131473 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:15:14.133673 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:15:14.137590 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:15:14.139028 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:15:14.139226 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:15:14.140290 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:15:14.141219 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:15:14.141568 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:15:14.141695 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:15:14.142521 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:15:14.142612 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:15:14.142937 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:15:14.143321 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:15:14.152187 systemd[1]: Finished ensure-sysext.service. Jan 29 11:15:14.157289 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jan 29 11:15:14.163092 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:15:14.163272 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:15:14.163347 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:15:14.170069 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:15:14.172321 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:15:14.174997 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:15:14.176240 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:15:14.177656 augenrules[1478]: No rules Jan 29 11:15:14.177624 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:15:14.181151 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:15:14.191430 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:15:14.193123 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:15:14.193639 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 29 11:15:14.193928 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:15:14.194460 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:15:14.204346 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:15:14.204569 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:15:14.204932 lvm[1488]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:15:14.213149 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:15:14.214161 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:15:14.228023 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:15:14.228506 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:15:14.235295 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:15:14.241095 lvm[1503]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:15:14.270280 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:15:14.271052 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:15:14.273530 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:15:14.273719 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:15:14.280104 systemd-networkd[1455]: lo: Link UP Jan 29 11:15:14.280247 systemd-networkd[1455]: lo: Gained carrier Jan 29 11:15:14.281028 systemd-networkd[1455]: Enumeration completed Jan 29 11:15:14.281183 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:15:14.281425 systemd-networkd[1455]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Jan 29 11:15:14.283543 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jan 29 11:15:14.283657 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jan 29 11:15:14.283988 systemd-networkd[1455]: ens192: Link UP Jan 29 11:15:14.284149 systemd-networkd[1455]: ens192: Gained carrier Jan 29 11:15:14.288278 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:15:14.288739 systemd-resolved[1458]: Positive Trust Anchors: Jan 29 11:15:14.289027 systemd-timesyncd[1475]: Network configuration changed, trying to establish connection. Jan 29 11:15:14.289182 systemd-resolved[1458]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:15:14.289237 systemd-resolved[1458]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:15:14.292400 systemd-resolved[1458]: Defaulting to hostname 'linux'. 
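At this point systemd-networkd has brought up ens192 from the 00-vmware.network file written by Ignition earlier in the boot, and systemd-resolved has fallen back to the hostname 'linux'. Link and resolver state can be checked with the usual tools; a short sketch (output differs per host):

    # Per-link view: addresses, DHCP lease, and carrier state for ens192
    networkctl status ens192
    # Global resolver configuration as seen by systemd-resolved
    resolvectl status
    # The network unit Ignition wrote during the files stage
    cat /etc/systemd/network/00-vmware.network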
Jan 29 11:15:14.293410 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:15:14.293563 systemd[1]: Reached target network.target - Network. Jan 29 11:15:14.293661 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:15:14.293782 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:15:14.293936 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:15:14.294070 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:15:14.294292 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:15:14.294447 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:15:14.294564 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:15:14.294677 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:15:14.294700 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:15:14.294789 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:15:14.295305 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:15:14.296321 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:15:14.303173 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:15:14.303637 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:15:14.303788 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:15:14.303891 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:15:14.304022 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:15:14.304041 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:15:14.304742 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:15:14.307261 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:15:14.308904 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:15:14.310287 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:15:14.310398 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:15:14.312753 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:15:14.314128 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:15:14.315742 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:15:14.317403 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:15:14.324381 jq[1515]: false Jan 29 11:15:14.325052 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:15:14.325362 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:15:14.325745 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 29 11:15:14.326123 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:15:14.328060 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:15:14.330052 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Jan 29 11:15:14.333339 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:15:14.333453 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:15:14.345742 extend-filesystems[1516]: Found loop4 Jan 29 11:15:14.349234 extend-filesystems[1516]: Found loop5 Jan 29 11:15:14.350459 extend-filesystems[1516]: Found loop6 Jan 29 11:15:14.350597 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:15:14.350707 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:15:14.351868 extend-filesystems[1516]: Found loop7 Jan 29 11:15:14.351868 extend-filesystems[1516]: Found sda Jan 29 11:15:14.351868 extend-filesystems[1516]: Found sda1 Jan 29 11:15:14.351868 extend-filesystems[1516]: Found sda2 Jan 29 11:15:14.351868 extend-filesystems[1516]: Found sda3 Jan 29 11:15:14.351868 extend-filesystems[1516]: Found usr Jan 29 11:15:14.358950 extend-filesystems[1516]: Found sda4 Jan 29 11:15:14.358950 extend-filesystems[1516]: Found sda6 Jan 29 11:15:14.358950 extend-filesystems[1516]: Found sda7 Jan 29 11:15:14.358950 extend-filesystems[1516]: Found sda9 Jan 29 11:15:14.358950 extend-filesystems[1516]: Checking size of /dev/sda9 Jan 29 11:15:14.362733 jq[1525]: true Jan 29 11:15:14.362010 (ntainerd)[1540]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:15:14.366362 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:15:14.366502 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:15:14.368764 tar[1529]: linux-amd64/LICENSE Jan 29 11:15:14.373672 tar[1529]: linux-amd64/helm Jan 29 11:15:14.376221 jq[1545]: true Jan 29 11:15:14.376695 dbus-daemon[1514]: [system] SELinux support is enabled Jan 29 11:15:14.376896 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:15:14.378479 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:15:14.378496 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:15:14.378721 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:15:14.378767 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:15:14.378861 systemd-logind[1521]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:15:14.378872 systemd-logind[1521]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:15:14.380112 systemd-logind[1521]: New seat seat0. 
Jan 29 11:15:14.380780 update_engine[1524]: I20250129 11:15:14.372145 1524 main.cc:92] Flatcar Update Engine starting Jan 29 11:15:14.388929 extend-filesystems[1516]: Old size kept for /dev/sda9 Jan 29 11:15:14.388929 extend-filesystems[1516]: Found sr0 Jan 29 11:15:14.390195 update_engine[1524]: I20250129 11:15:14.389183 1524 update_check_scheduler.cc:74] Next update check in 8m17s Jan 29 11:15:14.389399 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:15:14.389649 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:15:14.389745 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:15:14.396329 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Jan 29 11:15:14.399443 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:15:14.409314 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Jan 29 11:15:14.410284 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:15:14.437046 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Jan 29 11:15:14.444468 unknown[1553]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Jan 29 11:15:14.446227 unknown[1553]: Core dump limit set to -1 Jan 29 11:15:14.454407 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1373) Jan 29 11:15:14.467187 kernel: NET: Registered PF_VSOCK protocol family Jan 29 11:15:14.509096 bash[1575]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:15:14.506905 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:15:14.507878 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:15:14.548481 locksmithd[1559]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:15:14.553544 sshd_keygen[1547]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:15:14.579423 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:15:14.588458 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:15:14.603083 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:15:14.603229 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:15:14.609501 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:15:14.622053 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:15:14.627521 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:15:14.629422 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:15:14.630248 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:15:14.652986 containerd[1540]: time="2025-01-29T11:15:14.652935835Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:15:14.679946 containerd[1540]: time="2025-01-29T11:15:14.679805504Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:15:14.682853 containerd[1540]: time="2025-01-29T11:15:14.682673058Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:15:14.682853 containerd[1540]: time="2025-01-29T11:15:14.682707469Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:15:14.682853 containerd[1540]: time="2025-01-29T11:15:14.682726112Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:15:14.682936 containerd[1540]: time="2025-01-29T11:15:14.682865250Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:15:14.682936 containerd[1540]: time="2025-01-29T11:15:14.682879956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:15:14.682962 containerd[1540]: time="2025-01-29T11:15:14.682930294Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:15:14.682962 containerd[1540]: time="2025-01-29T11:15:14.682943087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:15:14.683233 containerd[1540]: time="2025-01-29T11:15:14.683104026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:15:14.683233 containerd[1540]: time="2025-01-29T11:15:14.683119481Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:15:14.683233 containerd[1540]: time="2025-01-29T11:15:14.683133258Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:15:14.683233 containerd[1540]: time="2025-01-29T11:15:14.683143395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:15:14.683233 containerd[1540]: time="2025-01-29T11:15:14.683221847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:15:14.683521 containerd[1540]: time="2025-01-29T11:15:14.683377561Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:15:14.683546 containerd[1540]: time="2025-01-29T11:15:14.683529344Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:15:14.683546 containerd[1540]: time="2025-01-29T11:15:14.683542500Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:15:14.683618 containerd[1540]: time="2025-01-29T11:15:14.683604942Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 29 11:15:14.683660 containerd[1540]: time="2025-01-29T11:15:14.683647525Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:15:14.695201 containerd[1540]: time="2025-01-29T11:15:14.695130502Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:15:14.695437 containerd[1540]: time="2025-01-29T11:15:14.695262454Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:15:14.695437 containerd[1540]: time="2025-01-29T11:15:14.695278088Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:15:14.695437 containerd[1540]: time="2025-01-29T11:15:14.695288217Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:15:14.695437 containerd[1540]: time="2025-01-29T11:15:14.695314404Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:15:14.695437 containerd[1540]: time="2025-01-29T11:15:14.695419957Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:15:14.695610 containerd[1540]: time="2025-01-29T11:15:14.695587568Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:15:14.695673 containerd[1540]: time="2025-01-29T11:15:14.695660972Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:15:14.695985 containerd[1540]: time="2025-01-29T11:15:14.695858452Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:15:14.695985 containerd[1540]: time="2025-01-29T11:15:14.695872467Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:15:14.695985 containerd[1540]: time="2025-01-29T11:15:14.695880962Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:15:14.695985 containerd[1540]: time="2025-01-29T11:15:14.695888392Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:15:14.695985 containerd[1540]: time="2025-01-29T11:15:14.695895297Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:15:14.695985 containerd[1540]: time="2025-01-29T11:15:14.695903529Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:15:14.695985 containerd[1540]: time="2025-01-29T11:15:14.695911509Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:15:14.695985 containerd[1540]: time="2025-01-29T11:15:14.695929143Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:15:14.695985 containerd[1540]: time="2025-01-29T11:15:14.695938943Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:15:14.695985 containerd[1540]: time="2025-01-29T11:15:14.695945894Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 29 11:15:14.695985 containerd[1540]: time="2025-01-29T11:15:14.695961687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:15:14.695985 containerd[1540]: time="2025-01-29T11:15:14.695971685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:15:14.695985 containerd[1540]: time="2025-01-29T11:15:14.695979015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:15:14.695985 containerd[1540]: time="2025-01-29T11:15:14.695986173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:15:14.696374 containerd[1540]: time="2025-01-29T11:15:14.695992853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:15:14.696374 containerd[1540]: time="2025-01-29T11:15:14.696115390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:15:14.696374 containerd[1540]: time="2025-01-29T11:15:14.696123244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:15:14.696374 containerd[1540]: time="2025-01-29T11:15:14.696129937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:15:14.696374 containerd[1540]: time="2025-01-29T11:15:14.696136942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:15:14.696374 containerd[1540]: time="2025-01-29T11:15:14.696145192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:15:14.696374 containerd[1540]: time="2025-01-29T11:15:14.696151699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:15:14.696374 containerd[1540]: time="2025-01-29T11:15:14.696158055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:15:14.696699 containerd[1540]: time="2025-01-29T11:15:14.696164231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:15:14.696718 containerd[1540]: time="2025-01-29T11:15:14.696703778Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:15:14.696736 containerd[1540]: time="2025-01-29T11:15:14.696719392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:15:14.696736 containerd[1540]: time="2025-01-29T11:15:14.696728206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:15:14.696736 containerd[1540]: time="2025-01-29T11:15:14.696735163Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:15:14.697035 containerd[1540]: time="2025-01-29T11:15:14.696773487Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:15:14.697035 containerd[1540]: time="2025-01-29T11:15:14.696786959Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:15:14.697035 containerd[1540]: time="2025-01-29T11:15:14.696793461Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:15:14.697035 containerd[1540]: time="2025-01-29T11:15:14.696800747Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:15:14.697035 containerd[1540]: time="2025-01-29T11:15:14.696805715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:15:14.697035 containerd[1540]: time="2025-01-29T11:15:14.696813294Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:15:14.697035 containerd[1540]: time="2025-01-29T11:15:14.696819857Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:15:14.697035 containerd[1540]: time="2025-01-29T11:15:14.696866907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 11:15:14.697143 containerd[1540]: time="2025-01-29T11:15:14.697099995Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:15:14.697143 containerd[1540]: time="2025-01-29T11:15:14.697128576Z" level=info msg="Connect containerd service" Jan 29 11:15:14.697478 containerd[1540]: time="2025-01-29T11:15:14.697144672Z" level=info msg="using legacy CRI server" Jan 29 11:15:14.697478 containerd[1540]: time="2025-01-29T11:15:14.697304297Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:15:14.697478 containerd[1540]: time="2025-01-29T11:15:14.697410969Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:15:14.697796 containerd[1540]: time="2025-01-29T11:15:14.697783185Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:15:14.697850 containerd[1540]: time="2025-01-29T11:15:14.697832515Z" level=info msg="Start subscribing containerd event" Jan 29 11:15:14.697872 containerd[1540]: time="2025-01-29T11:15:14.697858037Z" level=info msg="Start recovering state" Jan 29 11:15:14.697909 containerd[1540]: time="2025-01-29T11:15:14.697898406Z" level=info msg="Start event monitor" Jan 29 11:15:14.697929 containerd[1540]: time="2025-01-29T11:15:14.697911386Z" level=info msg="Start snapshots syncer" Jan 29 11:15:14.697929 containerd[1540]: time="2025-01-29T11:15:14.697916772Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:15:14.697929 containerd[1540]: time="2025-01-29T11:15:14.697920897Z" level=info msg="Start streaming server" Jan 29 11:15:14.698638 containerd[1540]: time="2025-01-29T11:15:14.698481748Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:15:14.698638 containerd[1540]: time="2025-01-29T11:15:14.698524327Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:15:14.698611 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:15:14.699264 containerd[1540]: time="2025-01-29T11:15:14.698885702Z" level=info msg="containerd successfully booted in 0.047358s" Jan 29 11:15:14.852026 tar[1529]: linux-amd64/README.md Jan 29 11:15:14.863746 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:15:16.305307 systemd-networkd[1455]: ens192: Gained IPv6LL Jan 29 11:15:16.305651 systemd-timesyncd[1475]: Network configuration changed, trying to establish connection. Jan 29 11:15:16.306561 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:15:16.307231 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:15:16.311362 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jan 29 11:15:16.313032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:15:16.314348 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:15:16.332758 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:15:16.343442 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:15:16.343639 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. 
Jan 29 11:15:16.344358 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:15:17.342143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:15:17.342550 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:15:17.343218 systemd[1]: Startup finished in 1.014s (kernel) + 4.992s (initrd) + 4.754s (userspace) = 10.762s. Jan 29 11:15:17.349412 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:15:17.353663 agetty[1604]: failed to open credentials directory Jan 29 11:15:17.354332 agetty[1606]: failed to open credentials directory Jan 29 11:15:17.384903 login[1604]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 11:15:17.384911 login[1606]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 11:15:17.390666 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:15:17.396601 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:15:17.399215 systemd-logind[1521]: New session 1 of user core. Jan 29 11:15:17.402700 systemd-logind[1521]: New session 2 of user core. Jan 29 11:15:17.406533 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:15:17.410468 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:15:17.416147 (systemd)[1698]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:15:17.492760 systemd[1698]: Queued start job for default target default.target. Jan 29 11:15:17.501156 systemd[1698]: Created slice app.slice - User Application Slice. Jan 29 11:15:17.501193 systemd[1698]: Reached target paths.target - Paths. Jan 29 11:15:17.501203 systemd[1698]: Reached target timers.target - Timers. Jan 29 11:15:17.502108 systemd[1698]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:15:17.509841 systemd[1698]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:15:17.509882 systemd[1698]: Reached target sockets.target - Sockets. Jan 29 11:15:17.509892 systemd[1698]: Reached target basic.target - Basic System. Jan 29 11:15:17.509919 systemd[1698]: Reached target default.target - Main User Target. Jan 29 11:15:17.509938 systemd[1698]: Startup finished in 89ms. Jan 29 11:15:17.510053 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:15:17.515291 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:15:17.516026 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:15:17.931595 kubelet[1691]: E0129 11:15:17.931561 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:15:17.932992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:15:17.933110 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:15:28.142054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:15:28.153317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:15:28.228536 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:15:28.231342 (kubelet)[1741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:15:28.281445 kubelet[1741]: E0129 11:15:28.281416 1741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:15:28.284594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:15:28.284760 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:15:38.391999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:15:38.399305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:15:38.736697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:15:38.742424 (kubelet)[1756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:15:38.763850 kubelet[1756]: E0129 11:15:38.763793 1756 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:15:38.765237 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:15:38.765374 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:16:59.800030 systemd-resolved[1458]: Clock change detected. Flushing caches. Jan 29 11:16:59.800583 systemd-timesyncd[1475]: Contacted time server 142.202.190.19:123 (2.flatcar.pool.ntp.org). Jan 29 11:16:59.800636 systemd-timesyncd[1475]: Initial clock synchronization to Wed 2025-01-29 11:16:59.799943 UTC. Jan 29 11:17:01.871280 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 11:17:01.879577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:17:02.328655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:17:02.331498 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:17:02.370894 kubelet[1771]: E0129 11:17:02.370861 1771 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:17:02.371903 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:17:02.371980 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:17:07.610462 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:17:07.618630 systemd[1]: Started sshd@0-139.178.70.108:22-147.75.109.163:46336.service - OpenSSH per-connection server daemon (147.75.109.163:46336). 
Jan 29 11:17:07.681133 sshd[1779]: Accepted publickey for core from 147.75.109.163 port 46336 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:17:07.681772 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:07.684400 systemd-logind[1521]: New session 3 of user core. Jan 29 11:17:07.692460 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:17:07.752512 systemd[1]: Started sshd@1-139.178.70.108:22-147.75.109.163:46340.service - OpenSSH per-connection server daemon (147.75.109.163:46340). Jan 29 11:17:07.786207 sshd[1784]: Accepted publickey for core from 147.75.109.163 port 46340 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:17:07.787248 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:07.789878 systemd-logind[1521]: New session 4 of user core. Jan 29 11:17:07.799449 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:17:07.848754 sshd[1786]: Connection closed by 147.75.109.163 port 46340 Jan 29 11:17:07.849039 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:07.859669 systemd[1]: sshd@1-139.178.70.108:22-147.75.109.163:46340.service: Deactivated successfully. Jan 29 11:17:07.860704 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:17:07.861516 systemd-logind[1521]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:17:07.862398 systemd[1]: Started sshd@2-139.178.70.108:22-147.75.109.163:46346.service - OpenSSH per-connection server daemon (147.75.109.163:46346). Jan 29 11:17:07.863426 systemd-logind[1521]: Removed session 4. Jan 29 11:17:07.899158 sshd[1791]: Accepted publickey for core from 147.75.109.163 port 46346 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:17:07.899958 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:07.903348 systemd-logind[1521]: New session 5 of user core. Jan 29 11:17:07.912502 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:17:07.960319 sshd[1793]: Connection closed by 147.75.109.163 port 46346 Jan 29 11:17:07.960785 sshd-session[1791]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:07.969133 systemd[1]: sshd@2-139.178.70.108:22-147.75.109.163:46346.service: Deactivated successfully. Jan 29 11:17:07.970079 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:17:07.971058 systemd-logind[1521]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:17:07.971948 systemd[1]: Started sshd@3-139.178.70.108:22-147.75.109.163:46350.service - OpenSSH per-connection server daemon (147.75.109.163:46350). Jan 29 11:17:07.973508 systemd-logind[1521]: Removed session 5. Jan 29 11:17:08.021843 sshd[1798]: Accepted publickey for core from 147.75.109.163 port 46350 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:17:08.022676 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:08.026056 systemd-logind[1521]: New session 6 of user core. Jan 29 11:17:08.034472 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 29 11:17:08.085452 sshd[1800]: Connection closed by 147.75.109.163 port 46350 Jan 29 11:17:08.085320 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:08.093097 systemd[1]: sshd@3-139.178.70.108:22-147.75.109.163:46350.service: Deactivated successfully. Jan 29 11:17:08.094014 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:17:08.094958 systemd-logind[1521]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:17:08.100639 systemd[1]: Started sshd@4-139.178.70.108:22-147.75.109.163:46354.service - OpenSSH per-connection server daemon (147.75.109.163:46354). Jan 29 11:17:08.102565 systemd-logind[1521]: Removed session 6. Jan 29 11:17:08.136809 sshd[1805]: Accepted publickey for core from 147.75.109.163 port 46354 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:17:08.137816 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:08.141543 systemd-logind[1521]: New session 7 of user core. Jan 29 11:17:08.150510 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:17:08.208996 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:17:08.209199 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:17:08.221625 sudo[1808]: pam_unix(sudo:session): session closed for user root Jan 29 11:17:08.222389 sshd[1807]: Connection closed by 147.75.109.163 port 46354 Jan 29 11:17:08.222652 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:08.231755 systemd[1]: sshd@4-139.178.70.108:22-147.75.109.163:46354.service: Deactivated successfully. Jan 29 11:17:08.232832 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:17:08.233793 systemd-logind[1521]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:17:08.238557 systemd[1]: Started sshd@5-139.178.70.108:22-147.75.109.163:46366.service - OpenSSH per-connection server daemon (147.75.109.163:46366). Jan 29 11:17:08.239336 systemd-logind[1521]: Removed session 7. Jan 29 11:17:08.274669 sshd[1813]: Accepted publickey for core from 147.75.109.163 port 46366 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:17:08.275303 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:08.277817 systemd-logind[1521]: New session 8 of user core. Jan 29 11:17:08.283444 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:17:08.332095 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:17:08.332314 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:17:08.334684 sudo[1817]: pam_unix(sudo:session): session closed for user root Jan 29 11:17:08.338248 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:17:08.338466 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:17:08.350859 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:17:08.368942 augenrules[1839]: No rules Jan 29 11:17:08.369238 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:17:08.369352 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 29 11:17:08.370119 sudo[1816]: pam_unix(sudo:session): session closed for user root Jan 29 11:17:08.371038 sshd[1815]: Connection closed by 147.75.109.163 port 46366 Jan 29 11:17:08.371816 sshd-session[1813]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:08.375648 systemd[1]: sshd@5-139.178.70.108:22-147.75.109.163:46366.service: Deactivated successfully. Jan 29 11:17:08.376441 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:17:08.376818 systemd-logind[1521]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:17:08.377753 systemd[1]: Started sshd@6-139.178.70.108:22-147.75.109.163:46376.service - OpenSSH per-connection server daemon (147.75.109.163:46376). Jan 29 11:17:08.379528 systemd-logind[1521]: Removed session 8. Jan 29 11:17:08.413191 sshd[1847]: Accepted publickey for core from 147.75.109.163 port 46376 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:17:08.414124 sshd-session[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:08.417410 systemd-logind[1521]: New session 9 of user core. Jan 29 11:17:08.426479 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:17:08.475701 sudo[1850]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:17:08.475901 sudo[1850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:17:08.762569 (dockerd)[1867]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:17:08.762586 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:17:09.022038 dockerd[1867]: time="2025-01-29T11:17:09.021735433Z" level=info msg="Starting up" Jan 29 11:17:09.072985 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3752839726-merged.mount: Deactivated successfully. Jan 29 11:17:09.079446 systemd[1]: var-lib-docker-metacopy\x2dcheck3088213308-merged.mount: Deactivated successfully. Jan 29 11:17:09.093777 dockerd[1867]: time="2025-01-29T11:17:09.093611414Z" level=info msg="Loading containers: start." Jan 29 11:17:09.186390 kernel: Initializing XFRM netlink socket Jan 29 11:17:09.263071 systemd-networkd[1455]: docker0: Link UP Jan 29 11:17:09.279158 dockerd[1867]: time="2025-01-29T11:17:09.279001585Z" level=info msg="Loading containers: done." Jan 29 11:17:09.289319 dockerd[1867]: time="2025-01-29T11:17:09.289111105Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:17:09.289319 dockerd[1867]: time="2025-01-29T11:17:09.289164300Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 11:17:09.289319 dockerd[1867]: time="2025-01-29T11:17:09.289214610Z" level=info msg="Daemon has completed initialization" Jan 29 11:17:09.303344 dockerd[1867]: time="2025-01-29T11:17:09.303306433Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:17:09.303659 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 29 11:17:09.828956 containerd[1540]: time="2025-01-29T11:17:09.828923749Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 29 11:17:10.385982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3939352668.mount: Deactivated successfully. Jan 29 11:17:11.345099 containerd[1540]: time="2025-01-29T11:17:11.344861284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:11.345423 containerd[1540]: time="2025-01-29T11:17:11.345400948Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674824" Jan 29 11:17:11.350430 containerd[1540]: time="2025-01-29T11:17:11.350401966Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:11.356120 containerd[1540]: time="2025-01-29T11:17:11.356090968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:11.356703 containerd[1540]: time="2025-01-29T11:17:11.356620391Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 1.527669864s" Jan 29 11:17:11.356703 containerd[1540]: time="2025-01-29T11:17:11.356638072Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 29 11:17:11.357087 containerd[1540]: time="2025-01-29T11:17:11.357070707Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 29 11:17:12.563495 update_engine[1524]: I20250129 11:17:12.563412 1524 update_attempter.cc:509] Updating boot flags... Jan 29 11:17:12.583927 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 11:17:12.588760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:17:12.598923 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2122) Jan 29 11:17:12.714833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:17:12.717558 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:17:12.829843 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2122) Jan 29 11:17:12.830980 kubelet[2136]: E0129 11:17:12.830943 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:17:12.832678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:17:12.832838 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 11:17:13.028344 containerd[1540]: time="2025-01-29T11:17:13.028295990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:13.041436 containerd[1540]: time="2025-01-29T11:17:13.041403061Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770711" Jan 29 11:17:13.073876 containerd[1540]: time="2025-01-29T11:17:13.073822218Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:13.142854 containerd[1540]: time="2025-01-29T11:17:13.142810274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:13.143457 containerd[1540]: time="2025-01-29T11:17:13.143168807Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 1.786043079s" Jan 29 11:17:13.143457 containerd[1540]: time="2025-01-29T11:17:13.143188425Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 29 11:17:13.143629 containerd[1540]: time="2025-01-29T11:17:13.143617746Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 29 11:17:14.237351 containerd[1540]: time="2025-01-29T11:17:14.237323127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:14.237955 containerd[1540]: time="2025-01-29T11:17:14.237912709Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169759" Jan 29 11:17:14.238397 containerd[1540]: time="2025-01-29T11:17:14.238331837Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:14.240258 containerd[1540]: time="2025-01-29T11:17:14.240245590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:14.240815 containerd[1540]: time="2025-01-29T11:17:14.240735712Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 1.097066494s" Jan 29 11:17:14.240815 containerd[1540]: time="2025-01-29T11:17:14.240752023Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 29 11:17:14.241131 
containerd[1540]: time="2025-01-29T11:17:14.241077288Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 29 11:17:15.121809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3663098789.mount: Deactivated successfully. Jan 29 11:17:15.438336 containerd[1540]: time="2025-01-29T11:17:15.438047597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:15.443293 containerd[1540]: time="2025-01-29T11:17:15.443266780Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466" Jan 29 11:17:15.453750 containerd[1540]: time="2025-01-29T11:17:15.453725465Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:15.459008 containerd[1540]: time="2025-01-29T11:17:15.458993032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:15.459327 containerd[1540]: time="2025-01-29T11:17:15.459233959Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 1.218061278s" Jan 29 11:17:15.459327 containerd[1540]: time="2025-01-29T11:17:15.459251992Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 29 11:17:15.459632 containerd[1540]: time="2025-01-29T11:17:15.459561153Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 29 11:17:16.168509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1190481873.mount: Deactivated successfully. 
Jan 29 11:17:17.459175 containerd[1540]: time="2025-01-29T11:17:17.458352389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:17.459567 containerd[1540]: time="2025-01-29T11:17:17.459544310Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 29 11:17:17.460136 containerd[1540]: time="2025-01-29T11:17:17.460120578Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:17.462773 containerd[1540]: time="2025-01-29T11:17:17.462755267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:17.466142 containerd[1540]: time="2025-01-29T11:17:17.466114991Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.006536316s" Jan 29 11:17:17.466192 containerd[1540]: time="2025-01-29T11:17:17.466141903Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 29 11:17:17.466763 containerd[1540]: time="2025-01-29T11:17:17.466582705Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:17:18.145922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3091623601.mount: Deactivated successfully. 
Jan 29 11:17:18.207394 containerd[1540]: time="2025-01-29T11:17:18.207334093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:18.219150 containerd[1540]: time="2025-01-29T11:17:18.219112751Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 29 11:17:18.230577 containerd[1540]: time="2025-01-29T11:17:18.230544141Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:18.239422 containerd[1540]: time="2025-01-29T11:17:18.239383380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:18.240124 containerd[1540]: time="2025-01-29T11:17:18.239872814Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 773.270874ms" Jan 29 11:17:18.240124 containerd[1540]: time="2025-01-29T11:17:18.239897628Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 11:17:18.240597 containerd[1540]: time="2025-01-29T11:17:18.240577687Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 29 11:17:18.815227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2242209924.mount: Deactivated successfully. Jan 29 11:17:21.466312 containerd[1540]: time="2025-01-29T11:17:21.466277479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:21.467212 containerd[1540]: time="2025-01-29T11:17:21.467181561Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Jan 29 11:17:21.467810 containerd[1540]: time="2025-01-29T11:17:21.467541726Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:21.470915 containerd[1540]: time="2025-01-29T11:17:21.470889495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:21.471571 containerd[1540]: time="2025-01-29T11:17:21.471464285Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.230867088s" Jan 29 11:17:21.471571 containerd[1540]: time="2025-01-29T11:17:21.471481544Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 29 11:17:22.871828 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Jan 29 11:17:22.879536 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:17:23.054358 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:17:23.054498 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:17:23.054629 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:17:23.059566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:17:23.077793 systemd[1]: Reloading requested from client PID 2300 ('systemctl') (unit session-9.scope)... Jan 29 11:17:23.077803 systemd[1]: Reloading... Jan 29 11:17:23.126385 zram_generator::config[2337]: No configuration found. Jan 29 11:17:23.191591 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 29 11:17:23.207422 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:17:23.251904 systemd[1]: Reloading finished in 173 ms. Jan 29 11:17:23.276633 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:17:23.276746 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:17:23.276923 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:17:23.281568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:17:23.615772 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:17:23.618633 (kubelet)[2405]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:17:23.655302 kubelet[2405]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:17:23.655302 kubelet[2405]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 11:17:23.655302 kubelet[2405]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:17:23.655530 kubelet[2405]: I0129 11:17:23.655347 2405 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:17:23.942698 kubelet[2405]: I0129 11:17:23.942671 2405 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 11:17:23.942698 kubelet[2405]: I0129 11:17:23.942690 2405 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:17:23.942875 kubelet[2405]: I0129 11:17:23.942860 2405 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 11:17:24.054471 kubelet[2405]: I0129 11:17:24.054263 2405 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:17:24.055611 kubelet[2405]: E0129 11:17:24.055585 2405 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:17:24.064363 kubelet[2405]: E0129 11:17:24.064316 2405 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:17:24.064363 kubelet[2405]: I0129 11:17:24.064360 2405 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:17:24.068443 kubelet[2405]: I0129 11:17:24.068429 2405 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:17:24.071111 kubelet[2405]: I0129 11:17:24.071086 2405 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:17:24.071215 kubelet[2405]: I0129 11:17:24.071107 2405 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:17:24.072681 kubelet[2405]: I0129 11:17:24.072664 2405 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:17:24.072681 kubelet[2405]: I0129 11:17:24.072680 2405 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 11:17:24.072769 kubelet[2405]: I0129 11:17:24.072757 2405 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:17:24.075930 kubelet[2405]: I0129 11:17:24.075917 2405 kubelet.go:446] "Attempting to sync node with API server" Jan 29 11:17:24.075963 kubelet[2405]: I0129 11:17:24.075937 2405 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:17:24.075963 kubelet[2405]: I0129 11:17:24.075950 2405 kubelet.go:352] "Adding apiserver pod source" Jan 29 11:17:24.075963 kubelet[2405]: I0129 11:17:24.075957 2405 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:17:24.080646 kubelet[2405]: W0129 11:17:24.080417 2405 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 29 11:17:24.080646 kubelet[2405]: E0129 11:17:24.080452 2405 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:17:24.080646 kubelet[2405]: W0129 
11:17:24.080611 2405 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 29 11:17:24.080646 kubelet[2405]: E0129 11:17:24.080633 2405 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:17:24.080808 kubelet[2405]: I0129 11:17:24.080798 2405 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:17:24.083222 kubelet[2405]: I0129 11:17:24.083107 2405 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:17:24.093757 kubelet[2405]: W0129 11:17:24.093562 2405 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:17:24.094256 kubelet[2405]: I0129 11:17:24.094248 2405 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 11:17:24.094308 kubelet[2405]: I0129 11:17:24.094303 2405 server.go:1287] "Started kubelet" Jan 29 11:17:24.129549 kubelet[2405]: I0129 11:17:24.129509 2405 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:17:24.129809 kubelet[2405]: I0129 11:17:24.129798 2405 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:17:24.131800 kubelet[2405]: I0129 11:17:24.131779 2405 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:17:24.139408 kubelet[2405]: I0129 11:17:24.137948 2405 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:17:24.140003 kubelet[2405]: E0129 11:17:24.135817 2405 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.108:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f25b5d5c86e84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:17:24.094291588 +0000 UTC m=+0.473693138,LastTimestamp:2025-01-29 11:17:24.094291588 +0000 UTC m=+0.473693138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:17:24.140082 kubelet[2405]: I0129 11:17:24.140063 2405 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:17:24.140357 kubelet[2405]: I0129 11:17:24.140348 2405 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 11:17:24.140556 kubelet[2405]: E0129 11:17:24.140544 2405 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:17:24.143514 kubelet[2405]: I0129 11:17:24.142993 2405 server.go:490] "Adding debug handlers to 
kubelet server" Jan 29 11:17:24.143593 kubelet[2405]: I0129 11:17:24.143584 2405 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:17:24.143663 kubelet[2405]: I0129 11:17:24.143656 2405 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:17:24.143807 kubelet[2405]: E0129 11:17:24.143791 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="200ms" Jan 29 11:17:24.144404 kubelet[2405]: W0129 11:17:24.144242 2405 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 29 11:17:24.144404 kubelet[2405]: E0129 11:17:24.144265 2405 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:17:24.147707 kubelet[2405]: I0129 11:17:24.147640 2405 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:17:24.147707 kubelet[2405]: I0129 11:17:24.147704 2405 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:17:24.152427 kubelet[2405]: I0129 11:17:24.151695 2405 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:17:24.154582 kubelet[2405]: I0129 11:17:24.154570 2405 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:17:24.154977 kubelet[2405]: I0129 11:17:24.154970 2405 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 11:17:24.155133 kubelet[2405]: I0129 11:17:24.155124 2405 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 29 11:17:24.155172 kubelet[2405]: I0129 11:17:24.155167 2405 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 11:17:24.156638 kubelet[2405]: E0129 11:17:24.155877 2405 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:17:24.156638 kubelet[2405]: W0129 11:17:24.156222 2405 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 29 11:17:24.156638 kubelet[2405]: E0129 11:17:24.156257 2405 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:17:24.156638 kubelet[2405]: I0129 11:17:24.156442 2405 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:17:24.178513 kubelet[2405]: I0129 11:17:24.178494 2405 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 11:17:24.178513 kubelet[2405]: I0129 11:17:24.178504 2405 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 11:17:24.178513 kubelet[2405]: I0129 11:17:24.178517 2405 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:17:24.211757 kubelet[2405]: I0129 11:17:24.203892 2405 policy_none.go:49] "None policy: Start" Jan 29 11:17:24.211757 kubelet[2405]: I0129 11:17:24.203914 2405 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 11:17:24.211757 kubelet[2405]: I0129 11:17:24.203924 2405 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:17:24.223465 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:17:24.231332 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:17:24.234615 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:17:24.241500 kubelet[2405]: E0129 11:17:24.241480 2405 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:17:24.244154 kubelet[2405]: I0129 11:17:24.244135 2405 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:17:24.244273 kubelet[2405]: I0129 11:17:24.244258 2405 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:17:24.244305 kubelet[2405]: I0129 11:17:24.244271 2405 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:17:24.245105 kubelet[2405]: I0129 11:17:24.244971 2405 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:17:24.245602 kubelet[2405]: E0129 11:17:24.245539 2405 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 11:17:24.245602 kubelet[2405]: E0129 11:17:24.245571 2405 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:17:24.262550 systemd[1]: Created slice kubepods-burstable-pod4a4eb332b71946a2f1567be9b07cb89c.slice - libcontainer container kubepods-burstable-pod4a4eb332b71946a2f1567be9b07cb89c.slice. Jan 29 11:17:24.272892 kubelet[2405]: E0129 11:17:24.272869 2405 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:17:24.276267 systemd[1]: Created slice kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice - libcontainer container kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice. Jan 29 11:17:24.278042 kubelet[2405]: E0129 11:17:24.278030 2405 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:17:24.280428 systemd[1]: Created slice kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice - libcontainer container kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice. Jan 29 11:17:24.281517 kubelet[2405]: E0129 11:17:24.281506 2405 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:17:24.344129 kubelet[2405]: E0129 11:17:24.344105 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="400ms" Jan 29 11:17:24.345237 kubelet[2405]: I0129 11:17:24.345220 2405 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:17:24.345541 kubelet[2405]: E0129 11:17:24.345515 2405 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Jan 29 11:17:24.444797 kubelet[2405]: I0129 11:17:24.444616 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a4eb332b71946a2f1567be9b07cb89c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4a4eb332b71946a2f1567be9b07cb89c\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:17:24.444797 kubelet[2405]: I0129 11:17:24.444642 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a4eb332b71946a2f1567be9b07cb89c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4a4eb332b71946a2f1567be9b07cb89c\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:17:24.444915 kubelet[2405]: I0129 11:17:24.444654 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:17:24.444915 kubelet[2405]: I0129 11:17:24.444830 2405 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:17:24.444915 kubelet[2405]: I0129 11:17:24.444840 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:17:24.444915 kubelet[2405]: I0129 11:17:24.444849 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a4eb332b71946a2f1567be9b07cb89c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4a4eb332b71946a2f1567be9b07cb89c\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:17:24.444915 kubelet[2405]: I0129 11:17:24.444857 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:17:24.444991 kubelet[2405]: I0129 11:17:24.444866 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:17:24.444991 kubelet[2405]: I0129 11:17:24.444874 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:17:24.533109 kubelet[2405]: E0129 11:17:24.532990 2405 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.108:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f25b5d5c86e84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:17:24.094291588 +0000 UTC m=+0.473693138,LastTimestamp:2025-01-29 11:17:24.094291588 +0000 UTC m=+0.473693138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:17:24.546528 kubelet[2405]: I0129 11:17:24.546416 2405 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:17:24.557670 kubelet[2405]: E0129 11:17:24.546649 2405 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: 
connect: connection refused" node="localhost" Jan 29 11:17:24.574409 containerd[1540]: time="2025-01-29T11:17:24.574361198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4a4eb332b71946a2f1567be9b07cb89c,Namespace:kube-system,Attempt:0,}" Jan 29 11:17:24.578859 containerd[1540]: time="2025-01-29T11:17:24.578730620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,}" Jan 29 11:17:24.582420 containerd[1540]: time="2025-01-29T11:17:24.582405877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,}" Jan 29 11:17:24.745940 kubelet[2405]: E0129 11:17:24.745901 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="800ms" Jan 29 11:17:24.947895 kubelet[2405]: I0129 11:17:24.947837 2405 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:17:24.948084 kubelet[2405]: E0129 11:17:24.948062 2405 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Jan 29 11:17:25.184753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2382449369.mount: Deactivated successfully. Jan 29 11:17:25.218452 kubelet[2405]: W0129 11:17:25.218387 2405 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 29 11:17:25.218452 kubelet[2405]: E0129 11:17:25.218420 2405 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:17:25.230193 containerd[1540]: time="2025-01-29T11:17:25.230138102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:17:25.260610 containerd[1540]: time="2025-01-29T11:17:25.260559013Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:17:25.268281 containerd[1540]: time="2025-01-29T11:17:25.268253924Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:17:25.274613 containerd[1540]: time="2025-01-29T11:17:25.274406146Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:17:25.284686 containerd[1540]: time="2025-01-29T11:17:25.284590267Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 
11:17:25.286495 containerd[1540]: time="2025-01-29T11:17:25.286446827Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:17:25.287428 containerd[1540]: time="2025-01-29T11:17:25.287394463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:17:25.287428 containerd[1540]: time="2025-01-29T11:17:25.287387574Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:17:25.288300 containerd[1540]: time="2025-01-29T11:17:25.287938767Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 709.1488ms" Jan 29 11:17:25.293923 containerd[1540]: time="2025-01-29T11:17:25.293707523Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 719.260148ms" Jan 29 11:17:25.294079 containerd[1540]: time="2025-01-29T11:17:25.294059173Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 711.548275ms" Jan 29 11:17:25.368822 kubelet[2405]: W0129 11:17:25.368738 2405 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 29 11:17:25.368822 kubelet[2405]: E0129 11:17:25.368796 2405 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:17:25.424921 kubelet[2405]: W0129 11:17:25.424849 2405 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 29 11:17:25.424921 kubelet[2405]: E0129 11:17:25.424905 2405 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:17:25.516415 containerd[1540]: time="2025-01-29T11:17:25.515532248Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:17:25.516415 containerd[1540]: time="2025-01-29T11:17:25.515657974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:17:25.516415 containerd[1540]: time="2025-01-29T11:17:25.515668801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:25.516668 containerd[1540]: time="2025-01-29T11:17:25.516508010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:25.516964 containerd[1540]: time="2025-01-29T11:17:25.512067323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:17:25.517006 containerd[1540]: time="2025-01-29T11:17:25.516954268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:17:25.517061 containerd[1540]: time="2025-01-29T11:17:25.517048499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:25.517165 containerd[1540]: time="2025-01-29T11:17:25.517145991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:25.517979 containerd[1540]: time="2025-01-29T11:17:25.517914426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:17:25.517979 containerd[1540]: time="2025-01-29T11:17:25.517950093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:17:25.517979 containerd[1540]: time="2025-01-29T11:17:25.517958443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:25.518129 containerd[1540]: time="2025-01-29T11:17:25.518010216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:25.525580 kubelet[2405]: W0129 11:17:25.525533 2405 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 29 11:17:25.525580 kubelet[2405]: E0129 11:17:25.525560 2405 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:17:25.536532 systemd[1]: Started cri-containerd-065fa19f50ed32ce6c2b7b94c166dd6cad781a0401d688a500d79ea2653829e7.scope - libcontainer container 065fa19f50ed32ce6c2b7b94c166dd6cad781a0401d688a500d79ea2653829e7. 
Jan 29 11:17:25.540283 systemd[1]: Started cri-containerd-55482c3b8e304bd082b176c250618d00e289e27f2c31e153a8e8820a15357507.scope - libcontainer container 55482c3b8e304bd082b176c250618d00e289e27f2c31e153a8e8820a15357507. Jan 29 11:17:25.541971 systemd[1]: Started cri-containerd-e6f4245ae712dad1559bb234dc6f8c388a3aad323d07be5b58e67e68eb7cacb0.scope - libcontainer container e6f4245ae712dad1559bb234dc6f8c388a3aad323d07be5b58e67e68eb7cacb0. Jan 29 11:17:25.548849 kubelet[2405]: E0129 11:17:25.548710 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="1.6s" Jan 29 11:17:25.577426 containerd[1540]: time="2025-01-29T11:17:25.577192085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"065fa19f50ed32ce6c2b7b94c166dd6cad781a0401d688a500d79ea2653829e7\"" Jan 29 11:17:25.591109 containerd[1540]: time="2025-01-29T11:17:25.590748777Z" level=info msg="CreateContainer within sandbox \"065fa19f50ed32ce6c2b7b94c166dd6cad781a0401d688a500d79ea2653829e7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:17:25.609015 containerd[1540]: time="2025-01-29T11:17:25.592106009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4a4eb332b71946a2f1567be9b07cb89c,Namespace:kube-system,Attempt:0,} returns sandbox id \"55482c3b8e304bd082b176c250618d00e289e27f2c31e153a8e8820a15357507\"" Jan 29 11:17:25.609015 containerd[1540]: time="2025-01-29T11:17:25.594475179Z" level=info msg="CreateContainer within sandbox \"55482c3b8e304bd082b176c250618d00e289e27f2c31e153a8e8820a15357507\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:17:25.609015 containerd[1540]: time="2025-01-29T11:17:25.594680344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6f4245ae712dad1559bb234dc6f8c388a3aad323d07be5b58e67e68eb7cacb0\"" Jan 29 11:17:25.609015 containerd[1540]: time="2025-01-29T11:17:25.596184390Z" level=info msg="CreateContainer within sandbox \"e6f4245ae712dad1559bb234dc6f8c388a3aad323d07be5b58e67e68eb7cacb0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:17:25.658939 containerd[1540]: time="2025-01-29T11:17:25.658772741Z" level=info msg="CreateContainer within sandbox \"55482c3b8e304bd082b176c250618d00e289e27f2c31e153a8e8820a15357507\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5f3093f07590b462353cf8f86ccf3e448529f57f691712efbee8c5618a4a01ad\"" Jan 29 11:17:25.659251 containerd[1540]: time="2025-01-29T11:17:25.659229915Z" level=info msg="CreateContainer within sandbox \"e6f4245ae712dad1559bb234dc6f8c388a3aad323d07be5b58e67e68eb7cacb0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"989baa3337fdc8a5c1f2b449f0a7bfca79a12489a8c061c96e5354199f07f20a\"" Jan 29 11:17:25.659694 containerd[1540]: time="2025-01-29T11:17:25.659681727Z" level=info msg="CreateContainer within sandbox \"065fa19f50ed32ce6c2b7b94c166dd6cad781a0401d688a500d79ea2653829e7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"790f9b2e89c087f099a179f35265ed76c95e2c8020270357efc42507282d3223\"" Jan 29 11:17:25.660418 containerd[1540]: time="2025-01-29T11:17:25.659780920Z" level=info msg="StartContainer for \"5f3093f07590b462353cf8f86ccf3e448529f57f691712efbee8c5618a4a01ad\"" Jan 29 11:17:25.660418 containerd[1540]: time="2025-01-29T11:17:25.659808180Z" level=info msg="StartContainer for \"989baa3337fdc8a5c1f2b449f0a7bfca79a12489a8c061c96e5354199f07f20a\"" Jan 29 11:17:25.665361 containerd[1540]: time="2025-01-29T11:17:25.665341784Z" level=info msg="StartContainer for \"790f9b2e89c087f099a179f35265ed76c95e2c8020270357efc42507282d3223\"" Jan 29 11:17:25.683659 systemd[1]: Started cri-containerd-5f3093f07590b462353cf8f86ccf3e448529f57f691712efbee8c5618a4a01ad.scope - libcontainer container 5f3093f07590b462353cf8f86ccf3e448529f57f691712efbee8c5618a4a01ad. Jan 29 11:17:25.686388 systemd[1]: Started cri-containerd-989baa3337fdc8a5c1f2b449f0a7bfca79a12489a8c061c96e5354199f07f20a.scope - libcontainer container 989baa3337fdc8a5c1f2b449f0a7bfca79a12489a8c061c96e5354199f07f20a. Jan 29 11:17:25.696499 systemd[1]: Started cri-containerd-790f9b2e89c087f099a179f35265ed76c95e2c8020270357efc42507282d3223.scope - libcontainer container 790f9b2e89c087f099a179f35265ed76c95e2c8020270357efc42507282d3223. Jan 29 11:17:25.744121 containerd[1540]: time="2025-01-29T11:17:25.744098635Z" level=info msg="StartContainer for \"5f3093f07590b462353cf8f86ccf3e448529f57f691712efbee8c5618a4a01ad\" returns successfully" Jan 29 11:17:25.749253 kubelet[2405]: I0129 11:17:25.748973 2405 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:17:25.749253 kubelet[2405]: E0129 11:17:25.749176 2405 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Jan 29 11:17:25.755218 containerd[1540]: time="2025-01-29T11:17:25.755181846Z" level=info msg="StartContainer for \"790f9b2e89c087f099a179f35265ed76c95e2c8020270357efc42507282d3223\" returns successfully" Jan 29 11:17:25.756516 containerd[1540]: time="2025-01-29T11:17:25.755337381Z" level=info msg="StartContainer for \"989baa3337fdc8a5c1f2b449f0a7bfca79a12489a8c061c96e5354199f07f20a\" returns successfully" Jan 29 11:17:26.171670 kubelet[2405]: E0129 11:17:26.171650 2405 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:17:26.174847 kubelet[2405]: E0129 11:17:26.174828 2405 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:17:26.174971 kubelet[2405]: E0129 11:17:26.174961 2405 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:17:26.176222 kubelet[2405]: E0129 11:17:26.176212 2405 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:17:27.177843 kubelet[2405]: E0129 11:17:27.177822 2405 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:17:27.178094 kubelet[2405]: E0129 11:17:27.178031 2405 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:17:27.351016 kubelet[2405]: I0129 11:17:27.350958 2405 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:17:27.767551 kubelet[2405]: E0129 11:17:27.767518 2405 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 11:17:27.851142 kubelet[2405]: E0129 11:17:27.851056 2405 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:17:27.890188 kubelet[2405]: I0129 11:17:27.890151 2405 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 29 11:17:27.941599 kubelet[2405]: I0129 11:17:27.941414 2405 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 11:17:27.963365 kubelet[2405]: E0129 11:17:27.963235 2405 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 29 11:17:27.963365 kubelet[2405]: I0129 11:17:27.963254 2405 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:17:27.964514 kubelet[2405]: E0129 11:17:27.964450 2405 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:17:27.964514 kubelet[2405]: I0129 11:17:27.964469 2405 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 11:17:27.965461 kubelet[2405]: E0129 11:17:27.965439 2405 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 29 11:17:28.079236 kubelet[2405]: I0129 11:17:28.079097 2405 apiserver.go:52] "Watching apiserver" Jan 29 11:17:28.143730 kubelet[2405]: I0129 11:17:28.143706 2405 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:17:30.353852 systemd[1]: Reloading requested from client PID 2676 ('systemctl') (unit session-9.scope)... Jan 29 11:17:30.353866 systemd[1]: Reloading... Jan 29 11:17:30.422401 zram_generator::config[2717]: No configuration found. Jan 29 11:17:30.483407 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 29 11:17:30.499335 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:17:30.553513 systemd[1]: Reloading finished in 199 ms. Jan 29 11:17:30.575242 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:17:30.586043 systemd[1]: kubelet.service: Deactivated successfully. 
Jan 29 11:17:30.586174 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:17:30.590573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:17:30.870200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:17:30.885699 (kubelet)[2781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:17:30.927809 kubelet[2781]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:17:30.927809 kubelet[2781]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 11:17:30.927809 kubelet[2781]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:17:30.928036 kubelet[2781]: I0129 11:17:30.927847 2781 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:17:30.932944 kubelet[2781]: I0129 11:17:30.932922 2781 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 11:17:30.932944 kubelet[2781]: I0129 11:17:30.932938 2781 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:17:30.933090 kubelet[2781]: I0129 11:17:30.933079 2781 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 11:17:30.933801 kubelet[2781]: I0129 11:17:30.933788 2781 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:17:30.935156 kubelet[2781]: I0129 11:17:30.935145 2781 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:17:30.937856 kubelet[2781]: E0129 11:17:30.937835 2781 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:17:30.937856 kubelet[2781]: I0129 11:17:30.937852 2781 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:17:30.939530 kubelet[2781]: I0129 11:17:30.939519 2781 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:17:30.939656 kubelet[2781]: I0129 11:17:30.939637 2781 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:17:30.939753 kubelet[2781]: I0129 11:17:30.939658 2781 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:17:30.939816 kubelet[2781]: I0129 11:17:30.939756 2781 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:17:30.939816 kubelet[2781]: I0129 11:17:30.939762 2781 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 11:17:30.939816 kubelet[2781]: I0129 11:17:30.939785 2781 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:17:30.939905 kubelet[2781]: I0129 11:17:30.939895 2781 kubelet.go:446] "Attempting to sync node with API server" Jan 29 11:17:30.939927 kubelet[2781]: I0129 11:17:30.939906 2781 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:17:30.939927 kubelet[2781]: I0129 11:17:30.939918 2781 kubelet.go:352] "Adding apiserver pod source" Jan 29 11:17:30.939927 kubelet[2781]: I0129 11:17:30.939924 2781 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:17:30.954276 kubelet[2781]: I0129 11:17:30.953591 2781 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:17:30.954276 kubelet[2781]: I0129 11:17:30.954232 2781 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:17:30.955723 kubelet[2781]: I0129 11:17:30.955670 2781 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 11:17:30.955723 kubelet[2781]: I0129 11:17:30.955697 2781 server.go:1287] "Started kubelet" Jan 29 11:17:30.955790 kubelet[2781]: I0129 11:17:30.955757 2781 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:17:30.956494 kubelet[2781]: I0129 11:17:30.956481 2781 server.go:490] "Adding 
debug handlers to kubelet server" Jan 29 11:17:30.957485 kubelet[2781]: I0129 11:17:30.957451 2781 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:17:30.957584 kubelet[2781]: I0129 11:17:30.957573 2781 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:17:30.959030 kubelet[2781]: I0129 11:17:30.958612 2781 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:17:30.959030 kubelet[2781]: I0129 11:17:30.958688 2781 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 11:17:30.959030 kubelet[2781]: I0129 11:17:30.958728 2781 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:17:30.961096 kubelet[2781]: I0129 11:17:30.960981 2781 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:17:30.961096 kubelet[2781]: I0129 11:17:30.961053 2781 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:17:30.965202 kubelet[2781]: I0129 11:17:30.965183 2781 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:17:30.965335 kubelet[2781]: I0129 11:17:30.965245 2781 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:17:30.966021 kubelet[2781]: E0129 11:17:30.965960 2781 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:17:30.967260 kubelet[2781]: I0129 11:17:30.967244 2781 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:17:30.967411 kubelet[2781]: I0129 11:17:30.967348 2781 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:17:30.968209 kubelet[2781]: I0129 11:17:30.968076 2781 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:17:30.968209 kubelet[2781]: I0129 11:17:30.968092 2781 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 11:17:30.968209 kubelet[2781]: I0129 11:17:30.968103 2781 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
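The nodeConfig blob that container_manager_linux.go printed a little above is ordinary JSON once it is lifted out of the journal line, so the hard-eviction thresholds and cgroup settings it encodes can be pulled out mechanically. The sketch below embeds a trimmed copy of the fields shown in the log (Python standard library; the trimming is the only liberty taken):

import json

# Trimmed copy of the nodeConfig JSON from the container_manager_linux.go entry above.
node_config = json.loads("""
{
  "NodeName": "localhost",
  "CgroupsPerQOS": true,
  "CgroupRoot": "/",
  "CgroupDriver": "systemd",
  "KubeletRootDir": "/var/lib/kubelet",
  "HardEvictionThresholds": [
    {"Signal": "nodefs.available",   "Operator": "LessThan", "Value": {"Quantity": null,    "Percentage": 0.10}},
    {"Signal": "nodefs.inodesFree",  "Operator": "LessThan", "Value": {"Quantity": null,    "Percentage": 0.05}},
    {"Signal": "imagefs.available",  "Operator": "LessThan", "Value": {"Quantity": null,    "Percentage": 0.15}},
    {"Signal": "imagefs.inodesFree", "Operator": "LessThan", "Value": {"Quantity": null,    "Percentage": 0.05}},
    {"Signal": "memory.available",   "Operator": "LessThan", "Value": {"Quantity": "100Mi", "Percentage": 0}}
  ]
}
""")

for threshold in node_config["HardEvictionThresholds"]:
    value = threshold["Value"]["Quantity"] or f'{threshold["Value"]["Percentage"]:.0%}'
    print(f'{threshold["Signal"]}: evict when {threshold["Operator"]} {value}')

Running it lists the same five hard-eviction signals the eviction manager enforces, e.g. "memory.available: evict when LessThan 100Mi" and "nodefs.available: evict when LessThan 10%".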
Jan 29 11:17:30.968209 kubelet[2781]: I0129 11:17:30.968106 2781 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 11:17:30.977741 kubelet[2781]: E0129 11:17:30.977591 2781 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:17:31.029748 kubelet[2781]: I0129 11:17:31.029734 2781 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 11:17:31.030050 kubelet[2781]: I0129 11:17:31.029875 2781 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 11:17:31.030050 kubelet[2781]: I0129 11:17:31.029889 2781 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:17:31.039001 kubelet[2781]: I0129 11:17:31.038963 2781 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:17:31.039001 kubelet[2781]: I0129 11:17:31.038974 2781 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:17:31.039001 kubelet[2781]: I0129 11:17:31.038987 2781 policy_none.go:49] "None policy: Start" Jan 29 11:17:31.039001 kubelet[2781]: I0129 11:17:31.038994 2781 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 11:17:31.039001 kubelet[2781]: I0129 11:17:31.039001 2781 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:17:31.039116 kubelet[2781]: I0129 11:17:31.039069 2781 state_mem.go:75] "Updated machine memory state" Jan 29 11:17:31.041174 kubelet[2781]: I0129 11:17:31.041161 2781 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:17:31.041387 kubelet[2781]: I0129 11:17:31.041245 2781 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:17:31.041387 kubelet[2781]: I0129 11:17:31.041253 2781 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:17:31.041387 kubelet[2781]: I0129 11:17:31.041353 2781 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:17:31.043048 kubelet[2781]: E0129 11:17:31.041895 2781 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 11:17:31.078571 kubelet[2781]: I0129 11:17:31.078542 2781 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 11:17:31.079105 kubelet[2781]: I0129 11:17:31.078782 2781 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 11:17:31.080322 kubelet[2781]: I0129 11:17:31.079578 2781 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:17:31.144039 kubelet[2781]: I0129 11:17:31.144016 2781 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:17:31.148782 kubelet[2781]: I0129 11:17:31.148726 2781 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Jan 29 11:17:31.149592 kubelet[2781]: I0129 11:17:31.148799 2781 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 29 11:17:31.261791 kubelet[2781]: I0129 11:17:31.261771 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:17:31.262068 kubelet[2781]: I0129 11:17:31.261975 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:17:31.262068 kubelet[2781]: I0129 11:17:31.261993 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:17:31.262068 kubelet[2781]: I0129 11:17:31.262005 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a4eb332b71946a2f1567be9b07cb89c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4a4eb332b71946a2f1567be9b07cb89c\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:17:31.262068 kubelet[2781]: I0129 11:17:31.262014 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a4eb332b71946a2f1567be9b07cb89c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4a4eb332b71946a2f1567be9b07cb89c\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:17:31.262068 kubelet[2781]: I0129 11:17:31.262024 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a4eb332b71946a2f1567be9b07cb89c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4a4eb332b71946a2f1567be9b07cb89c\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:17:31.262167 kubelet[2781]: I0129 11:17:31.262033 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:17:31.262167 kubelet[2781]: I0129 11:17:31.262041 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:17:31.262167 kubelet[2781]: I0129 11:17:31.262053 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:17:31.942150 kubelet[2781]: I0129 11:17:31.942097 2781 apiserver.go:52] "Watching apiserver" Jan 29 11:17:31.961216 kubelet[2781]: I0129 11:17:31.961173 2781 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:17:31.998649 kubelet[2781]: I0129 11:17:31.997185 2781 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 11:17:32.014426 kubelet[2781]: E0129 11:17:32.013562 2781 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:17:32.068584 kubelet[2781]: I0129 11:17:32.068547 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.068529868 podStartE2EDuration="1.068529868s" podCreationTimestamp="2025-01-29 11:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:17:32.050310345 +0000 UTC m=+1.151700996" watchObservedRunningTime="2025-01-29 11:17:32.068529868 +0000 UTC m=+1.169920520" Jan 29 11:17:32.082006 kubelet[2781]: I0129 11:17:32.081971 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.0819597970000001 podStartE2EDuration="1.081959797s" podCreationTimestamp="2025-01-29 11:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:17:32.070617672 +0000 UTC m=+1.172008316" watchObservedRunningTime="2025-01-29 11:17:32.081959797 +0000 UTC m=+1.183350443" Jan 29 11:17:32.097883 kubelet[2781]: I0129 11:17:32.097832 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.097820818 podStartE2EDuration="1.097820818s" podCreationTimestamp="2025-01-29 11:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:17:32.084446714 +0000 UTC m=+1.185837366" watchObservedRunningTime="2025-01-29 11:17:32.097820818 +0000 UTC m=+1.199211461" Jan 29 11:17:35.196788 sudo[1850]: pam_unix(sudo:session): session closed for user root Jan 29 11:17:35.198468 sshd[1849]: Connection closed by 147.75.109.163 port 46376 Jan 29 11:17:35.198866 
sshd-session[1847]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:35.201017 systemd[1]: sshd@6-139.178.70.108:22-147.75.109.163:46376.service: Deactivated successfully. Jan 29 11:17:35.202240 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:17:35.202741 systemd[1]: session-9.scope: Consumed 2.552s CPU time, 135.5M memory peak, 0B memory swap peak. Jan 29 11:17:35.203270 systemd-logind[1521]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:17:35.204002 systemd-logind[1521]: Removed session 9. Jan 29 11:17:36.854115 kubelet[2781]: I0129 11:17:36.854040 2781 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:17:36.854935 containerd[1540]: time="2025-01-29T11:17:36.854895481Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:17:36.855338 kubelet[2781]: I0129 11:17:36.855083 2781 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:17:37.546945 systemd[1]: Created slice kubepods-besteffort-pod9d9afd9f_9d65_4308_b3b4_7dd291d5d5e9.slice - libcontainer container kubepods-besteffort-pod9d9afd9f_9d65_4308_b3b4_7dd291d5d5e9.slice. Jan 29 11:17:37.601523 kubelet[2781]: I0129 11:17:37.601442 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d9afd9f-9d65-4308-b3b4-7dd291d5d5e9-kube-proxy\") pod \"kube-proxy-hmdrx\" (UID: \"9d9afd9f-9d65-4308-b3b4-7dd291d5d5e9\") " pod="kube-system/kube-proxy-hmdrx" Jan 29 11:17:37.601523 kubelet[2781]: I0129 11:17:37.601470 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d9afd9f-9d65-4308-b3b4-7dd291d5d5e9-xtables-lock\") pod \"kube-proxy-hmdrx\" (UID: \"9d9afd9f-9d65-4308-b3b4-7dd291d5d5e9\") " pod="kube-system/kube-proxy-hmdrx" Jan 29 11:17:37.601523 kubelet[2781]: I0129 11:17:37.601482 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d9afd9f-9d65-4308-b3b4-7dd291d5d5e9-lib-modules\") pod \"kube-proxy-hmdrx\" (UID: \"9d9afd9f-9d65-4308-b3b4-7dd291d5d5e9\") " pod="kube-system/kube-proxy-hmdrx" Jan 29 11:17:37.601523 kubelet[2781]: I0129 11:17:37.601493 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m46md\" (UniqueName: \"kubernetes.io/projected/9d9afd9f-9d65-4308-b3b4-7dd291d5d5e9-kube-api-access-m46md\") pod \"kube-proxy-hmdrx\" (UID: \"9d9afd9f-9d65-4308-b3b4-7dd291d5d5e9\") " pod="kube-system/kube-proxy-hmdrx" Jan 29 11:17:37.853862 containerd[1540]: time="2025-01-29T11:17:37.853783314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hmdrx,Uid:9d9afd9f-9d65-4308-b3b4-7dd291d5d5e9,Namespace:kube-system,Attempt:0,}" Jan 29 11:17:37.899409 containerd[1540]: time="2025-01-29T11:17:37.899322863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:17:37.899906 containerd[1540]: time="2025-01-29T11:17:37.899472568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:17:37.899906 containerd[1540]: time="2025-01-29T11:17:37.899627432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:37.900028 containerd[1540]: time="2025-01-29T11:17:37.899804661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:37.913743 systemd[1]: run-containerd-runc-k8s.io-cfc70f705f13211e035139dd9ee31261c433b79e3abc67f6f4acae3bfc7d6197-runc.pG4soG.mount: Deactivated successfully. Jan 29 11:17:37.922502 systemd[1]: Started cri-containerd-cfc70f705f13211e035139dd9ee31261c433b79e3abc67f6f4acae3bfc7d6197.scope - libcontainer container cfc70f705f13211e035139dd9ee31261c433b79e3abc67f6f4acae3bfc7d6197. Jan 29 11:17:37.937548 containerd[1540]: time="2025-01-29T11:17:37.937493909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hmdrx,Uid:9d9afd9f-9d65-4308-b3b4-7dd291d5d5e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfc70f705f13211e035139dd9ee31261c433b79e3abc67f6f4acae3bfc7d6197\"" Jan 29 11:17:37.940025 containerd[1540]: time="2025-01-29T11:17:37.939911762Z" level=info msg="CreateContainer within sandbox \"cfc70f705f13211e035139dd9ee31261c433b79e3abc67f6f4acae3bfc7d6197\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:17:37.982698 containerd[1540]: time="2025-01-29T11:17:37.982626104Z" level=info msg="CreateContainer within sandbox \"cfc70f705f13211e035139dd9ee31261c433b79e3abc67f6f4acae3bfc7d6197\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a2b0205f76cc24c6ac5a622a777e57447cee8e82dc857accaaf45cd5562543ba\"" Jan 29 11:17:37.983402 containerd[1540]: time="2025-01-29T11:17:37.983061341Z" level=info msg="StartContainer for \"a2b0205f76cc24c6ac5a622a777e57447cee8e82dc857accaaf45cd5562543ba\"" Jan 29 11:17:38.003310 systemd[1]: Created slice kubepods-besteffort-pod6accc164_1bc5_47c6_9c69_dfff7857eebd.slice - libcontainer container kubepods-besteffort-pod6accc164_1bc5_47c6_9c69_dfff7857eebd.slice. Jan 29 11:17:38.004499 kubelet[2781]: I0129 11:17:38.004486 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6accc164-1bc5-47c6-9c69-dfff7857eebd-var-lib-calico\") pod \"tigera-operator-7d68577dc5-5qqlw\" (UID: \"6accc164-1bc5-47c6-9c69-dfff7857eebd\") " pod="tigera-operator/tigera-operator-7d68577dc5-5qqlw" Jan 29 11:17:38.004862 kubelet[2781]: I0129 11:17:38.004700 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9wjb\" (UniqueName: \"kubernetes.io/projected/6accc164-1bc5-47c6-9c69-dfff7857eebd-kube-api-access-n9wjb\") pod \"tigera-operator-7d68577dc5-5qqlw\" (UID: \"6accc164-1bc5-47c6-9c69-dfff7857eebd\") " pod="tigera-operator/tigera-operator-7d68577dc5-5qqlw" Jan 29 11:17:38.015541 systemd[1]: Started cri-containerd-a2b0205f76cc24c6ac5a622a777e57447cee8e82dc857accaaf45cd5562543ba.scope - libcontainer container a2b0205f76cc24c6ac5a622a777e57447cee8e82dc857accaaf45cd5562543ba. 
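A few entries back the kubelet records the node's pod CIDR changing from empty to 192.168.0.0/24 and hands it to the runtime ("Updating runtime config through cri with podcidr"). A quick sanity check of what that range covers can be done with the standard ipaddress module; the in-range sample address below is made up for illustration, while the second address is the node/API endpoint seen earlier in the log:

import ipaddress

pod_cidr = ipaddress.ip_network("192.168.0.0/24")  # newPodCIDR reported by the kubelet

print(pod_cidr.num_addresses)                              # 256 addresses available to this node's pods
print(ipaddress.ip_address("192.168.0.17") in pod_cidr)    # True  (an illustrative pod IP)
print(ipaddress.ip_address("139.178.70.108") in pod_cidr)  # False (the node/API server address)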
Jan 29 11:17:38.037949 containerd[1540]: time="2025-01-29T11:17:38.037920134Z" level=info msg="StartContainer for \"a2b0205f76cc24c6ac5a622a777e57447cee8e82dc857accaaf45cd5562543ba\" returns successfully" Jan 29 11:17:38.311035 containerd[1540]: time="2025-01-29T11:17:38.311009786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-5qqlw,Uid:6accc164-1bc5-47c6-9c69-dfff7857eebd,Namespace:tigera-operator,Attempt:0,}" Jan 29 11:17:38.352645 containerd[1540]: time="2025-01-29T11:17:38.352491095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:17:38.352645 containerd[1540]: time="2025-01-29T11:17:38.352527868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:17:38.352645 containerd[1540]: time="2025-01-29T11:17:38.352543584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:38.352645 containerd[1540]: time="2025-01-29T11:17:38.352612336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:38.368609 systemd[1]: Started cri-containerd-936aeebcf3e5d86e1dd33ee8f0217bed66ce045d5856fbef9b6e7b89b22fa252.scope - libcontainer container 936aeebcf3e5d86e1dd33ee8f0217bed66ce045d5856fbef9b6e7b89b22fa252. Jan 29 11:17:38.395776 containerd[1540]: time="2025-01-29T11:17:38.395755179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-5qqlw,Uid:6accc164-1bc5-47c6-9c69-dfff7857eebd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"936aeebcf3e5d86e1dd33ee8f0217bed66ce045d5856fbef9b6e7b89b22fa252\"" Jan 29 11:17:38.396850 containerd[1540]: time="2025-01-29T11:17:38.396806961Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 11:17:39.034313 kubelet[2781]: I0129 11:17:39.034234 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hmdrx" podStartSLOduration=2.034208414 podStartE2EDuration="2.034208414s" podCreationTimestamp="2025-01-29 11:17:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:17:39.033748073 +0000 UTC m=+8.135138735" watchObservedRunningTime="2025-01-29 11:17:39.034208414 +0000 UTC m=+8.135599076" Jan 29 11:17:40.047056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3440165651.mount: Deactivated successfully. 
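The podStartSLOduration figures emitted by pod_startup_latency_tracker are simply the gap between podCreationTimestamp and the time the kubelet observed the pod running. Recomputing the kube-proxy-hmdrx value from the timestamps in the entry above (Python standard library; the nanosecond tail is truncated to microseconds because datetime does not carry nanoseconds):

from datetime import datetime, timezone

created  = datetime(2025, 1, 29, 11, 17, 37, 0,     tzinfo=timezone.utc)  # podCreationTimestamp
observed = datetime(2025, 1, 29, 11, 17, 39, 34208, tzinfo=timezone.utc)  # watchObservedRunningTime, truncated

print((observed - created).total_seconds())  # 2.034208 -- matches podStartSLOduration=2.034208414s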
Jan 29 11:17:40.406188 containerd[1540]: time="2025-01-29T11:17:40.406143704Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:40.406739 containerd[1540]: time="2025-01-29T11:17:40.406702012Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 29 11:17:40.407363 containerd[1540]: time="2025-01-29T11:17:40.407035285Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:40.409122 containerd[1540]: time="2025-01-29T11:17:40.409085926Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:40.409687 containerd[1540]: time="2025-01-29T11:17:40.409575259Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.012743731s" Jan 29 11:17:40.409687 containerd[1540]: time="2025-01-29T11:17:40.409595017Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 29 11:17:40.415085 containerd[1540]: time="2025-01-29T11:17:40.414961694Z" level=info msg="CreateContainer within sandbox \"936aeebcf3e5d86e1dd33ee8f0217bed66ce045d5856fbef9b6e7b89b22fa252\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 11:17:40.450800 containerd[1540]: time="2025-01-29T11:17:40.450734976Z" level=info msg="CreateContainer within sandbox \"936aeebcf3e5d86e1dd33ee8f0217bed66ce045d5856fbef9b6e7b89b22fa252\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f0c677ee835fc5c63cf62fd89a8d1e5db64a4334935e897ab826d5e6745b0230\"" Jan 29 11:17:40.451226 containerd[1540]: time="2025-01-29T11:17:40.451036550Z" level=info msg="StartContainer for \"f0c677ee835fc5c63cf62fd89a8d1e5db64a4334935e897ab826d5e6745b0230\"" Jan 29 11:17:40.477502 systemd[1]: Started cri-containerd-f0c677ee835fc5c63cf62fd89a8d1e5db64a4334935e897ab826d5e6745b0230.scope - libcontainer container f0c677ee835fc5c63cf62fd89a8d1e5db64a4334935e897ab826d5e6745b0230. 
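containerd reports both the size and the wall-clock duration for the tigera-operator pull above, which makes the effective transfer rate a one-line calculation; the figures below are copied from the log and the MB/s conversion is decimal:

size_bytes = 21_758_492   # size reported for quay.io/tigera/operator:v1.36.2
duration_s = 2.012743731  # pull duration reported by containerd

print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")  # ~10.8 MB/s effective pull rate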
Jan 29 11:17:40.496806 containerd[1540]: time="2025-01-29T11:17:40.496674098Z" level=info msg="StartContainer for \"f0c677ee835fc5c63cf62fd89a8d1e5db64a4334935e897ab826d5e6745b0230\" returns successfully" Jan 29 11:17:41.032568 kubelet[2781]: I0129 11:17:41.032268 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-5qqlw" podStartSLOduration=2.014439356 podStartE2EDuration="4.032255092s" podCreationTimestamp="2025-01-29 11:17:37 +0000 UTC" firstStartedPulling="2025-01-29 11:17:38.396477406 +0000 UTC m=+7.497868047" lastFinishedPulling="2025-01-29 11:17:40.414293139 +0000 UTC m=+9.515683783" observedRunningTime="2025-01-29 11:17:41.032227259 +0000 UTC m=+10.133617911" watchObservedRunningTime="2025-01-29 11:17:41.032255092 +0000 UTC m=+10.133645744" Jan 29 11:17:43.583702 systemd[1]: Created slice kubepods-besteffort-poda4869b33_1cec_4a79_ae4b_d1f4151f9af2.slice - libcontainer container kubepods-besteffort-poda4869b33_1cec_4a79_ae4b_d1f4151f9af2.slice. Jan 29 11:17:43.603289 systemd[1]: Created slice kubepods-besteffort-pode1036735_e788_40b9_8234_5a0f41b3103c.slice - libcontainer container kubepods-besteffort-pode1036735_e788_40b9_8234_5a0f41b3103c.slice. Jan 29 11:17:43.638391 kubelet[2781]: I0129 11:17:43.636885 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e1036735-e788-40b9-8234-5a0f41b3103c-cni-log-dir\") pod \"calico-node-zql6b\" (UID: \"e1036735-e788-40b9-8234-5a0f41b3103c\") " pod="calico-system/calico-node-zql6b" Jan 29 11:17:43.638391 kubelet[2781]: I0129 11:17:43.636918 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a4869b33-1cec-4a79-ae4b-d1f4151f9af2-typha-certs\") pod \"calico-typha-785ddcdd54-vh4qh\" (UID: \"a4869b33-1cec-4a79-ae4b-d1f4151f9af2\") " pod="calico-system/calico-typha-785ddcdd54-vh4qh" Jan 29 11:17:43.638391 kubelet[2781]: I0129 11:17:43.636930 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e1036735-e788-40b9-8234-5a0f41b3103c-policysync\") pod \"calico-node-zql6b\" (UID: \"e1036735-e788-40b9-8234-5a0f41b3103c\") " pod="calico-system/calico-node-zql6b" Jan 29 11:17:43.638391 kubelet[2781]: I0129 11:17:43.636943 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e1036735-e788-40b9-8234-5a0f41b3103c-cni-bin-dir\") pod \"calico-node-zql6b\" (UID: \"e1036735-e788-40b9-8234-5a0f41b3103c\") " pod="calico-system/calico-node-zql6b" Jan 29 11:17:43.638391 kubelet[2781]: I0129 11:17:43.636953 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e1036735-e788-40b9-8234-5a0f41b3103c-var-lib-calico\") pod \"calico-node-zql6b\" (UID: \"e1036735-e788-40b9-8234-5a0f41b3103c\") " pod="calico-system/calico-node-zql6b" Jan 29 11:17:43.638779 kubelet[2781]: I0129 11:17:43.636961 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4869b33-1cec-4a79-ae4b-d1f4151f9af2-tigera-ca-bundle\") pod \"calico-typha-785ddcdd54-vh4qh\" (UID: \"a4869b33-1cec-4a79-ae4b-d1f4151f9af2\") " 
pod="calico-system/calico-typha-785ddcdd54-vh4qh" Jan 29 11:17:43.638779 kubelet[2781]: I0129 11:17:43.636970 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e1036735-e788-40b9-8234-5a0f41b3103c-cni-net-dir\") pod \"calico-node-zql6b\" (UID: \"e1036735-e788-40b9-8234-5a0f41b3103c\") " pod="calico-system/calico-node-zql6b" Jan 29 11:17:43.638779 kubelet[2781]: I0129 11:17:43.636979 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1036735-e788-40b9-8234-5a0f41b3103c-lib-modules\") pod \"calico-node-zql6b\" (UID: \"e1036735-e788-40b9-8234-5a0f41b3103c\") " pod="calico-system/calico-node-zql6b" Jan 29 11:17:43.638779 kubelet[2781]: I0129 11:17:43.636988 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e1036735-e788-40b9-8234-5a0f41b3103c-node-certs\") pod \"calico-node-zql6b\" (UID: \"e1036735-e788-40b9-8234-5a0f41b3103c\") " pod="calico-system/calico-node-zql6b" Jan 29 11:17:43.638779 kubelet[2781]: I0129 11:17:43.637007 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqg6l\" (UniqueName: \"kubernetes.io/projected/a4869b33-1cec-4a79-ae4b-d1f4151f9af2-kube-api-access-cqg6l\") pod \"calico-typha-785ddcdd54-vh4qh\" (UID: \"a4869b33-1cec-4a79-ae4b-d1f4151f9af2\") " pod="calico-system/calico-typha-785ddcdd54-vh4qh" Jan 29 11:17:43.638908 kubelet[2781]: I0129 11:17:43.637020 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1036735-e788-40b9-8234-5a0f41b3103c-xtables-lock\") pod \"calico-node-zql6b\" (UID: \"e1036735-e788-40b9-8234-5a0f41b3103c\") " pod="calico-system/calico-node-zql6b" Jan 29 11:17:43.638908 kubelet[2781]: I0129 11:17:43.637035 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e1036735-e788-40b9-8234-5a0f41b3103c-flexvol-driver-host\") pod \"calico-node-zql6b\" (UID: \"e1036735-e788-40b9-8234-5a0f41b3103c\") " pod="calico-system/calico-node-zql6b" Jan 29 11:17:43.638908 kubelet[2781]: I0129 11:17:43.637053 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx9hg\" (UniqueName: \"kubernetes.io/projected/e1036735-e788-40b9-8234-5a0f41b3103c-kube-api-access-jx9hg\") pod \"calico-node-zql6b\" (UID: \"e1036735-e788-40b9-8234-5a0f41b3103c\") " pod="calico-system/calico-node-zql6b" Jan 29 11:17:43.638908 kubelet[2781]: I0129 11:17:43.637064 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1036735-e788-40b9-8234-5a0f41b3103c-tigera-ca-bundle\") pod \"calico-node-zql6b\" (UID: \"e1036735-e788-40b9-8234-5a0f41b3103c\") " pod="calico-system/calico-node-zql6b" Jan 29 11:17:43.638908 kubelet[2781]: I0129 11:17:43.637079 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e1036735-e788-40b9-8234-5a0f41b3103c-var-run-calico\") pod \"calico-node-zql6b\" (UID: \"e1036735-e788-40b9-8234-5a0f41b3103c\") " 
pod="calico-system/calico-node-zql6b" Jan 29 11:17:43.653922 kubelet[2781]: E0129 11:17:43.653887 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wnr5k" podUID="7d847577-abb5-4daf-916b-6334c86beb77" Jan 29 11:17:43.737611 kubelet[2781]: I0129 11:17:43.737579 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7d847577-abb5-4daf-916b-6334c86beb77-varrun\") pod \"csi-node-driver-wnr5k\" (UID: \"7d847577-abb5-4daf-916b-6334c86beb77\") " pod="calico-system/csi-node-driver-wnr5k" Jan 29 11:17:43.737611 kubelet[2781]: I0129 11:17:43.737610 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzckm\" (UniqueName: \"kubernetes.io/projected/7d847577-abb5-4daf-916b-6334c86beb77-kube-api-access-kzckm\") pod \"csi-node-driver-wnr5k\" (UID: \"7d847577-abb5-4daf-916b-6334c86beb77\") " pod="calico-system/csi-node-driver-wnr5k" Jan 29 11:17:43.737736 kubelet[2781]: I0129 11:17:43.737646 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7d847577-abb5-4daf-916b-6334c86beb77-socket-dir\") pod \"csi-node-driver-wnr5k\" (UID: \"7d847577-abb5-4daf-916b-6334c86beb77\") " pod="calico-system/csi-node-driver-wnr5k" Jan 29 11:17:43.737736 kubelet[2781]: I0129 11:17:43.737673 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d847577-abb5-4daf-916b-6334c86beb77-kubelet-dir\") pod \"csi-node-driver-wnr5k\" (UID: \"7d847577-abb5-4daf-916b-6334c86beb77\") " pod="calico-system/csi-node-driver-wnr5k" Jan 29 11:17:43.737736 kubelet[2781]: I0129 11:17:43.737685 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7d847577-abb5-4daf-916b-6334c86beb77-registration-dir\") pod \"csi-node-driver-wnr5k\" (UID: \"7d847577-abb5-4daf-916b-6334c86beb77\") " pod="calico-system/csi-node-driver-wnr5k" Jan 29 11:17:43.843161 kubelet[2781]: E0129 11:17:43.842982 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.843161 kubelet[2781]: W0129 11:17:43.843008 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.843161 kubelet[2781]: E0129 11:17:43.843035 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:17:43.843806 kubelet[2781]: E0129 11:17:43.843787 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.843806 kubelet[2781]: W0129 11:17:43.843805 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.843868 kubelet[2781]: E0129 11:17:43.843860 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.844029 kubelet[2781]: E0129 11:17:43.844017 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.844029 kubelet[2781]: W0129 11:17:43.844026 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.844133 kubelet[2781]: E0129 11:17:43.844034 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.844165 kubelet[2781]: E0129 11:17:43.844154 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.844165 kubelet[2781]: W0129 11:17:43.844162 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.844199 kubelet[2781]: E0129 11:17:43.844167 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.844505 kubelet[2781]: E0129 11:17:43.844494 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.844505 kubelet[2781]: W0129 11:17:43.844504 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.844725 kubelet[2781]: E0129 11:17:43.844553 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.844751 kubelet[2781]: E0129 11:17:43.844726 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.844751 kubelet[2781]: W0129 11:17:43.844732 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.844751 kubelet[2781]: E0129 11:17:43.844741 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:17:43.844851 kubelet[2781]: E0129 11:17:43.844841 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.844851 kubelet[2781]: W0129 11:17:43.844848 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.851114 kubelet[2781]: E0129 11:17:43.844855 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.851114 kubelet[2781]: E0129 11:17:43.844946 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.851114 kubelet[2781]: W0129 11:17:43.844951 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.851114 kubelet[2781]: E0129 11:17:43.844958 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.851114 kubelet[2781]: E0129 11:17:43.845081 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.851114 kubelet[2781]: W0129 11:17:43.845086 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.851114 kubelet[2781]: E0129 11:17:43.845091 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.851114 kubelet[2781]: E0129 11:17:43.845410 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.851114 kubelet[2781]: W0129 11:17:43.845416 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.851114 kubelet[2781]: E0129 11:17:43.845425 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.851311 kubelet[2781]: E0129 11:17:43.845539 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.851311 kubelet[2781]: W0129 11:17:43.845545 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.851311 kubelet[2781]: E0129 11:17:43.845617 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:17:43.851311 kubelet[2781]: E0129 11:17:43.846020 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.851311 kubelet[2781]: W0129 11:17:43.846026 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.851311 kubelet[2781]: E0129 11:17:43.846045 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.851311 kubelet[2781]: E0129 11:17:43.846145 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.851311 kubelet[2781]: W0129 11:17:43.846149 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.851311 kubelet[2781]: E0129 11:17:43.846239 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.851311 kubelet[2781]: W0129 11:17:43.846244 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.851311 kubelet[2781]: E0129 11:17:43.846320 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.851555 kubelet[2781]: W0129 11:17:43.846325 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.851555 kubelet[2781]: E0129 11:17:43.846330 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.851555 kubelet[2781]: E0129 11:17:43.846397 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.851555 kubelet[2781]: E0129 11:17:43.846520 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.851555 kubelet[2781]: W0129 11:17:43.846527 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.851555 kubelet[2781]: E0129 11:17:43.846549 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:17:43.851555 kubelet[2781]: E0129 11:17:43.846646 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.851555 kubelet[2781]: W0129 11:17:43.846650 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.851555 kubelet[2781]: E0129 11:17:43.847147 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.851555 kubelet[2781]: E0129 11:17:43.847279 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.851715 kubelet[2781]: W0129 11:17:43.847284 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.851715 kubelet[2781]: E0129 11:17:43.847291 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.851715 kubelet[2781]: E0129 11:17:43.847303 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.851715 kubelet[2781]: E0129 11:17:43.847431 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.851715 kubelet[2781]: W0129 11:17:43.847437 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.851715 kubelet[2781]: E0129 11:17:43.847443 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.851715 kubelet[2781]: E0129 11:17:43.847544 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.851715 kubelet[2781]: W0129 11:17:43.847550 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.851715 kubelet[2781]: E0129 11:17:43.847556 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:17:43.851715 kubelet[2781]: E0129 11:17:43.847889 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.854459 kubelet[2781]: W0129 11:17:43.847894 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.854459 kubelet[2781]: E0129 11:17:43.847900 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.854459 kubelet[2781]: E0129 11:17:43.848016 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.854459 kubelet[2781]: W0129 11:17:43.848021 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.854459 kubelet[2781]: E0129 11:17:43.848027 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.854459 kubelet[2781]: E0129 11:17:43.848119 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.854459 kubelet[2781]: W0129 11:17:43.848123 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.854459 kubelet[2781]: E0129 11:17:43.848128 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.854459 kubelet[2781]: E0129 11:17:43.848502 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.854459 kubelet[2781]: W0129 11:17:43.848507 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.854626 kubelet[2781]: E0129 11:17:43.848513 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.854626 kubelet[2781]: E0129 11:17:43.851556 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.854626 kubelet[2781]: W0129 11:17:43.851566 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.854626 kubelet[2781]: E0129 11:17:43.851578 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:17:43.857900 kubelet[2781]: E0129 11:17:43.857878 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:43.857966 kubelet[2781]: W0129 11:17:43.857909 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:43.857966 kubelet[2781]: E0129 11:17:43.857925 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:43.895761 containerd[1540]: time="2025-01-29T11:17:43.895725288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-785ddcdd54-vh4qh,Uid:a4869b33-1cec-4a79-ae4b-d1f4151f9af2,Namespace:calico-system,Attempt:0,}" Jan 29 11:17:43.909482 containerd[1540]: time="2025-01-29T11:17:43.909434348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zql6b,Uid:e1036735-e788-40b9-8234-5a0f41b3103c,Namespace:calico-system,Attempt:0,}" Jan 29 11:17:43.942736 containerd[1540]: time="2025-01-29T11:17:43.942520511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:17:43.942869 containerd[1540]: time="2025-01-29T11:17:43.942714622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:17:43.943474 containerd[1540]: time="2025-01-29T11:17:43.942794910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:43.944597 containerd[1540]: time="2025-01-29T11:17:43.944437028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:43.950722 containerd[1540]: time="2025-01-29T11:17:43.950600065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:17:43.950917 containerd[1540]: time="2025-01-29T11:17:43.950817749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:17:43.951001 containerd[1540]: time="2025-01-29T11:17:43.950909469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:43.951594 containerd[1540]: time="2025-01-29T11:17:43.951446649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:43.959874 systemd[1]: Started cri-containerd-f4c82573c3b1ab1cbe69440633300c9cf1776790079b7d9ec778d89a50a83e6b.scope - libcontainer container f4c82573c3b1ab1cbe69440633300c9cf1776790079b7d9ec778d89a50a83e6b. Jan 29 11:17:43.974513 systemd[1]: Started cri-containerd-4517a69d8d0be70a022488052992f4e79dc71c5e48beed48ac35d8a6809f95f2.scope - libcontainer container 4517a69d8d0be70a022488052992f4e79dc71c5e48beed48ac35d8a6809f95f2. 
Jan 29 11:17:43.989831 containerd[1540]: time="2025-01-29T11:17:43.989799823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zql6b,Uid:e1036735-e788-40b9-8234-5a0f41b3103c,Namespace:calico-system,Attempt:0,} returns sandbox id \"4517a69d8d0be70a022488052992f4e79dc71c5e48beed48ac35d8a6809f95f2\"" Jan 29 11:17:44.000620 containerd[1540]: time="2025-01-29T11:17:44.000557172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-785ddcdd54-vh4qh,Uid:a4869b33-1cec-4a79-ae4b-d1f4151f9af2,Namespace:calico-system,Attempt:0,} returns sandbox id \"f4c82573c3b1ab1cbe69440633300c9cf1776790079b7d9ec778d89a50a83e6b\"" Jan 29 11:17:44.062591 containerd[1540]: time="2025-01-29T11:17:44.062325807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 11:17:44.978477 kubelet[2781]: E0129 11:17:44.978450 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wnr5k" podUID="7d847577-abb5-4daf-916b-6334c86beb77" Jan 29 11:17:45.792763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3952869260.mount: Deactivated successfully. Jan 29 11:17:46.495564 containerd[1540]: time="2025-01-29T11:17:46.495530659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:46.526585 containerd[1540]: time="2025-01-29T11:17:46.526550008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 29 11:17:46.577612 containerd[1540]: time="2025-01-29T11:17:46.577544888Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:46.659653 containerd[1540]: time="2025-01-29T11:17:46.659607907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:46.660239 containerd[1540]: time="2025-01-29T11:17:46.659939290Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.597588366s" Jan 29 11:17:46.660239 containerd[1540]: time="2025-01-29T11:17:46.659958435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 29 11:17:46.666362 containerd[1540]: time="2025-01-29T11:17:46.666334709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 11:17:46.717964 containerd[1540]: time="2025-01-29T11:17:46.717939437Z" level=info msg="CreateContainer within sandbox \"f4c82573c3b1ab1cbe69440633300c9cf1776790079b7d9ec778d89a50a83e6b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 11:17:46.896246 containerd[1540]: time="2025-01-29T11:17:46.896171221Z" level=info msg="CreateContainer within sandbox 
\"f4c82573c3b1ab1cbe69440633300c9cf1776790079b7d9ec778d89a50a83e6b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d3a5561e0e3dc04d72b55d6c045edbb46252f13de9bc58848285ebf230a7fdf0\"" Jan 29 11:17:46.896842 containerd[1540]: time="2025-01-29T11:17:46.896593523Z" level=info msg="StartContainer for \"d3a5561e0e3dc04d72b55d6c045edbb46252f13de9bc58848285ebf230a7fdf0\"" Jan 29 11:17:46.936539 systemd[1]: Started cri-containerd-d3a5561e0e3dc04d72b55d6c045edbb46252f13de9bc58848285ebf230a7fdf0.scope - libcontainer container d3a5561e0e3dc04d72b55d6c045edbb46252f13de9bc58848285ebf230a7fdf0. Jan 29 11:17:46.969115 kubelet[2781]: E0129 11:17:46.969089 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wnr5k" podUID="7d847577-abb5-4daf-916b-6334c86beb77" Jan 29 11:17:46.989902 containerd[1540]: time="2025-01-29T11:17:46.989870256Z" level=info msg="StartContainer for \"d3a5561e0e3dc04d72b55d6c045edbb46252f13de9bc58848285ebf230a7fdf0\" returns successfully" Jan 29 11:17:47.284151 kubelet[2781]: E0129 11:17:47.284023 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.284151 kubelet[2781]: W0129 11:17:47.284043 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.284151 kubelet[2781]: E0129 11:17:47.284058 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.284507 kubelet[2781]: E0129 11:17:47.284478 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.284507 kubelet[2781]: W0129 11:17:47.284487 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.284507 kubelet[2781]: E0129 11:17:47.284494 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.284620 kubelet[2781]: E0129 11:17:47.284603 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.284620 kubelet[2781]: W0129 11:17:47.284610 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.284620 kubelet[2781]: E0129 11:17:47.284616 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:17:47.286056 kubelet[2781]: E0129 11:17:47.286046 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.286056 kubelet[2781]: W0129 11:17:47.286056 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.286104 kubelet[2781]: E0129 11:17:47.286064 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.286200 kubelet[2781]: E0129 11:17:47.286191 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.286225 kubelet[2781]: W0129 11:17:47.286205 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.286225 kubelet[2781]: E0129 11:17:47.286214 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.286331 kubelet[2781]: E0129 11:17:47.286319 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.286331 kubelet[2781]: W0129 11:17:47.286329 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.286408 kubelet[2781]: E0129 11:17:47.286337 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.286465 kubelet[2781]: E0129 11:17:47.286456 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.286488 kubelet[2781]: W0129 11:17:47.286465 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.286488 kubelet[2781]: E0129 11:17:47.286471 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.286583 kubelet[2781]: E0129 11:17:47.286574 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.286583 kubelet[2781]: W0129 11:17:47.286582 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.286627 kubelet[2781]: E0129 11:17:47.286588 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:17:47.286719 kubelet[2781]: E0129 11:17:47.286708 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.286719 kubelet[2781]: W0129 11:17:47.286718 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.286767 kubelet[2781]: E0129 11:17:47.286724 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.286836 kubelet[2781]: E0129 11:17:47.286826 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.286836 kubelet[2781]: W0129 11:17:47.286835 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.286883 kubelet[2781]: E0129 11:17:47.286842 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.286950 kubelet[2781]: E0129 11:17:47.286941 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.286950 kubelet[2781]: W0129 11:17:47.286949 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.286998 kubelet[2781]: E0129 11:17:47.286955 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.287064 kubelet[2781]: E0129 11:17:47.287055 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.287089 kubelet[2781]: W0129 11:17:47.287064 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.287089 kubelet[2781]: E0129 11:17:47.287072 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.287185 kubelet[2781]: E0129 11:17:47.287176 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.287185 kubelet[2781]: W0129 11:17:47.287185 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.287487 kubelet[2781]: E0129 11:17:47.287191 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:17:47.287487 kubelet[2781]: E0129 11:17:47.287290 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.287487 kubelet[2781]: W0129 11:17:47.287295 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.287487 kubelet[2781]: E0129 11:17:47.287301 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.287487 kubelet[2781]: E0129 11:17:47.287422 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.287487 kubelet[2781]: W0129 11:17:47.287428 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.287487 kubelet[2781]: E0129 11:17:47.287436 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.384107 kubelet[2781]: E0129 11:17:47.384084 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.384107 kubelet[2781]: W0129 11:17:47.384100 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.384107 kubelet[2781]: E0129 11:17:47.384114 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.384399 kubelet[2781]: E0129 11:17:47.384244 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.384399 kubelet[2781]: W0129 11:17:47.384249 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.384399 kubelet[2781]: E0129 11:17:47.384259 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.384604 kubelet[2781]: E0129 11:17:47.384520 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.384604 kubelet[2781]: W0129 11:17:47.384532 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.384604 kubelet[2781]: E0129 11:17:47.384544 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:17:47.384769 kubelet[2781]: E0129 11:17:47.384661 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.384769 kubelet[2781]: W0129 11:17:47.384667 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.384769 kubelet[2781]: E0129 11:17:47.384674 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.385113 kubelet[2781]: E0129 11:17:47.384972 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.385113 kubelet[2781]: W0129 11:17:47.384980 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.385113 kubelet[2781]: E0129 11:17:47.384992 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.385313 kubelet[2781]: E0129 11:17:47.385233 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.385313 kubelet[2781]: W0129 11:17:47.385242 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.385313 kubelet[2781]: E0129 11:17:47.385257 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.385989 kubelet[2781]: E0129 11:17:47.385877 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.385989 kubelet[2781]: W0129 11:17:47.385889 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.385989 kubelet[2781]: E0129 11:17:47.385898 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.387997 kubelet[2781]: E0129 11:17:47.387985 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.388071 kubelet[2781]: W0129 11:17:47.388054 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.388132 kubelet[2781]: E0129 11:17:47.388124 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:17:47.388323 kubelet[2781]: E0129 11:17:47.388315 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.388392 kubelet[2781]: W0129 11:17:47.388366 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.389052 kubelet[2781]: E0129 11:17:47.389044 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.389110 kubelet[2781]: W0129 11:17:47.389098 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.389265 kubelet[2781]: E0129 11:17:47.389258 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.389314 kubelet[2781]: W0129 11:17:47.389308 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.389408 kubelet[2781]: E0129 11:17:47.389360 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.389563 kubelet[2781]: E0129 11:17:47.389555 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.389619 kubelet[2781]: W0129 11:17:47.389606 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.389669 kubelet[2781]: E0129 11:17:47.389662 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.389862 kubelet[2781]: E0129 11:17:47.389854 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.389922 kubelet[2781]: W0129 11:17:47.389909 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.389970 kubelet[2781]: E0129 11:17:47.389963 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.393933 kubelet[2781]: E0129 11:17:47.393921 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:17:47.394014 kubelet[2781]: E0129 11:17:47.394001 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.394074 kubelet[2781]: W0129 11:17:47.394063 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.394111 kubelet[2781]: E0129 11:17:47.394075 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.394284 kubelet[2781]: E0129 11:17:47.394273 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.394284 kubelet[2781]: W0129 11:17:47.394281 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.394341 kubelet[2781]: E0129 11:17:47.394288 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.394394 kubelet[2781]: E0129 11:17:47.394386 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.394394 kubelet[2781]: W0129 11:17:47.394392 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.394452 kubelet[2781]: E0129 11:17:47.394397 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.394500 kubelet[2781]: E0129 11:17:47.394490 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.394500 kubelet[2781]: W0129 11:17:47.394498 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.394560 kubelet[2781]: E0129 11:17:47.394504 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.394649 kubelet[2781]: E0129 11:17:47.394639 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:17:47.394649 kubelet[2781]: W0129 11:17:47.394645 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:17:47.394709 kubelet[2781]: E0129 11:17:47.394650 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:17:47.394709 kubelet[2781]: E0129 11:17:47.394004 2781 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:17:47.940431 containerd[1540]: time="2025-01-29T11:17:47.940395150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:47.940880 containerd[1540]: time="2025-01-29T11:17:47.940852112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 29 11:17:47.941220 containerd[1540]: time="2025-01-29T11:17:47.941120518Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:47.942315 containerd[1540]: time="2025-01-29T11:17:47.942302084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:47.942734 containerd[1540]: time="2025-01-29T11:17:47.942718423Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.276365525s" Jan 29 11:17:47.942765 containerd[1540]: time="2025-01-29T11:17:47.942734993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 11:17:47.943929 containerd[1540]: time="2025-01-29T11:17:47.943801494Z" level=info msg="CreateContainer within sandbox \"4517a69d8d0be70a022488052992f4e79dc71c5e48beed48ac35d8a6809f95f2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 11:17:47.952345 containerd[1540]: time="2025-01-29T11:17:47.952326595Z" level=info msg="CreateContainer within sandbox \"4517a69d8d0be70a022488052992f4e79dc71c5e48beed48ac35d8a6809f95f2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4adb4c71835320749fd3c883d160fb7d4f0600e8606f637c4d14baccd8ebb053\"" Jan 29 11:17:47.953462 containerd[1540]: time="2025-01-29T11:17:47.952675941Z" level=info msg="StartContainer for \"4adb4c71835320749fd3c883d160fb7d4f0600e8606f637c4d14baccd8ebb053\"" Jan 29 11:17:47.986482 systemd[1]: Started cri-containerd-4adb4c71835320749fd3c883d160fb7d4f0600e8606f637c4d14baccd8ebb053.scope - libcontainer container 4adb4c71835320749fd3c883d160fb7d4f0600e8606f637c4d14baccd8ebb053. Jan 29 11:17:48.006672 containerd[1540]: time="2025-01-29T11:17:48.006251042Z" level=info msg="StartContainer for \"4adb4c71835320749fd3c883d160fb7d4f0600e8606f637c4d14baccd8ebb053\" returns successfully" Jan 29 11:17:48.020559 systemd[1]: cri-containerd-4adb4c71835320749fd3c883d160fb7d4f0600e8606f637c4d14baccd8ebb053.scope: Deactivated successfully. Jan 29 11:17:48.034455 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4adb4c71835320749fd3c883d160fb7d4f0600e8606f637c4d14baccd8ebb053-rootfs.mount: Deactivated successfully. 
Jan 29 11:17:48.096047 containerd[1540]: time="2025-01-29T11:17:48.078890940Z" level=info msg="shim disconnected" id=4adb4c71835320749fd3c883d160fb7d4f0600e8606f637c4d14baccd8ebb053 namespace=k8s.io Jan 29 11:17:48.096047 containerd[1540]: time="2025-01-29T11:17:48.095929234Z" level=warning msg="cleaning up after shim disconnected" id=4adb4c71835320749fd3c883d160fb7d4f0600e8606f637c4d14baccd8ebb053 namespace=k8s.io Jan 29 11:17:48.096047 containerd[1540]: time="2025-01-29T11:17:48.095938707Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:17:48.189772 kubelet[2781]: I0129 11:17:48.189745 2781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:17:48.202419 kubelet[2781]: I0129 11:17:48.201281 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-785ddcdd54-vh4qh" podStartSLOduration=2.552057301 podStartE2EDuration="5.20127034s" podCreationTimestamp="2025-01-29 11:17:43 +0000 UTC" firstStartedPulling="2025-01-29 11:17:44.017003101 +0000 UTC m=+13.118393745" lastFinishedPulling="2025-01-29 11:17:46.66621614 +0000 UTC m=+15.767606784" observedRunningTime="2025-01-29 11:17:47.203761236 +0000 UTC m=+16.305151906" watchObservedRunningTime="2025-01-29 11:17:48.20127034 +0000 UTC m=+17.302660992" Jan 29 11:17:48.204149 containerd[1540]: time="2025-01-29T11:17:48.203944202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 11:17:48.968840 kubelet[2781]: E0129 11:17:48.968458 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wnr5k" podUID="7d847577-abb5-4daf-916b-6334c86beb77" Jan 29 11:17:50.968533 kubelet[2781]: E0129 11:17:50.968505 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wnr5k" podUID="7d847577-abb5-4daf-916b-6334c86beb77" Jan 29 11:17:51.728303 containerd[1540]: time="2025-01-29T11:17:51.728272681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:51.728917 containerd[1540]: time="2025-01-29T11:17:51.728893618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 11:17:51.729256 containerd[1540]: time="2025-01-29T11:17:51.729240204Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:51.735224 containerd[1540]: time="2025-01-29T11:17:51.735185056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:51.735745 containerd[1540]: time="2025-01-29T11:17:51.735683018Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.53171466s" Jan 29 11:17:51.735745 containerd[1540]: time="2025-01-29T11:17:51.735698578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 11:17:51.737031 containerd[1540]: time="2025-01-29T11:17:51.737021769Z" level=info msg="CreateContainer within sandbox \"4517a69d8d0be70a022488052992f4e79dc71c5e48beed48ac35d8a6809f95f2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:17:51.756089 containerd[1540]: time="2025-01-29T11:17:51.756061998Z" level=info msg="CreateContainer within sandbox \"4517a69d8d0be70a022488052992f4e79dc71c5e48beed48ac35d8a6809f95f2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cfec429fd12f51f17c048f45e3c480db2462a22b73820ae96c42ee7fba96b4b5\"" Jan 29 11:17:51.756482 containerd[1540]: time="2025-01-29T11:17:51.756447481Z" level=info msg="StartContainer for \"cfec429fd12f51f17c048f45e3c480db2462a22b73820ae96c42ee7fba96b4b5\"" Jan 29 11:17:51.801476 systemd[1]: Started cri-containerd-cfec429fd12f51f17c048f45e3c480db2462a22b73820ae96c42ee7fba96b4b5.scope - libcontainer container cfec429fd12f51f17c048f45e3c480db2462a22b73820ae96c42ee7fba96b4b5. Jan 29 11:17:51.821606 containerd[1540]: time="2025-01-29T11:17:51.821497589Z" level=info msg="StartContainer for \"cfec429fd12f51f17c048f45e3c480db2462a22b73820ae96c42ee7fba96b4b5\" returns successfully" Jan 29 11:17:52.968857 kubelet[2781]: E0129 11:17:52.968819 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wnr5k" podUID="7d847577-abb5-4daf-916b-6334c86beb77" Jan 29 11:17:53.078544 systemd[1]: cri-containerd-cfec429fd12f51f17c048f45e3c480db2462a22b73820ae96c42ee7fba96b4b5.scope: Deactivated successfully. Jan 29 11:17:53.095737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfec429fd12f51f17c048f45e3c480db2462a22b73820ae96c42ee7fba96b4b5-rootfs.mount: Deactivated successfully. Jan 29 11:17:53.132923 containerd[1540]: time="2025-01-29T11:17:53.132848591Z" level=info msg="shim disconnected" id=cfec429fd12f51f17c048f45e3c480db2462a22b73820ae96c42ee7fba96b4b5 namespace=k8s.io Jan 29 11:17:53.133287 containerd[1540]: time="2025-01-29T11:17:53.133186982Z" level=warning msg="cleaning up after shim disconnected" id=cfec429fd12f51f17c048f45e3c480db2462a22b73820ae96c42ee7fba96b4b5 namespace=k8s.io Jan 29 11:17:53.133287 containerd[1540]: time="2025-01-29T11:17:53.133197665Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:17:53.141115 kubelet[2781]: I0129 11:17:53.141098 2781 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 29 11:17:53.181265 systemd[1]: Created slice kubepods-burstable-podfad30e8c_8ba2_44d9_9978_b8d6342a6efd.slice - libcontainer container kubepods-burstable-podfad30e8c_8ba2_44d9_9978_b8d6342a6efd.slice. Jan 29 11:17:53.187309 systemd[1]: Created slice kubepods-besteffort-pod228d172c_bed5_421d_bbfc_69e399249629.slice - libcontainer container kubepods-besteffort-pod228d172c_bed5_421d_bbfc_69e399249629.slice. 
Jan 29 11:17:53.192079 systemd[1]: Created slice kubepods-besteffort-poddfc4733c_b2d3_4e5f_a242_756778fe7626.slice - libcontainer container kubepods-besteffort-poddfc4733c_b2d3_4e5f_a242_756778fe7626.slice. Jan 29 11:17:53.197649 systemd[1]: Created slice kubepods-burstable-pod34b7fb00_2637_448b_9a6c_d8fe087ac46d.slice - libcontainer container kubepods-burstable-pod34b7fb00_2637_448b_9a6c_d8fe087ac46d.slice. Jan 29 11:17:53.204179 systemd[1]: Created slice kubepods-besteffort-podf5dc6b2c_ba1b_4039_8947_278e73fda781.slice - libcontainer container kubepods-besteffort-podf5dc6b2c_ba1b_4039_8947_278e73fda781.slice. Jan 29 11:17:53.211700 containerd[1540]: time="2025-01-29T11:17:53.211667289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 11:17:53.320696 kubelet[2781]: I0129 11:17:53.320613 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dfc4733c-b2d3-4e5f-a242-756778fe7626-calico-apiserver-certs\") pod \"calico-apiserver-567645b7d8-9spgk\" (UID: \"dfc4733c-b2d3-4e5f-a242-756778fe7626\") " pod="calico-apiserver/calico-apiserver-567645b7d8-9spgk" Jan 29 11:17:53.320696 kubelet[2781]: I0129 11:17:53.320642 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f5dc6b2c-ba1b-4039-8947-278e73fda781-calico-apiserver-certs\") pod \"calico-apiserver-567645b7d8-fwkgz\" (UID: \"f5dc6b2c-ba1b-4039-8947-278e73fda781\") " pod="calico-apiserver/calico-apiserver-567645b7d8-fwkgz" Jan 29 11:17:53.320696 kubelet[2781]: I0129 11:17:53.320656 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsh8p\" (UniqueName: \"kubernetes.io/projected/fad30e8c-8ba2-44d9-9978-b8d6342a6efd-kube-api-access-fsh8p\") pod \"coredns-668d6bf9bc-54nng\" (UID: \"fad30e8c-8ba2-44d9-9978-b8d6342a6efd\") " pod="kube-system/coredns-668d6bf9bc-54nng" Jan 29 11:17:53.320696 kubelet[2781]: I0129 11:17:53.320670 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/228d172c-bed5-421d-bbfc-69e399249629-tigera-ca-bundle\") pod \"calico-kube-controllers-bd9c8f59d-nldcx\" (UID: \"228d172c-bed5-421d-bbfc-69e399249629\") " pod="calico-system/calico-kube-controllers-bd9c8f59d-nldcx" Jan 29 11:17:53.320920 kubelet[2781]: I0129 11:17:53.320707 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34b7fb00-2637-448b-9a6c-d8fe087ac46d-config-volume\") pod \"coredns-668d6bf9bc-9fbsh\" (UID: \"34b7fb00-2637-448b-9a6c-d8fe087ac46d\") " pod="kube-system/coredns-668d6bf9bc-9fbsh" Jan 29 11:17:53.320920 kubelet[2781]: I0129 11:17:53.320728 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppt6c\" (UniqueName: \"kubernetes.io/projected/f5dc6b2c-ba1b-4039-8947-278e73fda781-kube-api-access-ppt6c\") pod \"calico-apiserver-567645b7d8-fwkgz\" (UID: \"f5dc6b2c-ba1b-4039-8947-278e73fda781\") " pod="calico-apiserver/calico-apiserver-567645b7d8-fwkgz" Jan 29 11:17:53.320920 kubelet[2781]: I0129 11:17:53.320739 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmvxk\" (UniqueName: 
\"kubernetes.io/projected/34b7fb00-2637-448b-9a6c-d8fe087ac46d-kube-api-access-vmvxk\") pod \"coredns-668d6bf9bc-9fbsh\" (UID: \"34b7fb00-2637-448b-9a6c-d8fe087ac46d\") " pod="kube-system/coredns-668d6bf9bc-9fbsh" Jan 29 11:17:53.320920 kubelet[2781]: I0129 11:17:53.320750 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fad30e8c-8ba2-44d9-9978-b8d6342a6efd-config-volume\") pod \"coredns-668d6bf9bc-54nng\" (UID: \"fad30e8c-8ba2-44d9-9978-b8d6342a6efd\") " pod="kube-system/coredns-668d6bf9bc-54nng" Jan 29 11:17:53.320920 kubelet[2781]: I0129 11:17:53.320760 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwpgq\" (UniqueName: \"kubernetes.io/projected/228d172c-bed5-421d-bbfc-69e399249629-kube-api-access-cwpgq\") pod \"calico-kube-controllers-bd9c8f59d-nldcx\" (UID: \"228d172c-bed5-421d-bbfc-69e399249629\") " pod="calico-system/calico-kube-controllers-bd9c8f59d-nldcx" Jan 29 11:17:53.321014 kubelet[2781]: I0129 11:17:53.320771 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf6dn\" (UniqueName: \"kubernetes.io/projected/dfc4733c-b2d3-4e5f-a242-756778fe7626-kube-api-access-pf6dn\") pod \"calico-apiserver-567645b7d8-9spgk\" (UID: \"dfc4733c-b2d3-4e5f-a242-756778fe7626\") " pod="calico-apiserver/calico-apiserver-567645b7d8-9spgk" Jan 29 11:17:53.485482 containerd[1540]: time="2025-01-29T11:17:53.485445361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-54nng,Uid:fad30e8c-8ba2-44d9-9978-b8d6342a6efd,Namespace:kube-system,Attempt:0,}" Jan 29 11:17:53.490872 containerd[1540]: time="2025-01-29T11:17:53.490849943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd9c8f59d-nldcx,Uid:228d172c-bed5-421d-bbfc-69e399249629,Namespace:calico-system,Attempt:0,}" Jan 29 11:17:53.498059 containerd[1540]: time="2025-01-29T11:17:53.497987539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-9spgk,Uid:dfc4733c-b2d3-4e5f-a242-756778fe7626,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:17:53.502101 containerd[1540]: time="2025-01-29T11:17:53.501912870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9fbsh,Uid:34b7fb00-2637-448b-9a6c-d8fe087ac46d,Namespace:kube-system,Attempt:0,}" Jan 29 11:17:53.511226 containerd[1540]: time="2025-01-29T11:17:53.510835926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-fwkgz,Uid:f5dc6b2c-ba1b-4039-8947-278e73fda781,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:17:53.828701 containerd[1540]: time="2025-01-29T11:17:53.828615835Z" level=error msg="Failed to destroy network for sandbox \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.833287 containerd[1540]: time="2025-01-29T11:17:53.833231812Z" level=error msg="Failed to destroy network for sandbox \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.833537 containerd[1540]: 
time="2025-01-29T11:17:53.833521166Z" level=error msg="Failed to destroy network for sandbox \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.833770 containerd[1540]: time="2025-01-29T11:17:53.833708350Z" level=error msg="encountered an error cleaning up failed sandbox \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.833770 containerd[1540]: time="2025-01-29T11:17:53.833747058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-9spgk,Uid:dfc4733c-b2d3-4e5f-a242-756778fe7626,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.833971 containerd[1540]: time="2025-01-29T11:17:53.833902397Z" level=error msg="encountered an error cleaning up failed sandbox \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.833971 containerd[1540]: time="2025-01-29T11:17:53.833946832Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-fwkgz,Uid:f5dc6b2c-ba1b-4039-8947-278e73fda781,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.835981 containerd[1540]: time="2025-01-29T11:17:53.835956967Z" level=error msg="encountered an error cleaning up failed sandbox \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.836128 containerd[1540]: time="2025-01-29T11:17:53.836076323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9fbsh,Uid:34b7fb00-2637-448b-9a6c-d8fe087ac46d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.836250 containerd[1540]: time="2025-01-29T11:17:53.836201004Z" level=error msg="Failed to destroy network for sandbox \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.836511 containerd[1540]: time="2025-01-29T11:17:53.836458447Z" level=error msg="encountered an error cleaning up failed sandbox \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.836511 containerd[1540]: time="2025-01-29T11:17:53.836487323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-54nng,Uid:fad30e8c-8ba2-44d9-9978-b8d6342a6efd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.836650 containerd[1540]: time="2025-01-29T11:17:53.836610127Z" level=error msg="Failed to destroy network for sandbox \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.836954 containerd[1540]: time="2025-01-29T11:17:53.836906896Z" level=error msg="encountered an error cleaning up failed sandbox \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.836954 containerd[1540]: time="2025-01-29T11:17:53.836931230Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd9c8f59d-nldcx,Uid:228d172c-bed5-421d-bbfc-69e399249629,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.837683 kubelet[2781]: E0129 11:17:53.837637 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.837733 kubelet[2781]: E0129 11:17:53.837687 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567645b7d8-9spgk" Jan 29 11:17:53.837733 kubelet[2781]: 
E0129 11:17:53.837703 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567645b7d8-9spgk" Jan 29 11:17:53.837786 kubelet[2781]: E0129 11:17:53.837730 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-567645b7d8-9spgk_calico-apiserver(dfc4733c-b2d3-4e5f-a242-756778fe7626)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-567645b7d8-9spgk_calico-apiserver(dfc4733c-b2d3-4e5f-a242-756778fe7626)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567645b7d8-9spgk" podUID="dfc4733c-b2d3-4e5f-a242-756778fe7626" Jan 29 11:17:53.837859 kubelet[2781]: E0129 11:17:53.837834 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.837887 kubelet[2781]: E0129 11:17:53.837863 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd9c8f59d-nldcx" Jan 29 11:17:53.837887 kubelet[2781]: E0129 11:17:53.837873 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd9c8f59d-nldcx" Jan 29 11:17:53.837932 kubelet[2781]: E0129 11:17:53.837892 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bd9c8f59d-nldcx_calico-system(228d172c-bed5-421d-bbfc-69e399249629)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bd9c8f59d-nldcx_calico-system(228d172c-bed5-421d-bbfc-69e399249629)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-bd9c8f59d-nldcx" podUID="228d172c-bed5-421d-bbfc-69e399249629" Jan 29 11:17:53.837932 kubelet[2781]: E0129 11:17:53.837917 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.837932 kubelet[2781]: E0129 11:17:53.837927 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567645b7d8-fwkgz" Jan 29 11:17:53.837998 kubelet[2781]: E0129 11:17:53.837935 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567645b7d8-fwkgz" Jan 29 11:17:53.837998 kubelet[2781]: E0129 11:17:53.837948 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-567645b7d8-fwkgz_calico-apiserver(f5dc6b2c-ba1b-4039-8947-278e73fda781)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-567645b7d8-fwkgz_calico-apiserver(f5dc6b2c-ba1b-4039-8947-278e73fda781)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567645b7d8-fwkgz" podUID="f5dc6b2c-ba1b-4039-8947-278e73fda781" Jan 29 11:17:53.837998 kubelet[2781]: E0129 11:17:53.837961 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.838065 kubelet[2781]: E0129 11:17:53.837970 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9fbsh" Jan 29 11:17:53.838065 kubelet[2781]: E0129 11:17:53.837978 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9fbsh" Jan 29 11:17:53.838065 kubelet[2781]: E0129 11:17:53.837993 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9fbsh_kube-system(34b7fb00-2637-448b-9a6c-d8fe087ac46d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9fbsh_kube-system(34b7fb00-2637-448b-9a6c-d8fe087ac46d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9fbsh" podUID="34b7fb00-2637-448b-9a6c-d8fe087ac46d" Jan 29 11:17:53.838620 kubelet[2781]: E0129 11:17:53.838006 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:53.838620 kubelet[2781]: E0129 11:17:53.838015 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-54nng" Jan 29 11:17:53.838620 kubelet[2781]: E0129 11:17:53.838023 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-54nng" Jan 29 11:17:53.838706 kubelet[2781]: E0129 11:17:53.838036 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-54nng_kube-system(fad30e8c-8ba2-44d9-9978-b8d6342a6efd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-54nng_kube-system(fad30e8c-8ba2-44d9-9978-b8d6342a6efd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-54nng" podUID="fad30e8c-8ba2-44d9-9978-b8d6342a6efd" Jan 29 11:17:54.096928 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d-shm.mount: Deactivated successfully. 
Jan 29 11:17:54.097573 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa-shm.mount: Deactivated successfully. Jan 29 11:17:54.213158 kubelet[2781]: I0129 11:17:54.213135 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e" Jan 29 11:17:54.214001 containerd[1540]: time="2025-01-29T11:17:54.213713489Z" level=info msg="StopPodSandbox for \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\"" Jan 29 11:17:54.214893 kubelet[2781]: I0129 11:17:54.214528 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20" Jan 29 11:17:54.214953 containerd[1540]: time="2025-01-29T11:17:54.214725701Z" level=info msg="StopPodSandbox for \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\"" Jan 29 11:17:54.217097 containerd[1540]: time="2025-01-29T11:17:54.217084085Z" level=info msg="Ensure that sandbox 5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20 in task-service has been cleanup successfully" Jan 29 11:17:54.218601 containerd[1540]: time="2025-01-29T11:17:54.218591347Z" level=info msg="TearDown network for sandbox \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\" successfully" Jan 29 11:17:54.218730 containerd[1540]: time="2025-01-29T11:17:54.218687389Z" level=info msg="StopPodSandbox for \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\" returns successfully" Jan 29 11:17:54.220195 containerd[1540]: time="2025-01-29T11:17:54.219089957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9fbsh,Uid:34b7fb00-2637-448b-9a6c-d8fe087ac46d,Namespace:kube-system,Attempt:1,}" Jan 29 11:17:54.220067 systemd[1]: run-netns-cni\x2df8bcbb91\x2d22a3\x2d3bff\x2ddf7f\x2da3fdb7866e84.mount: Deactivated successfully. Jan 29 11:17:54.220517 containerd[1540]: time="2025-01-29T11:17:54.220257433Z" level=info msg="Ensure that sandbox 6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e in task-service has been cleanup successfully" Jan 29 11:17:54.221067 containerd[1540]: time="2025-01-29T11:17:54.221031516Z" level=info msg="TearDown network for sandbox \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\" successfully" Jan 29 11:17:54.221067 containerd[1540]: time="2025-01-29T11:17:54.221042550Z" level=info msg="StopPodSandbox for \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\" returns successfully" Jan 29 11:17:54.222216 kubelet[2781]: I0129 11:17:54.222186 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d" Jan 29 11:17:54.222611 containerd[1540]: time="2025-01-29T11:17:54.222480150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-fwkgz,Uid:f5dc6b2c-ba1b-4039-8947-278e73fda781,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:17:54.223537 systemd[1]: run-netns-cni\x2d0884c726\x2d3948\x2d31df\x2d774d\x2db44d0d8dbedc.mount: Deactivated successfully. Jan 29 11:17:54.226822 systemd[1]: run-netns-cni\x2d36858ba2\x2d956b\x2d6313\x2dce61\x2d5d56de556c04.mount: Deactivated successfully. 
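The run-netns-cni\x2d… mount units being cleaned up here are systemd's escaping of network-namespace bind-mount paths: slashes become dashes and literal dashes become \x2d. A rough approximation of that escaping, assuming the underlying paths look like /run/netns/cni-<id> (real systemd-escape handles more characters than this):

```go
// Rough approximation of systemd path escaping for the netns mount units in
// the log: "/" -> "-", "-" -> `\x2d`. Only covers the path shapes seen here.
package main

import (
	"fmt"
	"strings"
)

func mountUnit(path string) string {
	p := strings.Trim(path, "/")
	p = strings.ReplaceAll(p, "-", `\x2d`)
	p = strings.ReplaceAll(p, "/", "-")
	return p + ".mount"
}

func main() {
	// Hypothetical netns path matching the pattern of the units above.
	fmt.Println(mountUnit("/run/netns/cni-f8bcbb91-22a3-3bff-df7f-a3fdb7866e84"))
	// run-netns-cni\x2df8bcbb91\x2d22a3\x2d3bff\x2ddf7f\x2da3fdb7866e84.mount
}
```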
Jan 29 11:17:54.235996 kubelet[2781]: I0129 11:17:54.224680 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac" Jan 29 11:17:54.235996 kubelet[2781]: I0129 11:17:54.227492 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa" Jan 29 11:17:54.236045 containerd[1540]: time="2025-01-29T11:17:54.224128178Z" level=info msg="StopPodSandbox for \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\"" Jan 29 11:17:54.236045 containerd[1540]: time="2025-01-29T11:17:54.224249616Z" level=info msg="Ensure that sandbox e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d in task-service has been cleanup successfully" Jan 29 11:17:54.236045 containerd[1540]: time="2025-01-29T11:17:54.224444580Z" level=info msg="TearDown network for sandbox \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\" successfully" Jan 29 11:17:54.236045 containerd[1540]: time="2025-01-29T11:17:54.224453306Z" level=info msg="StopPodSandbox for \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\" returns successfully" Jan 29 11:17:54.236045 containerd[1540]: time="2025-01-29T11:17:54.224987701Z" level=info msg="StopPodSandbox for \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\"" Jan 29 11:17:54.236045 containerd[1540]: time="2025-01-29T11:17:54.225135374Z" level=info msg="Ensure that sandbox fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac in task-service has been cleanup successfully" Jan 29 11:17:54.236045 containerd[1540]: time="2025-01-29T11:17:54.225277850Z" level=info msg="TearDown network for sandbox \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\" successfully" Jan 29 11:17:54.236045 containerd[1540]: time="2025-01-29T11:17:54.225285655Z" level=info msg="StopPodSandbox for \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\" returns successfully" Jan 29 11:17:54.236045 containerd[1540]: time="2025-01-29T11:17:54.225558105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd9c8f59d-nldcx,Uid:228d172c-bed5-421d-bbfc-69e399249629,Namespace:calico-system,Attempt:1,}" Jan 29 11:17:54.236045 containerd[1540]: time="2025-01-29T11:17:54.225619617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-9spgk,Uid:dfc4733c-b2d3-4e5f-a242-756778fe7626,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:17:54.236045 containerd[1540]: time="2025-01-29T11:17:54.227679798Z" level=info msg="StopPodSandbox for \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\"" Jan 29 11:17:54.236045 containerd[1540]: time="2025-01-29T11:17:54.227787122Z" level=info msg="Ensure that sandbox 2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa in task-service has been cleanup successfully" Jan 29 11:17:54.236045 containerd[1540]: time="2025-01-29T11:17:54.227901662Z" level=info msg="TearDown network for sandbox \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\" successfully" Jan 29 11:17:54.236045 containerd[1540]: time="2025-01-29T11:17:54.227909455Z" level=info msg="StopPodSandbox for \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\" returns successfully" Jan 29 11:17:54.236045 containerd[1540]: time="2025-01-29T11:17:54.228090741Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-54nng,Uid:fad30e8c-8ba2-44d9-9978-b8d6342a6efd,Namespace:kube-system,Attempt:1,}" Jan 29 11:17:54.226878 systemd[1]: run-netns-cni\x2de57612c0\x2dbb9a\x2dfddc\x2d5aba\x2dfb28b3b76c2f.mount: Deactivated successfully. Jan 29 11:17:54.229277 systemd[1]: run-netns-cni\x2dc03ba457\x2d5602\x2d6182\x2d593c\x2ddb3a735da254.mount: Deactivated successfully. Jan 29 11:17:54.743786 containerd[1540]: time="2025-01-29T11:17:54.743011766Z" level=error msg="Failed to destroy network for sandbox \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.743786 containerd[1540]: time="2025-01-29T11:17:54.743761496Z" level=error msg="encountered an error cleaning up failed sandbox \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.743957 containerd[1540]: time="2025-01-29T11:17:54.743809593Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-9spgk,Uid:dfc4733c-b2d3-4e5f-a242-756778fe7626,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.755333 kubelet[2781]: E0129 11:17:54.755292 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.755333 kubelet[2781]: E0129 11:17:54.755347 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567645b7d8-9spgk" Jan 29 11:17:54.755333 kubelet[2781]: E0129 11:17:54.755413 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567645b7d8-9spgk" Jan 29 11:17:54.755601 kubelet[2781]: E0129 11:17:54.755462 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-567645b7d8-9spgk_calico-apiserver(dfc4733c-b2d3-4e5f-a242-756778fe7626)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-567645b7d8-9spgk_calico-apiserver(dfc4733c-b2d3-4e5f-a242-756778fe7626)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567645b7d8-9spgk" podUID="dfc4733c-b2d3-4e5f-a242-756778fe7626" Jan 29 11:17:54.801988 containerd[1540]: time="2025-01-29T11:17:54.801898292Z" level=error msg="Failed to destroy network for sandbox \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.802311 containerd[1540]: time="2025-01-29T11:17:54.802204539Z" level=error msg="encountered an error cleaning up failed sandbox \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.802311 containerd[1540]: time="2025-01-29T11:17:54.802255930Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-fwkgz,Uid:f5dc6b2c-ba1b-4039-8947-278e73fda781,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.805210 kubelet[2781]: E0129 11:17:54.802776 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.805210 kubelet[2781]: E0129 11:17:54.802824 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567645b7d8-fwkgz" Jan 29 11:17:54.805210 kubelet[2781]: E0129 11:17:54.802839 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567645b7d8-fwkgz" Jan 29 11:17:54.805291 kubelet[2781]: E0129 11:17:54.803492 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-567645b7d8-fwkgz_calico-apiserver(f5dc6b2c-ba1b-4039-8947-278e73fda781)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-567645b7d8-fwkgz_calico-apiserver(f5dc6b2c-ba1b-4039-8947-278e73fda781)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567645b7d8-fwkgz" podUID="f5dc6b2c-ba1b-4039-8947-278e73fda781" Jan 29 11:17:54.827383 containerd[1540]: time="2025-01-29T11:17:54.827097843Z" level=error msg="Failed to destroy network for sandbox \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.827468 containerd[1540]: time="2025-01-29T11:17:54.827437102Z" level=error msg="encountered an error cleaning up failed sandbox \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.827584 containerd[1540]: time="2025-01-29T11:17:54.827567339Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9fbsh,Uid:34b7fb00-2637-448b-9a6c-d8fe087ac46d,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.827891 kubelet[2781]: E0129 11:17:54.827864 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.827929 kubelet[2781]: E0129 11:17:54.827906 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9fbsh" Jan 29 11:17:54.827953 kubelet[2781]: E0129 11:17:54.827931 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9fbsh" Jan 29 11:17:54.827983 kubelet[2781]: E0129 11:17:54.827967 2781 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9fbsh_kube-system(34b7fb00-2637-448b-9a6c-d8fe087ac46d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9fbsh_kube-system(34b7fb00-2637-448b-9a6c-d8fe087ac46d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9fbsh" podUID="34b7fb00-2637-448b-9a6c-d8fe087ac46d" Jan 29 11:17:54.831108 containerd[1540]: time="2025-01-29T11:17:54.831064762Z" level=error msg="Failed to destroy network for sandbox \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.831295 containerd[1540]: time="2025-01-29T11:17:54.831276485Z" level=error msg="encountered an error cleaning up failed sandbox \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.831328 containerd[1540]: time="2025-01-29T11:17:54.831314934Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-54nng,Uid:fad30e8c-8ba2-44d9-9978-b8d6342a6efd,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.831476 kubelet[2781]: E0129 11:17:54.831451 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.831510 kubelet[2781]: E0129 11:17:54.831496 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-54nng" Jan 29 11:17:54.831531 kubelet[2781]: E0129 11:17:54.831510 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-54nng" Jan 29 
11:17:54.831665 kubelet[2781]: E0129 11:17:54.831537 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-54nng_kube-system(fad30e8c-8ba2-44d9-9978-b8d6342a6efd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-54nng_kube-system(fad30e8c-8ba2-44d9-9978-b8d6342a6efd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-54nng" podUID="fad30e8c-8ba2-44d9-9978-b8d6342a6efd" Jan 29 11:17:54.835872 containerd[1540]: time="2025-01-29T11:17:54.835828246Z" level=error msg="Failed to destroy network for sandbox \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.836324 containerd[1540]: time="2025-01-29T11:17:54.836303968Z" level=error msg="encountered an error cleaning up failed sandbox \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.836424 containerd[1540]: time="2025-01-29T11:17:54.836348279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd9c8f59d-nldcx,Uid:228d172c-bed5-421d-bbfc-69e399249629,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.836652 kubelet[2781]: E0129 11:17:54.836624 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:54.836698 kubelet[2781]: E0129 11:17:54.836665 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd9c8f59d-nldcx" Jan 29 11:17:54.836698 kubelet[2781]: E0129 11:17:54.836678 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd9c8f59d-nldcx" Jan 29 11:17:54.836773 kubelet[2781]: E0129 11:17:54.836707 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bd9c8f59d-nldcx_calico-system(228d172c-bed5-421d-bbfc-69e399249629)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bd9c8f59d-nldcx_calico-system(228d172c-bed5-421d-bbfc-69e399249629)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bd9c8f59d-nldcx" podUID="228d172c-bed5-421d-bbfc-69e399249629" Jan 29 11:17:54.980219 systemd[1]: Created slice kubepods-besteffort-pod7d847577_abb5_4daf_916b_6334c86beb77.slice - libcontainer container kubepods-besteffort-pod7d847577_abb5_4daf_916b_6334c86beb77.slice. Jan 29 11:17:54.989529 containerd[1540]: time="2025-01-29T11:17:54.982438659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnr5k,Uid:7d847577-abb5-4daf-916b-6334c86beb77,Namespace:calico-system,Attempt:0,}" Jan 29 11:17:55.229651 kubelet[2781]: I0129 11:17:55.229527 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e" Jan 29 11:17:55.231027 containerd[1540]: time="2025-01-29T11:17:55.230600786Z" level=info msg="StopPodSandbox for \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\"" Jan 29 11:17:55.231027 containerd[1540]: time="2025-01-29T11:17:55.230931591Z" level=info msg="Ensure that sandbox 23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e in task-service has been cleanup successfully" Jan 29 11:17:55.232698 containerd[1540]: time="2025-01-29T11:17:55.231244596Z" level=info msg="StopPodSandbox for \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\"" Jan 29 11:17:55.232698 containerd[1540]: time="2025-01-29T11:17:55.232396584Z" level=info msg="TearDown network for sandbox \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\" successfully" Jan 29 11:17:55.232698 containerd[1540]: time="2025-01-29T11:17:55.232406220Z" level=info msg="StopPodSandbox for \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\" returns successfully" Jan 29 11:17:55.232767 kubelet[2781]: I0129 11:17:55.230952 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a" Jan 29 11:17:55.233137 containerd[1540]: time="2025-01-29T11:17:55.233083798Z" level=info msg="StopPodSandbox for \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\"" Jan 29 11:17:55.233108 systemd[1]: run-netns-cni\x2df067025b\x2d4842\x2d80c2\x2d202b\x2d65a88fec6fb0.mount: Deactivated successfully. 
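From here the recovery loop simply repeats for each pending pod, as the entries that follow show: the failed sandbox is stopped, its network teardown hits the same missing-nodename error as the setup did, and RunPodSandbox is reissued with the attempt counter bumped (Attempt:0, 1, and then 2 below). A made-up sketch of that loop shape, not kubelet's actual pod worker or CRI client:

```go
// Made-up illustration of the retry shape in the log: each failed
// RunPodSandbox is followed by StopPodSandbox/TearDown and a new attempt
// with the counter incremented, until the CNI is ready.
package main

import (
	"errors"
	"fmt"
)

var errCNINotReady = errors.New(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)

// runSandbox stands in for the CRI RunPodSandbox call; while calico/node is
// not yet running, every attempt fails the same way.
func runSandbox(pod string, attempt int) error {
	return fmt.Errorf("RunPodSandbox %s Attempt:%d: %w", pod, attempt, errCNINotReady)
}

func main() {
	for attempt := 0; attempt <= 2; attempt++ {
		fmt.Println(runSandbox("coredns-668d6bf9bc-54nng", attempt))
		fmt.Println("StopPodSandbox + TearDown network, then retry")
	}
}
```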
Jan 29 11:17:55.233593 containerd[1540]: time="2025-01-29T11:17:55.233233591Z" level=info msg="TearDown network for sandbox \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\" successfully" Jan 29 11:17:55.233593 containerd[1540]: time="2025-01-29T11:17:55.233241572Z" level=info msg="StopPodSandbox for \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\" returns successfully" Jan 29 11:17:55.233863 containerd[1540]: time="2025-01-29T11:17:55.233684399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-fwkgz,Uid:f5dc6b2c-ba1b-4039-8947-278e73fda781,Namespace:calico-apiserver,Attempt:2,}" Jan 29 11:17:55.234950 kubelet[2781]: I0129 11:17:55.234236 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97" Jan 29 11:17:55.235479 containerd[1540]: time="2025-01-29T11:17:55.235313397Z" level=info msg="StopPodSandbox for \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\"" Jan 29 11:17:55.235697 containerd[1540]: time="2025-01-29T11:17:55.235649423Z" level=info msg="Ensure that sandbox 623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97 in task-service has been cleanup successfully" Jan 29 11:17:55.235967 containerd[1540]: time="2025-01-29T11:17:55.235833129Z" level=info msg="TearDown network for sandbox \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\" successfully" Jan 29 11:17:55.235967 containerd[1540]: time="2025-01-29T11:17:55.235843046Z" level=info msg="StopPodSandbox for \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\" returns successfully" Jan 29 11:17:55.236247 containerd[1540]: time="2025-01-29T11:17:55.236145242Z" level=info msg="StopPodSandbox for \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\"" Jan 29 11:17:55.236247 containerd[1540]: time="2025-01-29T11:17:55.236193843Z" level=info msg="TearDown network for sandbox \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\" successfully" Jan 29 11:17:55.236247 containerd[1540]: time="2025-01-29T11:17:55.236200359Z" level=info msg="StopPodSandbox for \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\" returns successfully" Jan 29 11:17:55.236633 containerd[1540]: time="2025-01-29T11:17:55.236542091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd9c8f59d-nldcx,Uid:228d172c-bed5-421d-bbfc-69e399249629,Namespace:calico-system,Attempt:2,}" Jan 29 11:17:55.236907 kubelet[2781]: I0129 11:17:55.236854 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7" Jan 29 11:17:55.237397 containerd[1540]: time="2025-01-29T11:17:55.237193177Z" level=info msg="StopPodSandbox for \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\"" Jan 29 11:17:55.237397 containerd[1540]: time="2025-01-29T11:17:55.237294450Z" level=info msg="Ensure that sandbox 86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7 in task-service has been cleanup successfully" Jan 29 11:17:55.237539 containerd[1540]: time="2025-01-29T11:17:55.237525539Z" level=info msg="TearDown network for sandbox \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\" successfully" Jan 29 11:17:55.237665 containerd[1540]: time="2025-01-29T11:17:55.237582866Z" level=info msg="StopPodSandbox for 
\"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\" returns successfully" Jan 29 11:17:55.239364 containerd[1540]: time="2025-01-29T11:17:55.238505230Z" level=info msg="StopPodSandbox for \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\"" Jan 29 11:17:55.239103 systemd[1]: run-netns-cni\x2d4223246a\x2d4327\x2d71d6\x2d92fb\x2d4325bf86288a.mount: Deactivated successfully. Jan 29 11:17:55.239584 containerd[1540]: time="2025-01-29T11:17:55.239501312Z" level=info msg="TearDown network for sandbox \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\" successfully" Jan 29 11:17:55.239814 containerd[1540]: time="2025-01-29T11:17:55.239620334Z" level=info msg="StopPodSandbox for \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\" returns successfully" Jan 29 11:17:55.240136 containerd[1540]: time="2025-01-29T11:17:55.239984516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-9spgk,Uid:dfc4733c-b2d3-4e5f-a242-756778fe7626,Namespace:calico-apiserver,Attempt:2,}" Jan 29 11:17:55.246223 containerd[1540]: time="2025-01-29T11:17:55.240698479Z" level=info msg="StopPodSandbox for \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\"" Jan 29 11:17:55.246223 containerd[1540]: time="2025-01-29T11:17:55.240820446Z" level=info msg="Ensure that sandbox 965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a in task-service has been cleanup successfully" Jan 29 11:17:55.246223 containerd[1540]: time="2025-01-29T11:17:55.240940100Z" level=info msg="TearDown network for sandbox \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\" successfully" Jan 29 11:17:55.246223 containerd[1540]: time="2025-01-29T11:17:55.240948293Z" level=info msg="StopPodSandbox for \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\" returns successfully" Jan 29 11:17:55.246223 containerd[1540]: time="2025-01-29T11:17:55.241068551Z" level=info msg="StopPodSandbox for \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\"" Jan 29 11:17:55.246223 containerd[1540]: time="2025-01-29T11:17:55.241103939Z" level=info msg="TearDown network for sandbox \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\" successfully" Jan 29 11:17:55.246223 containerd[1540]: time="2025-01-29T11:17:55.241109601Z" level=info msg="StopPodSandbox for \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\" returns successfully" Jan 29 11:17:55.246223 containerd[1540]: time="2025-01-29T11:17:55.241299096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-54nng,Uid:fad30e8c-8ba2-44d9-9978-b8d6342a6efd,Namespace:kube-system,Attempt:2,}" Jan 29 11:17:55.246223 containerd[1540]: time="2025-01-29T11:17:55.241912705Z" level=info msg="Ensure that sandbox c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a in task-service has been cleanup successfully" Jan 29 11:17:55.246223 containerd[1540]: time="2025-01-29T11:17:55.242052166Z" level=info msg="TearDown network for sandbox \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\" successfully" Jan 29 11:17:55.246223 containerd[1540]: time="2025-01-29T11:17:55.242060776Z" level=info msg="StopPodSandbox for \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\" returns successfully" Jan 29 11:17:55.246223 containerd[1540]: time="2025-01-29T11:17:55.242208868Z" level=info msg="StopPodSandbox for \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\"" 
Jan 29 11:17:55.246223 containerd[1540]: time="2025-01-29T11:17:55.242249986Z" level=info msg="TearDown network for sandbox \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\" successfully" Jan 29 11:17:55.246223 containerd[1540]: time="2025-01-29T11:17:55.242255770Z" level=info msg="StopPodSandbox for \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\" returns successfully" Jan 29 11:17:55.246223 containerd[1540]: time="2025-01-29T11:17:55.242592375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9fbsh,Uid:34b7fb00-2637-448b-9a6c-d8fe087ac46d,Namespace:kube-system,Attempt:2,}" Jan 29 11:17:55.240898 systemd[1]: run-netns-cni\x2d61618491\x2dc7b5\x2d4696\x2dde3b\x2d9da800e5ff5d.mount: Deactivated successfully. Jan 29 11:17:55.246695 kubelet[2781]: I0129 11:17:55.240452 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a" Jan 29 11:17:55.243603 systemd[1]: run-netns-cni\x2dc3e928c5\x2d6613\x2dc25b\x2d7d46\x2da391182c6582.mount: Deactivated successfully. Jan 29 11:17:55.243656 systemd[1]: run-netns-cni\x2dd8a3c93f\x2d1c9b\x2de491\x2d0f46\x2d9fdd50705b0a.mount: Deactivated successfully. Jan 29 11:17:55.952363 containerd[1540]: time="2025-01-29T11:17:55.952315944Z" level=error msg="Failed to destroy network for sandbox \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:55.958474 containerd[1540]: time="2025-01-29T11:17:55.958437099Z" level=error msg="encountered an error cleaning up failed sandbox \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:55.958594 containerd[1540]: time="2025-01-29T11:17:55.958499954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnr5k,Uid:7d847577-abb5-4daf-916b-6334c86beb77,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:55.959076 kubelet[2781]: E0129 11:17:55.958759 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:55.959076 kubelet[2781]: E0129 11:17:55.958805 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-wnr5k" Jan 29 11:17:55.959076 kubelet[2781]: E0129 11:17:55.958827 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wnr5k" Jan 29 11:17:55.959455 kubelet[2781]: E0129 11:17:55.958857 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wnr5k_calico-system(7d847577-abb5-4daf-916b-6334c86beb77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wnr5k_calico-system(7d847577-abb5-4daf-916b-6334c86beb77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wnr5k" podUID="7d847577-abb5-4daf-916b-6334c86beb77" Jan 29 11:17:55.967412 containerd[1540]: time="2025-01-29T11:17:55.967353004Z" level=error msg="Failed to destroy network for sandbox \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:55.967651 containerd[1540]: time="2025-01-29T11:17:55.967632652Z" level=error msg="encountered an error cleaning up failed sandbox \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:55.967706 containerd[1540]: time="2025-01-29T11:17:55.967675807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-54nng,Uid:fad30e8c-8ba2-44d9-9978-b8d6342a6efd,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:55.968206 kubelet[2781]: E0129 11:17:55.968165 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:55.968299 kubelet[2781]: E0129 11:17:55.968288 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-54nng" Jan 29 11:17:55.968361 kubelet[2781]: E0129 11:17:55.968352 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-54nng" Jan 29 11:17:55.968493 kubelet[2781]: E0129 11:17:55.968458 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-54nng_kube-system(fad30e8c-8ba2-44d9-9978-b8d6342a6efd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-54nng_kube-system(fad30e8c-8ba2-44d9-9978-b8d6342a6efd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-54nng" podUID="fad30e8c-8ba2-44d9-9978-b8d6342a6efd" Jan 29 11:17:55.993323 containerd[1540]: time="2025-01-29T11:17:55.993281181Z" level=error msg="Failed to destroy network for sandbox \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:55.993751 containerd[1540]: time="2025-01-29T11:17:55.993726126Z" level=error msg="encountered an error cleaning up failed sandbox \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:55.994197 containerd[1540]: time="2025-01-29T11:17:55.994181027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd9c8f59d-nldcx,Uid:228d172c-bed5-421d-bbfc-69e399249629,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:55.995670 kubelet[2781]: E0129 11:17:55.995633 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:55.995778 kubelet[2781]: E0129 11:17:55.995685 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd9c8f59d-nldcx" Jan 29 11:17:55.995778 kubelet[2781]: E0129 11:17:55.995702 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd9c8f59d-nldcx" Jan 29 11:17:55.995778 kubelet[2781]: E0129 11:17:55.995756 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bd9c8f59d-nldcx_calico-system(228d172c-bed5-421d-bbfc-69e399249629)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bd9c8f59d-nldcx_calico-system(228d172c-bed5-421d-bbfc-69e399249629)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bd9c8f59d-nldcx" podUID="228d172c-bed5-421d-bbfc-69e399249629" Jan 29 11:17:56.004020 containerd[1540]: time="2025-01-29T11:17:56.003986066Z" level=error msg="Failed to destroy network for sandbox \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:56.004442 containerd[1540]: time="2025-01-29T11:17:56.004354261Z" level=error msg="encountered an error cleaning up failed sandbox \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:56.004622 containerd[1540]: time="2025-01-29T11:17:56.004588337Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-fwkgz,Uid:f5dc6b2c-ba1b-4039-8947-278e73fda781,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:56.004940 containerd[1540]: time="2025-01-29T11:17:56.004538829Z" level=error msg="Failed to destroy network for sandbox \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:56.005452 kubelet[2781]: E0129 11:17:56.005166 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:56.005452 kubelet[2781]: E0129 11:17:56.005208 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567645b7d8-fwkgz" Jan 29 11:17:56.005452 kubelet[2781]: E0129 11:17:56.005224 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567645b7d8-fwkgz" Jan 29 11:17:56.005547 kubelet[2781]: E0129 11:17:56.005264 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-567645b7d8-fwkgz_calico-apiserver(f5dc6b2c-ba1b-4039-8947-278e73fda781)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-567645b7d8-fwkgz_calico-apiserver(f5dc6b2c-ba1b-4039-8947-278e73fda781)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567645b7d8-fwkgz" podUID="f5dc6b2c-ba1b-4039-8947-278e73fda781" Jan 29 11:17:56.008040 kubelet[2781]: E0129 11:17:56.006799 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:56.008040 kubelet[2781]: E0129 11:17:56.006836 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9fbsh" Jan 29 11:17:56.008040 kubelet[2781]: E0129 11:17:56.006854 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9fbsh" Jan 29 11:17:56.008837 
containerd[1540]: time="2025-01-29T11:17:56.006304025Z" level=error msg="encountered an error cleaning up failed sandbox \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:56.008837 containerd[1540]: time="2025-01-29T11:17:56.006407878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9fbsh,Uid:34b7fb00-2637-448b-9a6c-d8fe087ac46d,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:56.008903 kubelet[2781]: E0129 11:17:56.006898 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9fbsh_kube-system(34b7fb00-2637-448b-9a6c-d8fe087ac46d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9fbsh_kube-system(34b7fb00-2637-448b-9a6c-d8fe087ac46d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9fbsh" podUID="34b7fb00-2637-448b-9a6c-d8fe087ac46d" Jan 29 11:17:56.021026 containerd[1540]: time="2025-01-29T11:17:56.020999346Z" level=error msg="Failed to destroy network for sandbox \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:56.021932 containerd[1540]: time="2025-01-29T11:17:56.021904126Z" level=error msg="encountered an error cleaning up failed sandbox \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:56.021978 containerd[1540]: time="2025-01-29T11:17:56.021955491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-9spgk,Uid:dfc4733c-b2d3-4e5f-a242-756778fe7626,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:56.022380 kubelet[2781]: E0129 11:17:56.022080 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 29 11:17:56.022380 kubelet[2781]: E0129 11:17:56.022123 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567645b7d8-9spgk" Jan 29 11:17:56.022380 kubelet[2781]: E0129 11:17:56.022137 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567645b7d8-9spgk" Jan 29 11:17:56.022482 kubelet[2781]: E0129 11:17:56.022167 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-567645b7d8-9spgk_calico-apiserver(dfc4733c-b2d3-4e5f-a242-756778fe7626)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-567645b7d8-9spgk_calico-apiserver(dfc4733c-b2d3-4e5f-a242-756778fe7626)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567645b7d8-9spgk" podUID="dfc4733c-b2d3-4e5f-a242-756778fe7626" Jan 29 11:17:56.245060 kubelet[2781]: I0129 11:17:56.244145 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4" Jan 29 11:17:56.246154 containerd[1540]: time="2025-01-29T11:17:56.245625436Z" level=info msg="StopPodSandbox for \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\"" Jan 29 11:17:56.246154 containerd[1540]: time="2025-01-29T11:17:56.245752074Z" level=info msg="Ensure that sandbox afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4 in task-service has been cleanup successfully" Jan 29 11:17:56.247827 containerd[1540]: time="2025-01-29T11:17:56.247727517Z" level=info msg="TearDown network for sandbox \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\" successfully" Jan 29 11:17:56.247827 containerd[1540]: time="2025-01-29T11:17:56.247741222Z" level=info msg="StopPodSandbox for \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\" returns successfully" Jan 29 11:17:56.248704 systemd[1]: run-netns-cni\x2df3d8bc4b\x2d9589\x2d6d35\x2d0787\x2df8d8e299af71.mount: Deactivated successfully. 
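The run-netns-cni\x2d... mount units in these entries use systemd's escaping, where each '-' in the original netns name appears as \x2d. Here is a small sketch that undoes just that escaping so the CNI network-namespace IDs become readable; it corresponds to what systemd-escape --unescape would do for these particular names and does not implement the full escaping rules.

```go
package main

import (
	"fmt"
	"strings"
)

// unescapeUnit undoes the `\x2d` escaping systemd applies to '-' in unit
// names such as run-netns-cni\x2df3d8bc4b\x2d...mount, enough to recover the
// CNI netns ID. (Full systemd escaping also maps '/' to '-', which this
// sketch does not attempt to reverse.)
func unescapeUnit(unit string) string {
	return strings.ReplaceAll(unit, `\x2d`, "-")
}

func main() {
	unit := `run-netns-cni\x2df3d8bc4b\x2d9589\x2d6d35\x2d0787\x2df8d8e299af71.mount`
	fmt.Println(unescapeUnit(unit))
	// prints: run-netns-cni-f3d8bc4b-9589-6d35-0787-f8d8e299af71.mount
}
```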
Jan 29 11:17:56.250657 kubelet[2781]: I0129 11:17:56.250159 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5" Jan 29 11:17:56.250805 containerd[1540]: time="2025-01-29T11:17:56.250738757Z" level=info msg="StopPodSandbox for \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\"" Jan 29 11:17:56.250865 containerd[1540]: time="2025-01-29T11:17:56.250854073Z" level=info msg="Ensure that sandbox 34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5 in task-service has been cleanup successfully" Jan 29 11:17:56.251443 containerd[1540]: time="2025-01-29T11:17:56.251385172Z" level=info msg="StopPodSandbox for \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\"" Jan 29 11:17:56.251561 containerd[1540]: time="2025-01-29T11:17:56.251490865Z" level=info msg="TearDown network for sandbox \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\" successfully" Jan 29 11:17:56.251561 containerd[1540]: time="2025-01-29T11:17:56.251500193Z" level=info msg="StopPodSandbox for \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\" returns successfully" Jan 29 11:17:56.251561 containerd[1540]: time="2025-01-29T11:17:56.251532339Z" level=info msg="TearDown network for sandbox \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\" successfully" Jan 29 11:17:56.251561 containerd[1540]: time="2025-01-29T11:17:56.251538539Z" level=info msg="StopPodSandbox for \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\" returns successfully" Jan 29 11:17:56.251936 containerd[1540]: time="2025-01-29T11:17:56.251854844Z" level=info msg="StopPodSandbox for \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\"" Jan 29 11:17:56.251936 containerd[1540]: time="2025-01-29T11:17:56.251904885Z" level=info msg="TearDown network for sandbox \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\" successfully" Jan 29 11:17:56.251936 containerd[1540]: time="2025-01-29T11:17:56.251912344Z" level=info msg="StopPodSandbox for \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\" returns successfully" Jan 29 11:17:56.251936 containerd[1540]: time="2025-01-29T11:17:56.251920852Z" level=info msg="StopPodSandbox for \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\"" Jan 29 11:17:56.252080 containerd[1540]: time="2025-01-29T11:17:56.251968951Z" level=info msg="TearDown network for sandbox \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\" successfully" Jan 29 11:17:56.252080 containerd[1540]: time="2025-01-29T11:17:56.251998918Z" level=info msg="StopPodSandbox for \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\" returns successfully" Jan 29 11:17:56.253019 containerd[1540]: time="2025-01-29T11:17:56.252907419Z" level=info msg="StopPodSandbox for \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\"" Jan 29 11:17:56.253019 containerd[1540]: time="2025-01-29T11:17:56.252946172Z" level=info msg="TearDown network for sandbox \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\" successfully" Jan 29 11:17:56.253693 containerd[1540]: time="2025-01-29T11:17:56.252952176Z" level=info msg="StopPodSandbox for \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\" returns successfully" Jan 29 11:17:56.253693 containerd[1540]: time="2025-01-29T11:17:56.253487176Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-54nng,Uid:fad30e8c-8ba2-44d9-9978-b8d6342a6efd,Namespace:kube-system,Attempt:3,}" Jan 29 11:17:56.253743 kubelet[2781]: I0129 11:17:56.253685 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2" Jan 29 11:17:56.254013 containerd[1540]: time="2025-01-29T11:17:56.253999703Z" level=info msg="StopPodSandbox for \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\"" Jan 29 11:17:56.254157 containerd[1540]: time="2025-01-29T11:17:56.254098899Z" level=info msg="Ensure that sandbox 4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2 in task-service has been cleanup successfully" Jan 29 11:17:56.254210 containerd[1540]: time="2025-01-29T11:17:56.254197423Z" level=info msg="TearDown network for sandbox \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\" successfully" Jan 29 11:17:56.254210 containerd[1540]: time="2025-01-29T11:17:56.254207956Z" level=info msg="StopPodSandbox for \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\" returns successfully" Jan 29 11:17:56.254420 containerd[1540]: time="2025-01-29T11:17:56.254302901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-9spgk,Uid:dfc4733c-b2d3-4e5f-a242-756778fe7626,Namespace:calico-apiserver,Attempt:3,}" Jan 29 11:17:56.255914 systemd[1]: run-netns-cni\x2d2544b4f1\x2d595d\x2de325\x2d5a7f\x2d060df9082ec3.mount: Deactivated successfully. Jan 29 11:17:56.256948 containerd[1540]: time="2025-01-29T11:17:56.256929937Z" level=info msg="StopPodSandbox for \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\"" Jan 29 11:17:56.261699 containerd[1540]: time="2025-01-29T11:17:56.257022564Z" level=info msg="TearDown network for sandbox \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\" successfully" Jan 29 11:17:56.261699 containerd[1540]: time="2025-01-29T11:17:56.257030616Z" level=info msg="StopPodSandbox for \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\" returns successfully" Jan 29 11:17:56.261699 containerd[1540]: time="2025-01-29T11:17:56.257480149Z" level=info msg="StopPodSandbox for \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\"" Jan 29 11:17:56.261699 containerd[1540]: time="2025-01-29T11:17:56.257623897Z" level=info msg="TearDown network for sandbox \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\" successfully" Jan 29 11:17:56.261699 containerd[1540]: time="2025-01-29T11:17:56.257632065Z" level=info msg="StopPodSandbox for \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\" returns successfully" Jan 29 11:17:56.261699 containerd[1540]: time="2025-01-29T11:17:56.259053770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-fwkgz,Uid:f5dc6b2c-ba1b-4039-8947-278e73fda781,Namespace:calico-apiserver,Attempt:3,}" Jan 29 11:17:56.261699 containerd[1540]: time="2025-01-29T11:17:56.261293436Z" level=info msg="StopPodSandbox for \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\"" Jan 29 11:17:56.261699 containerd[1540]: time="2025-01-29T11:17:56.261437092Z" level=info msg="Ensure that sandbox 46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444 in task-service has been cleanup successfully" Jan 29 11:17:56.260409 systemd[1]: run-netns-cni\x2d50d36f55\x2d429d\x2dd86c\x2d035d\x2d35ac9730c971.mount: Deactivated successfully. 
Jan 29 11:17:56.271058 kubelet[2781]: I0129 11:17:56.260287 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444" Jan 29 11:17:56.271058 kubelet[2781]: I0129 11:17:56.264759 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966" Jan 29 11:17:56.271058 kubelet[2781]: I0129 11:17:56.266744 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.262774857Z" level=info msg="TearDown network for sandbox \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\" successfully" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.262784123Z" level=info msg="StopPodSandbox for \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\" returns successfully" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.263532952Z" level=info msg="StopPodSandbox for \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\"" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.263741149Z" level=info msg="TearDown network for sandbox \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\" successfully" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.263749183Z" level=info msg="StopPodSandbox for \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\" returns successfully" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.264341145Z" level=info msg="StopPodSandbox for \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\"" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.264628948Z" level=info msg="TearDown network for sandbox \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\" successfully" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.264636674Z" level=info msg="StopPodSandbox for \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\" returns successfully" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.265249694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9fbsh,Uid:34b7fb00-2637-448b-9a6c-d8fe087ac46d,Namespace:kube-system,Attempt:3,}" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.265664044Z" level=info msg="StopPodSandbox for \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\"" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.265755739Z" level=info msg="Ensure that sandbox 3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966 in task-service has been cleanup successfully" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.266541167Z" level=info msg="TearDown network for sandbox \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\" successfully" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.266568070Z" level=info msg="StopPodSandbox for \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\" returns successfully" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.266997170Z" level=info msg="StopPodSandbox for \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\"" Jan 29 11:17:56.271186 containerd[1540]: 
time="2025-01-29T11:17:56.267031364Z" level=info msg="StopPodSandbox for \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\"" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.267037236Z" level=info msg="TearDown network for sandbox \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\" successfully" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.267044982Z" level=info msg="StopPodSandbox for \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\" returns successfully" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.267125328Z" level=info msg="Ensure that sandbox 47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f in task-service has been cleanup successfully" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.267325321Z" level=info msg="TearDown network for sandbox \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\" successfully" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.267341210Z" level=info msg="StopPodSandbox for \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\" returns successfully" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.267617344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnr5k,Uid:7d847577-abb5-4daf-916b-6334c86beb77,Namespace:calico-system,Attempt:1,}" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.267653830Z" level=info msg="StopPodSandbox for \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\"" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.267965354Z" level=info msg="TearDown network for sandbox \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\" successfully" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.267971426Z" level=info msg="StopPodSandbox for \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\" returns successfully" Jan 29 11:17:56.271186 containerd[1540]: time="2025-01-29T11:17:56.269028397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd9c8f59d-nldcx,Uid:228d172c-bed5-421d-bbfc-69e399249629,Namespace:calico-system,Attempt:3,}" Jan 29 11:17:56.264608 systemd[1]: run-netns-cni\x2dc5c26a46\x2d4abe\x2d4b8d\x2dad4a\x2dc7b2bc7b76be.mount: Deactivated successfully. Jan 29 11:17:57.097256 systemd[1]: run-netns-cni\x2df5ee5c52\x2d1a7f\x2d46fe\x2d9251\x2da6ac94043b15.mount: Deactivated successfully. Jan 29 11:17:57.098343 systemd[1]: run-netns-cni\x2dbda2ba4a\x2d4b6b\x2d9a68\x2ddea9\x2de394673b8bd0.mount: Deactivated successfully. 
Jan 29 11:17:57.457813 containerd[1540]: time="2025-01-29T11:17:57.457781047Z" level=error msg="Failed to destroy network for sandbox \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.459861 containerd[1540]: time="2025-01-29T11:17:57.459761511Z" level=error msg="encountered an error cleaning up failed sandbox \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.459861 containerd[1540]: time="2025-01-29T11:17:57.459801908Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-fwkgz,Uid:f5dc6b2c-ba1b-4039-8947-278e73fda781,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.459956 kubelet[2781]: E0129 11:17:57.459934 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.460115 kubelet[2781]: E0129 11:17:57.459976 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567645b7d8-fwkgz" Jan 29 11:17:57.460115 kubelet[2781]: E0129 11:17:57.459993 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567645b7d8-fwkgz" Jan 29 11:17:57.460115 kubelet[2781]: E0129 11:17:57.460021 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-567645b7d8-fwkgz_calico-apiserver(f5dc6b2c-ba1b-4039-8947-278e73fda781)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-567645b7d8-fwkgz_calico-apiserver(f5dc6b2c-ba1b-4039-8947-278e73fda781)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567645b7d8-fwkgz" podUID="f5dc6b2c-ba1b-4039-8947-278e73fda781" Jan 29 11:17:57.499767 containerd[1540]: time="2025-01-29T11:17:57.499727750Z" level=error msg="Failed to destroy network for sandbox \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.500577 containerd[1540]: time="2025-01-29T11:17:57.500560897Z" level=error msg="encountered an error cleaning up failed sandbox \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.500697 containerd[1540]: time="2025-01-29T11:17:57.500684388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9fbsh,Uid:34b7fb00-2637-448b-9a6c-d8fe087ac46d,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.500913 containerd[1540]: time="2025-01-29T11:17:57.500902285Z" level=error msg="Failed to destroy network for sandbox \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.501064 kubelet[2781]: E0129 11:17:57.501031 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.501103 kubelet[2781]: E0129 11:17:57.501078 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9fbsh" Jan 29 11:17:57.501103 kubelet[2781]: E0129 11:17:57.501092 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9fbsh" Jan 29 11:17:57.501140 kubelet[2781]: E0129 11:17:57.501117 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-9fbsh_kube-system(34b7fb00-2637-448b-9a6c-d8fe087ac46d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9fbsh_kube-system(34b7fb00-2637-448b-9a6c-d8fe087ac46d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9fbsh" podUID="34b7fb00-2637-448b-9a6c-d8fe087ac46d" Jan 29 11:17:57.502039 containerd[1540]: time="2025-01-29T11:17:57.502024062Z" level=error msg="encountered an error cleaning up failed sandbox \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.507590 containerd[1540]: time="2025-01-29T11:17:57.502123049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-54nng,Uid:fad30e8c-8ba2-44d9-9978-b8d6342a6efd,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.507590 containerd[1540]: time="2025-01-29T11:17:57.506105936Z" level=error msg="Failed to destroy network for sandbox \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.507590 containerd[1540]: time="2025-01-29T11:17:57.506302071Z" level=error msg="encountered an error cleaning up failed sandbox \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.507590 containerd[1540]: time="2025-01-29T11:17:57.506345132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd9c8f59d-nldcx,Uid:228d172c-bed5-421d-bbfc-69e399249629,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.507711 kubelet[2781]: E0129 11:17:57.502207 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.507711 kubelet[2781]: E0129 11:17:57.502227 2781 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-54nng" Jan 29 11:17:57.507711 kubelet[2781]: E0129 11:17:57.502238 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-54nng" Jan 29 11:17:57.507791 kubelet[2781]: E0129 11:17:57.502259 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-54nng_kube-system(fad30e8c-8ba2-44d9-9978-b8d6342a6efd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-54nng_kube-system(fad30e8c-8ba2-44d9-9978-b8d6342a6efd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-54nng" podUID="fad30e8c-8ba2-44d9-9978-b8d6342a6efd" Jan 29 11:17:57.507791 kubelet[2781]: E0129 11:17:57.506965 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.507791 kubelet[2781]: E0129 11:17:57.507018 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd9c8f59d-nldcx" Jan 29 11:17:57.507868 kubelet[2781]: E0129 11:17:57.507029 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd9c8f59d-nldcx" Jan 29 11:17:57.507868 kubelet[2781]: E0129 11:17:57.507057 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bd9c8f59d-nldcx_calico-system(228d172c-bed5-421d-bbfc-69e399249629)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-bd9c8f59d-nldcx_calico-system(228d172c-bed5-421d-bbfc-69e399249629)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bd9c8f59d-nldcx" podUID="228d172c-bed5-421d-bbfc-69e399249629" Jan 29 11:17:57.509096 containerd[1540]: time="2025-01-29T11:17:57.509034009Z" level=error msg="Failed to destroy network for sandbox \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.509377 containerd[1540]: time="2025-01-29T11:17:57.509281764Z" level=error msg="encountered an error cleaning up failed sandbox \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.509586 containerd[1540]: time="2025-01-29T11:17:57.509361314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-9spgk,Uid:dfc4733c-b2d3-4e5f-a242-756778fe7626,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.509741 kubelet[2781]: E0129 11:17:57.509724 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.509810 kubelet[2781]: E0129 11:17:57.509751 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567645b7d8-9spgk" Jan 29 11:17:57.509810 kubelet[2781]: E0129 11:17:57.509763 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567645b7d8-9spgk" Jan 29 11:17:57.509810 kubelet[2781]: E0129 11:17:57.509789 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-567645b7d8-9spgk_calico-apiserver(dfc4733c-b2d3-4e5f-a242-756778fe7626)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-567645b7d8-9spgk_calico-apiserver(dfc4733c-b2d3-4e5f-a242-756778fe7626)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567645b7d8-9spgk" podUID="dfc4733c-b2d3-4e5f-a242-756778fe7626" Jan 29 11:17:57.512285 containerd[1540]: time="2025-01-29T11:17:57.512193226Z" level=error msg="Failed to destroy network for sandbox \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.512418 containerd[1540]: time="2025-01-29T11:17:57.512386096Z" level=error msg="encountered an error cleaning up failed sandbox \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.512510 containerd[1540]: time="2025-01-29T11:17:57.512420199Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnr5k,Uid:7d847577-abb5-4daf-916b-6334c86beb77,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.512615 kubelet[2781]: E0129 11:17:57.512535 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:57.512615 kubelet[2781]: E0129 11:17:57.512565 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wnr5k" Jan 29 11:17:57.512615 kubelet[2781]: E0129 11:17:57.512581 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wnr5k" Jan 29 11:17:57.512685 kubelet[2781]: E0129 11:17:57.512601 2781 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wnr5k_calico-system(7d847577-abb5-4daf-916b-6334c86beb77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wnr5k_calico-system(7d847577-abb5-4daf-916b-6334c86beb77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wnr5k" podUID="7d847577-abb5-4daf-916b-6334c86beb77" Jan 29 11:17:58.098986 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea-shm.mount: Deactivated successfully. Jan 29 11:17:58.100038 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba-shm.mount: Deactivated successfully. Jan 29 11:17:58.100080 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991-shm.mount: Deactivated successfully. Jan 29 11:17:58.294000 kubelet[2781]: I0129 11:17:58.293837 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba" Jan 29 11:17:58.339967 containerd[1540]: time="2025-01-29T11:17:58.339940801Z" level=info msg="StopPodSandbox for \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\"" Jan 29 11:17:58.353436 containerd[1540]: time="2025-01-29T11:17:58.351665787Z" level=info msg="Ensure that sandbox 2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba in task-service has been cleanup successfully" Jan 29 11:17:58.353436 containerd[1540]: time="2025-01-29T11:17:58.351805362Z" level=info msg="TearDown network for sandbox \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\" successfully" Jan 29 11:17:58.353436 containerd[1540]: time="2025-01-29T11:17:58.351815336Z" level=info msg="StopPodSandbox for \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\" returns successfully" Jan 29 11:17:58.353915 systemd[1]: run-netns-cni\x2d07c06cf8\x2d691c\x2d4f9e\x2dcc9b\x2d8da2194558bc.mount: Deactivated successfully. 
Jan 29 11:17:58.453059 containerd[1540]: time="2025-01-29T11:17:58.452630942Z" level=info msg="StopPodSandbox for \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\"" Jan 29 11:17:58.453059 containerd[1540]: time="2025-01-29T11:17:58.452690735Z" level=info msg="TearDown network for sandbox \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\" successfully" Jan 29 11:17:58.453059 containerd[1540]: time="2025-01-29T11:17:58.452720337Z" level=info msg="StopPodSandbox for \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\" returns successfully" Jan 29 11:17:58.453170 containerd[1540]: time="2025-01-29T11:17:58.453132724Z" level=info msg="StopPodSandbox for \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\"" Jan 29 11:17:58.453209 containerd[1540]: time="2025-01-29T11:17:58.453167975Z" level=info msg="TearDown network for sandbox \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\" successfully" Jan 29 11:17:58.453230 containerd[1540]: time="2025-01-29T11:17:58.453210098Z" level=info msg="StopPodSandbox for \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\" returns successfully" Jan 29 11:17:58.490914 containerd[1540]: time="2025-01-29T11:17:58.486679583Z" level=info msg="StopPodSandbox for \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\"" Jan 29 11:17:58.491226 containerd[1540]: time="2025-01-29T11:17:58.491190991Z" level=info msg="TearDown network for sandbox \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\" successfully" Jan 29 11:17:58.491275 containerd[1540]: time="2025-01-29T11:17:58.491267200Z" level=info msg="StopPodSandbox for \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\" returns successfully" Jan 29 11:17:58.496586 containerd[1540]: time="2025-01-29T11:17:58.496008430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-9spgk,Uid:dfc4733c-b2d3-4e5f-a242-756778fe7626,Namespace:calico-apiserver,Attempt:4,}" Jan 29 11:17:58.534347 kubelet[2781]: I0129 11:17:58.534325 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991" Jan 29 11:17:58.534749 containerd[1540]: time="2025-01-29T11:17:58.534727648Z" level=info msg="StopPodSandbox for \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\"" Jan 29 11:17:58.534883 containerd[1540]: time="2025-01-29T11:17:58.534862903Z" level=info msg="Ensure that sandbox 06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991 in task-service has been cleanup successfully" Jan 29 11:17:58.536738 systemd[1]: run-netns-cni\x2d0a9c0de0\x2d1b63\x2d07a6\x2d359b\x2dc9838fa49d81.mount: Deactivated successfully. Jan 29 11:17:58.539615 containerd[1540]: time="2025-01-29T11:17:58.539520152Z" level=info msg="TearDown network for sandbox \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\" successfully" Jan 29 11:17:58.539615 containerd[1540]: time="2025-01-29T11:17:58.539549752Z" level=info msg="StopPodSandbox for \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\" returns successfully" Jan 29 11:17:58.588017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3704850921.mount: Deactivated successfully. 
Jan 29 11:17:58.639345 containerd[1540]: time="2025-01-29T11:17:58.639194316Z" level=info msg="StopPodSandbox for \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\"" Jan 29 11:17:58.639345 containerd[1540]: time="2025-01-29T11:17:58.639276913Z" level=info msg="TearDown network for sandbox \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\" successfully" Jan 29 11:17:58.639345 containerd[1540]: time="2025-01-29T11:17:58.639286819Z" level=info msg="StopPodSandbox for \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\" returns successfully" Jan 29 11:17:58.680283 containerd[1540]: time="2025-01-29T11:17:58.680200128Z" level=info msg="StopPodSandbox for \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\"" Jan 29 11:17:58.680531 containerd[1540]: time="2025-01-29T11:17:58.680440229Z" level=info msg="TearDown network for sandbox \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\" successfully" Jan 29 11:17:58.680531 containerd[1540]: time="2025-01-29T11:17:58.680477361Z" level=info msg="StopPodSandbox for \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\" returns successfully" Jan 29 11:17:58.686524 kubelet[2781]: I0129 11:17:58.686476 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d" Jan 29 11:17:58.688787 containerd[1540]: time="2025-01-29T11:17:58.688661292Z" level=info msg="StopPodSandbox for \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\"" Jan 29 11:17:58.688787 containerd[1540]: time="2025-01-29T11:17:58.688724444Z" level=info msg="TearDown network for sandbox \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\" successfully" Jan 29 11:17:58.688787 containerd[1540]: time="2025-01-29T11:17:58.688751548Z" level=info msg="StopPodSandbox for \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\" returns successfully" Jan 29 11:17:58.690764 containerd[1540]: time="2025-01-29T11:17:58.690710925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-54nng,Uid:fad30e8c-8ba2-44d9-9978-b8d6342a6efd,Namespace:kube-system,Attempt:4,}" Jan 29 11:17:58.690882 containerd[1540]: time="2025-01-29T11:17:58.690867090Z" level=info msg="StopPodSandbox for \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\"" Jan 29 11:17:58.693424 containerd[1540]: time="2025-01-29T11:17:58.690975568Z" level=info msg="Ensure that sandbox ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d in task-service has been cleanup successfully" Jan 29 11:17:58.693599 containerd[1540]: time="2025-01-29T11:17:58.693584851Z" level=info msg="TearDown network for sandbox \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\" successfully" Jan 29 11:17:58.693641 containerd[1540]: time="2025-01-29T11:17:58.693634188Z" level=info msg="StopPodSandbox for \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\" returns successfully" Jan 29 11:17:58.693844 systemd[1]: run-netns-cni\x2d9fdd4ec3\x2d0059\x2de874\x2df929\x2d26b23464f0dd.mount: Deactivated successfully. 
Jan 29 11:17:58.699028 containerd[1540]: time="2025-01-29T11:17:58.698411678Z" level=info msg="StopPodSandbox for \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\"" Jan 29 11:17:58.699028 containerd[1540]: time="2025-01-29T11:17:58.698478409Z" level=info msg="TearDown network for sandbox \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\" successfully" Jan 29 11:17:58.699766 containerd[1540]: time="2025-01-29T11:17:58.698488081Z" level=info msg="StopPodSandbox for \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\" returns successfully" Jan 29 11:17:58.701171 containerd[1540]: time="2025-01-29T11:17:58.701098100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnr5k,Uid:7d847577-abb5-4daf-916b-6334c86beb77,Namespace:calico-system,Attempt:2,}" Jan 29 11:17:58.712542 kubelet[2781]: I0129 11:17:58.712173 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea" Jan 29 11:17:58.717486 containerd[1540]: time="2025-01-29T11:17:58.713227714Z" level=info msg="StopPodSandbox for \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\"" Jan 29 11:17:58.717486 containerd[1540]: time="2025-01-29T11:17:58.713345090Z" level=info msg="Ensure that sandbox 4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea in task-service has been cleanup successfully" Jan 29 11:17:58.717486 containerd[1540]: time="2025-01-29T11:17:58.713515293Z" level=info msg="TearDown network for sandbox \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\" successfully" Jan 29 11:17:58.717486 containerd[1540]: time="2025-01-29T11:17:58.713523481Z" level=info msg="StopPodSandbox for \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\" returns successfully" Jan 29 11:17:58.717486 containerd[1540]: time="2025-01-29T11:17:58.714130930Z" level=info msg="StopPodSandbox for \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\"" Jan 29 11:17:58.717486 containerd[1540]: time="2025-01-29T11:17:58.714266098Z" level=info msg="TearDown network for sandbox \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\" successfully" Jan 29 11:17:58.717486 containerd[1540]: time="2025-01-29T11:17:58.714274094Z" level=info msg="StopPodSandbox for \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\" returns successfully" Jan 29 11:17:58.723254 containerd[1540]: time="2025-01-29T11:17:58.722958145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:58.740299 containerd[1540]: time="2025-01-29T11:17:58.739994692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 11:17:58.752886 containerd[1540]: time="2025-01-29T11:17:58.752859732Z" level=info msg="StopPodSandbox for \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\"" Jan 29 11:17:58.753142 containerd[1540]: time="2025-01-29T11:17:58.752934168Z" level=info msg="TearDown network for sandbox \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\" successfully" Jan 29 11:17:58.753142 containerd[1540]: time="2025-01-29T11:17:58.752941303Z" level=info msg="StopPodSandbox for \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\" returns successfully" Jan 29 11:17:58.758707 containerd[1540]: time="2025-01-29T11:17:58.758691543Z" 
level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:58.762588 containerd[1540]: time="2025-01-29T11:17:58.762567312Z" level=info msg="StopPodSandbox for \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\"" Jan 29 11:17:58.762651 containerd[1540]: time="2025-01-29T11:17:58.762619395Z" level=info msg="TearDown network for sandbox \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\" successfully" Jan 29 11:17:58.762651 containerd[1540]: time="2025-01-29T11:17:58.762626239Z" level=info msg="StopPodSandbox for \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\" returns successfully" Jan 29 11:17:58.763002 containerd[1540]: time="2025-01-29T11:17:58.762892358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-fwkgz,Uid:f5dc6b2c-ba1b-4039-8947-278e73fda781,Namespace:calico-apiserver,Attempt:4,}" Jan 29 11:17:58.763233 kubelet[2781]: I0129 11:17:58.763187 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222" Jan 29 11:17:58.764910 containerd[1540]: time="2025-01-29T11:17:58.764798699Z" level=info msg="StopPodSandbox for \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\"" Jan 29 11:17:58.765309 containerd[1540]: time="2025-01-29T11:17:58.765080392Z" level=info msg="Ensure that sandbox 16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222 in task-service has been cleanup successfully" Jan 29 11:17:58.766473 containerd[1540]: time="2025-01-29T11:17:58.766440713Z" level=info msg="TearDown network for sandbox \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\" successfully" Jan 29 11:17:58.766473 containerd[1540]: time="2025-01-29T11:17:58.766467553Z" level=info msg="StopPodSandbox for \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\" returns successfully" Jan 29 11:17:58.768684 containerd[1540]: time="2025-01-29T11:17:58.768568392Z" level=info msg="StopPodSandbox for \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\"" Jan 29 11:17:58.769176 containerd[1540]: time="2025-01-29T11:17:58.768937359Z" level=info msg="TearDown network for sandbox \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\" successfully" Jan 29 11:17:58.769176 containerd[1540]: time="2025-01-29T11:17:58.768947899Z" level=info msg="StopPodSandbox for \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\" returns successfully" Jan 29 11:17:58.770715 containerd[1540]: time="2025-01-29T11:17:58.770460864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:58.772163 containerd[1540]: time="2025-01-29T11:17:58.772151920Z" level=info msg="StopPodSandbox for \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\"" Jan 29 11:17:58.773533 containerd[1540]: time="2025-01-29T11:17:58.773486772Z" level=info msg="TearDown network for sandbox \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\" successfully" Jan 29 11:17:58.773533 containerd[1540]: time="2025-01-29T11:17:58.773498175Z" level=info msg="StopPodSandbox for \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\" returns successfully" Jan 29 
11:17:58.773533 containerd[1540]: time="2025-01-29T11:17:58.773215779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.561001144s" Jan 29 11:17:58.773533 containerd[1540]: time="2025-01-29T11:17:58.773530997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 11:17:58.774190 containerd[1540]: time="2025-01-29T11:17:58.774179909Z" level=info msg="StopPodSandbox for \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\"" Jan 29 11:17:58.774312 containerd[1540]: time="2025-01-29T11:17:58.774251292Z" level=info msg="TearDown network for sandbox \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\" successfully" Jan 29 11:17:58.774312 containerd[1540]: time="2025-01-29T11:17:58.774259686Z" level=info msg="StopPodSandbox for \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\" returns successfully" Jan 29 11:17:58.775350 containerd[1540]: time="2025-01-29T11:17:58.775316559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9fbsh,Uid:34b7fb00-2637-448b-9a6c-d8fe087ac46d,Namespace:kube-system,Attempt:4,}" Jan 29 11:17:58.776083 kubelet[2781]: I0129 11:17:58.776055 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a" Jan 29 11:17:58.779411 containerd[1540]: time="2025-01-29T11:17:58.779388680Z" level=info msg="StopPodSandbox for \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\"" Jan 29 11:17:58.782222 containerd[1540]: time="2025-01-29T11:17:58.779942533Z" level=info msg="Ensure that sandbox 0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a in task-service has been cleanup successfully" Jan 29 11:17:58.790698 containerd[1540]: time="2025-01-29T11:17:58.790672102Z" level=info msg="TearDown network for sandbox \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\" successfully" Jan 29 11:17:58.790698 containerd[1540]: time="2025-01-29T11:17:58.790691675Z" level=info msg="StopPodSandbox for \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\" returns successfully" Jan 29 11:17:58.792261 containerd[1540]: time="2025-01-29T11:17:58.792118498Z" level=info msg="StopPodSandbox for \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\"" Jan 29 11:17:58.792690 containerd[1540]: time="2025-01-29T11:17:58.792651924Z" level=info msg="TearDown network for sandbox \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\" successfully" Jan 29 11:17:58.792690 containerd[1540]: time="2025-01-29T11:17:58.792662111Z" level=info msg="StopPodSandbox for \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\" returns successfully" Jan 29 11:17:58.793519 containerd[1540]: time="2025-01-29T11:17:58.793489840Z" level=info msg="StopPodSandbox for \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\"" Jan 29 11:17:58.793679 containerd[1540]: time="2025-01-29T11:17:58.793640144Z" level=info msg="TearDown network for sandbox \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\" successfully" Jan 29 
11:17:58.793679 containerd[1540]: time="2025-01-29T11:17:58.793649699Z" level=info msg="StopPodSandbox for \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\" returns successfully" Jan 29 11:17:58.794812 containerd[1540]: time="2025-01-29T11:17:58.794704521Z" level=info msg="StopPodSandbox for \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\"" Jan 29 11:17:58.794812 containerd[1540]: time="2025-01-29T11:17:58.794781407Z" level=info msg="TearDown network for sandbox \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\" successfully" Jan 29 11:17:58.794812 containerd[1540]: time="2025-01-29T11:17:58.794789104Z" level=info msg="StopPodSandbox for \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\" returns successfully" Jan 29 11:17:58.796043 containerd[1540]: time="2025-01-29T11:17:58.796030847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd9c8f59d-nldcx,Uid:228d172c-bed5-421d-bbfc-69e399249629,Namespace:calico-system,Attempt:4,}" Jan 29 11:17:58.816727 containerd[1540]: time="2025-01-29T11:17:58.816700188Z" level=error msg="Failed to destroy network for sandbox \"cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:58.816946 containerd[1540]: time="2025-01-29T11:17:58.816926239Z" level=error msg="encountered an error cleaning up failed sandbox \"cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:58.816978 containerd[1540]: time="2025-01-29T11:17:58.816961885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-9spgk,Uid:dfc4733c-b2d3-4e5f-a242-756778fe7626,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:58.817466 kubelet[2781]: E0129 11:17:58.817150 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:58.817466 kubelet[2781]: E0129 11:17:58.817189 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567645b7d8-9spgk" Jan 29 11:17:58.817466 kubelet[2781]: E0129 11:17:58.817210 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567645b7d8-9spgk" Jan 29 11:17:58.819278 kubelet[2781]: E0129 11:17:58.817529 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-567645b7d8-9spgk_calico-apiserver(dfc4733c-b2d3-4e5f-a242-756778fe7626)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-567645b7d8-9spgk_calico-apiserver(dfc4733c-b2d3-4e5f-a242-756778fe7626)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567645b7d8-9spgk" podUID="dfc4733c-b2d3-4e5f-a242-756778fe7626" Jan 29 11:17:58.828901 containerd[1540]: time="2025-01-29T11:17:58.828814337Z" level=error msg="Failed to destroy network for sandbox \"20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:58.829584 containerd[1540]: time="2025-01-29T11:17:58.829354048Z" level=error msg="encountered an error cleaning up failed sandbox \"20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:58.829584 containerd[1540]: time="2025-01-29T11:17:58.829401035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnr5k,Uid:7d847577-abb5-4daf-916b-6334c86beb77,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:58.829778 kubelet[2781]: E0129 11:17:58.829757 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:58.829912 kubelet[2781]: E0129 11:17:58.829844 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wnr5k" Jan 29 11:17:58.829912 kubelet[2781]: E0129 11:17:58.829861 2781 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wnr5k" Jan 29 11:17:58.829912 kubelet[2781]: E0129 11:17:58.829888 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wnr5k_calico-system(7d847577-abb5-4daf-916b-6334c86beb77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wnr5k_calico-system(7d847577-abb5-4daf-916b-6334c86beb77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wnr5k" podUID="7d847577-abb5-4daf-916b-6334c86beb77" Jan 29 11:17:58.843853 containerd[1540]: time="2025-01-29T11:17:58.843756484Z" level=error msg="Failed to destroy network for sandbox \"11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:58.844091 containerd[1540]: time="2025-01-29T11:17:58.844074094Z" level=error msg="encountered an error cleaning up failed sandbox \"11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:58.844169 containerd[1540]: time="2025-01-29T11:17:58.844157344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-54nng,Uid:fad30e8c-8ba2-44d9-9978-b8d6342a6efd,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:58.844435 kubelet[2781]: E0129 11:17:58.844410 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:58.844494 kubelet[2781]: E0129 11:17:58.844447 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-54nng" Jan 29 11:17:58.844494 
kubelet[2781]: E0129 11:17:58.844460 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-54nng" Jan 29 11:17:58.844549 kubelet[2781]: E0129 11:17:58.844488 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-54nng_kube-system(fad30e8c-8ba2-44d9-9978-b8d6342a6efd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-54nng_kube-system(fad30e8c-8ba2-44d9-9978-b8d6342a6efd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-54nng" podUID="fad30e8c-8ba2-44d9-9978-b8d6342a6efd" Jan 29 11:17:58.852308 containerd[1540]: time="2025-01-29T11:17:58.852277535Z" level=error msg="Failed to destroy network for sandbox \"2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:58.852719 containerd[1540]: time="2025-01-29T11:17:58.852597838Z" level=error msg="encountered an error cleaning up failed sandbox \"2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:58.852719 containerd[1540]: time="2025-01-29T11:17:58.852638441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-fwkgz,Uid:f5dc6b2c-ba1b-4039-8947-278e73fda781,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:58.852835 kubelet[2781]: E0129 11:17:58.852801 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:58.852880 kubelet[2781]: E0129 11:17:58.852835 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-567645b7d8-fwkgz" Jan 29 11:17:58.852880 kubelet[2781]: E0129 11:17:58.852846 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567645b7d8-fwkgz" Jan 29 11:17:58.852880 kubelet[2781]: E0129 11:17:58.852870 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-567645b7d8-fwkgz_calico-apiserver(f5dc6b2c-ba1b-4039-8947-278e73fda781)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-567645b7d8-fwkgz_calico-apiserver(f5dc6b2c-ba1b-4039-8947-278e73fda781)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567645b7d8-fwkgz" podUID="f5dc6b2c-ba1b-4039-8947-278e73fda781" Jan 29 11:17:58.977582 containerd[1540]: time="2025-01-29T11:17:58.977456823Z" level=info msg="CreateContainer within sandbox \"4517a69d8d0be70a022488052992f4e79dc71c5e48beed48ac35d8a6809f95f2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:17:59.006610 containerd[1540]: time="2025-01-29T11:17:59.006354356Z" level=error msg="Failed to destroy network for sandbox \"399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:59.006610 containerd[1540]: time="2025-01-29T11:17:59.006551306Z" level=error msg="encountered an error cleaning up failed sandbox \"399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:59.006610 containerd[1540]: time="2025-01-29T11:17:59.006584048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9fbsh,Uid:34b7fb00-2637-448b-9a6c-d8fe087ac46d,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:59.006839 kubelet[2781]: E0129 11:17:59.006712 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:59.006839 kubelet[2781]: E0129 11:17:59.006747 2781 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9fbsh" Jan 29 11:17:59.006839 kubelet[2781]: E0129 11:17:59.006764 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9fbsh" Jan 29 11:17:59.007024 kubelet[2781]: E0129 11:17:59.006790 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9fbsh_kube-system(34b7fb00-2637-448b-9a6c-d8fe087ac46d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9fbsh_kube-system(34b7fb00-2637-448b-9a6c-d8fe087ac46d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9fbsh" podUID="34b7fb00-2637-448b-9a6c-d8fe087ac46d" Jan 29 11:17:59.020259 containerd[1540]: time="2025-01-29T11:17:59.019521839Z" level=error msg="Failed to destroy network for sandbox \"374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:59.020259 containerd[1540]: time="2025-01-29T11:17:59.019721201Z" level=error msg="encountered an error cleaning up failed sandbox \"374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:59.020259 containerd[1540]: time="2025-01-29T11:17:59.019753558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd9c8f59d-nldcx,Uid:228d172c-bed5-421d-bbfc-69e399249629,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:17:59.020848 kubelet[2781]: E0129 11:17:59.019843 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 
29 11:17:59.020848 kubelet[2781]: E0129 11:17:59.019884 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd9c8f59d-nldcx" Jan 29 11:17:59.020848 kubelet[2781]: E0129 11:17:59.019904 2781 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd9c8f59d-nldcx" Jan 29 11:17:59.020924 kubelet[2781]: E0129 11:17:59.019937 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bd9c8f59d-nldcx_calico-system(228d172c-bed5-421d-bbfc-69e399249629)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bd9c8f59d-nldcx_calico-system(228d172c-bed5-421d-bbfc-69e399249629)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bd9c8f59d-nldcx" podUID="228d172c-bed5-421d-bbfc-69e399249629" Jan 29 11:17:59.038366 containerd[1540]: time="2025-01-29T11:17:59.038339567Z" level=info msg="CreateContainer within sandbox \"4517a69d8d0be70a022488052992f4e79dc71c5e48beed48ac35d8a6809f95f2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5b1ef961f3faebc1e6546fdfb0341cf5bc88e1fb0b35fc91e6515aa8a12f6423\"" Jan 29 11:17:59.041285 containerd[1540]: time="2025-01-29T11:17:59.041242748Z" level=info msg="StartContainer for \"5b1ef961f3faebc1e6546fdfb0341cf5bc88e1fb0b35fc91e6515aa8a12f6423\"" Jan 29 11:17:59.100597 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2-shm.mount: Deactivated successfully. Jan 29 11:17:59.100661 systemd[1]: run-netns-cni\x2ddb9a8bbd\x2d9c46\x2dfe1a\x2dcca4\x2dd712043a8557.mount: Deactivated successfully. Jan 29 11:17:59.100703 systemd[1]: run-netns-cni\x2d4cf18c6f\x2d7c3d\x2d9580\x2d6dc6\x2d2ce2c3fdbe3d.mount: Deactivated successfully. Jan 29 11:17:59.100741 systemd[1]: run-netns-cni\x2d4a07facf\x2ddc9e\x2d3450\x2d62c5\x2d5808ceb33e5b.mount: Deactivated successfully. Jan 29 11:17:59.148389 systemd[1]: run-containerd-runc-k8s.io-5b1ef961f3faebc1e6546fdfb0341cf5bc88e1fb0b35fc91e6515aa8a12f6423-runc.NisF8Z.mount: Deactivated successfully. Jan 29 11:17:59.154456 systemd[1]: Started cri-containerd-5b1ef961f3faebc1e6546fdfb0341cf5bc88e1fb0b35fc91e6515aa8a12f6423.scope - libcontainer container 5b1ef961f3faebc1e6546fdfb0341cf5bc88e1fb0b35fc91e6515aa8a12f6423. 
Jan 29 11:17:59.175671 containerd[1540]: time="2025-01-29T11:17:59.175616597Z" level=info msg="StartContainer for \"5b1ef961f3faebc1e6546fdfb0341cf5bc88e1fb0b35fc91e6515aa8a12f6423\" returns successfully" Jan 29 11:17:59.326652 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:17:59.327252 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 29 11:17:59.806910 kubelet[2781]: I0129 11:17:59.806866 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2" Jan 29 11:17:59.809763 containerd[1540]: time="2025-01-29T11:17:59.807754393Z" level=info msg="StopPodSandbox for \"cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2\"" Jan 29 11:17:59.809763 containerd[1540]: time="2025-01-29T11:17:59.807895431Z" level=info msg="Ensure that sandbox cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2 in task-service has been cleanup successfully" Jan 29 11:17:59.809331 systemd[1]: run-netns-cni\x2d7e548eff\x2d2c86\x2da6c0\x2d0028\x2d241eec7518a8.mount: Deactivated successfully. Jan 29 11:17:59.810780 containerd[1540]: time="2025-01-29T11:17:59.810750554Z" level=info msg="TearDown network for sandbox \"cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2\" successfully" Jan 29 11:17:59.810780 containerd[1540]: time="2025-01-29T11:17:59.810765294Z" level=info msg="StopPodSandbox for \"cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2\" returns successfully" Jan 29 11:17:59.810985 containerd[1540]: time="2025-01-29T11:17:59.810886547Z" level=info msg="StopPodSandbox for \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\"" Jan 29 11:17:59.810985 containerd[1540]: time="2025-01-29T11:17:59.810926801Z" level=info msg="TearDown network for sandbox \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\" successfully" Jan 29 11:17:59.810985 containerd[1540]: time="2025-01-29T11:17:59.810932503Z" level=info msg="StopPodSandbox for \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\" returns successfully" Jan 29 11:17:59.811138 containerd[1540]: time="2025-01-29T11:17:59.811115373Z" level=info msg="StopPodSandbox for \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\"" Jan 29 11:17:59.811172 containerd[1540]: time="2025-01-29T11:17:59.811151246Z" level=info msg="TearDown network for sandbox \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\" successfully" Jan 29 11:17:59.811172 containerd[1540]: time="2025-01-29T11:17:59.811157665Z" level=info msg="StopPodSandbox for \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\" returns successfully" Jan 29 11:17:59.811322 kubelet[2781]: I0129 11:17:59.811301 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e" Jan 29 11:17:59.811683 containerd[1540]: time="2025-01-29T11:17:59.811439966Z" level=info msg="StopPodSandbox for \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\"" Jan 29 11:17:59.811683 containerd[1540]: time="2025-01-29T11:17:59.811483935Z" level=info msg="TearDown network for sandbox \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\" successfully" Jan 29 11:17:59.811683 containerd[1540]: time="2025-01-29T11:17:59.811489771Z" level=info msg="StopPodSandbox for 
\"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\" returns successfully" Jan 29 11:17:59.811683 containerd[1540]: time="2025-01-29T11:17:59.811616220Z" level=info msg="StopPodSandbox for \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\"" Jan 29 11:17:59.811683 containerd[1540]: time="2025-01-29T11:17:59.811651274Z" level=info msg="TearDown network for sandbox \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\" successfully" Jan 29 11:17:59.811683 containerd[1540]: time="2025-01-29T11:17:59.811656705Z" level=info msg="StopPodSandbox for \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\" returns successfully" Jan 29 11:17:59.812108 containerd[1540]: time="2025-01-29T11:17:59.812003109Z" level=info msg="StopPodSandbox for \"11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e\"" Jan 29 11:17:59.812324 containerd[1540]: time="2025-01-29T11:17:59.812236639Z" level=info msg="Ensure that sandbox 11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e in task-service has been cleanup successfully" Jan 29 11:17:59.812631 containerd[1540]: time="2025-01-29T11:17:59.812582745Z" level=info msg="TearDown network for sandbox \"11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e\" successfully" Jan 29 11:17:59.812631 containerd[1540]: time="2025-01-29T11:17:59.812598205Z" level=info msg="StopPodSandbox for \"11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e\" returns successfully" Jan 29 11:17:59.813012 containerd[1540]: time="2025-01-29T11:17:59.812900329Z" level=info msg="StopPodSandbox for \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\"" Jan 29 11:17:59.813012 containerd[1540]: time="2025-01-29T11:17:59.812948566Z" level=info msg="TearDown network for sandbox \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\" successfully" Jan 29 11:17:59.813012 containerd[1540]: time="2025-01-29T11:17:59.812956287Z" level=info msg="StopPodSandbox for \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\" returns successfully" Jan 29 11:17:59.813362 containerd[1540]: time="2025-01-29T11:17:59.813162661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-9spgk,Uid:dfc4733c-b2d3-4e5f-a242-756778fe7626,Namespace:calico-apiserver,Attempt:5,}" Jan 29 11:17:59.813484 containerd[1540]: time="2025-01-29T11:17:59.813473887Z" level=info msg="StopPodSandbox for \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\"" Jan 29 11:17:59.813705 containerd[1540]: time="2025-01-29T11:17:59.813591541Z" level=info msg="TearDown network for sandbox \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\" successfully" Jan 29 11:17:59.813758 containerd[1540]: time="2025-01-29T11:17:59.813739951Z" level=info msg="StopPodSandbox for \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\" returns successfully" Jan 29 11:17:59.814876 kubelet[2781]: I0129 11:17:59.814106 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc" Jan 29 11:17:59.814927 containerd[1540]: time="2025-01-29T11:17:59.814413340Z" level=info msg="StopPodSandbox for \"374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc\"" Jan 29 11:17:59.814927 containerd[1540]: time="2025-01-29T11:17:59.814525927Z" level=info msg="Ensure that sandbox 374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc in task-service has 
been cleanup successfully" Jan 29 11:17:59.814927 containerd[1540]: time="2025-01-29T11:17:59.814684560Z" level=info msg="StopPodSandbox for \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\"" Jan 29 11:17:59.814927 containerd[1540]: time="2025-01-29T11:17:59.814720488Z" level=info msg="TearDown network for sandbox \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\" successfully" Jan 29 11:17:59.814927 containerd[1540]: time="2025-01-29T11:17:59.814726276Z" level=info msg="StopPodSandbox for \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\" returns successfully" Jan 29 11:17:59.814927 containerd[1540]: time="2025-01-29T11:17:59.814851353Z" level=info msg="TearDown network for sandbox \"374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc\" successfully" Jan 29 11:17:59.814927 containerd[1540]: time="2025-01-29T11:17:59.814859697Z" level=info msg="StopPodSandbox for \"374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc\" returns successfully" Jan 29 11:17:59.815220 containerd[1540]: time="2025-01-29T11:17:59.815201751Z" level=info msg="StopPodSandbox for \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\"" Jan 29 11:17:59.815884 containerd[1540]: time="2025-01-29T11:17:59.815322082Z" level=info msg="StopPodSandbox for \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\"" Jan 29 11:17:59.815884 containerd[1540]: time="2025-01-29T11:17:59.815357354Z" level=info msg="TearDown network for sandbox \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\" successfully" Jan 29 11:17:59.815884 containerd[1540]: time="2025-01-29T11:17:59.815363418Z" level=info msg="StopPodSandbox for \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\" returns successfully" Jan 29 11:17:59.815884 containerd[1540]: time="2025-01-29T11:17:59.815420349Z" level=info msg="TearDown network for sandbox \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\" successfully" Jan 29 11:17:59.815884 containerd[1540]: time="2025-01-29T11:17:59.815428417Z" level=info msg="StopPodSandbox for \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\" returns successfully" Jan 29 11:17:59.816203 containerd[1540]: time="2025-01-29T11:17:59.816186518Z" level=info msg="StopPodSandbox for \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\"" Jan 29 11:17:59.816203 containerd[1540]: time="2025-01-29T11:17:59.816230448Z" level=info msg="TearDown network for sandbox \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\" successfully" Jan 29 11:17:59.816203 containerd[1540]: time="2025-01-29T11:17:59.816237592Z" level=info msg="StopPodSandbox for \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\" returns successfully" Jan 29 11:17:59.816203 containerd[1540]: time="2025-01-29T11:17:59.816258480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-54nng,Uid:fad30e8c-8ba2-44d9-9978-b8d6342a6efd,Namespace:kube-system,Attempt:5,}" Jan 29 11:17:59.816293 systemd[1]: run-netns-cni\x2db6e25d26\x2d0d64\x2d3968\x2dfd7c\x2d0389072253da.mount: Deactivated successfully. 
Jan 29 11:17:59.817034 containerd[1540]: time="2025-01-29T11:17:59.816927640Z" level=info msg="StopPodSandbox for \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\"" Jan 29 11:17:59.817034 containerd[1540]: time="2025-01-29T11:17:59.816977965Z" level=info msg="TearDown network for sandbox \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\" successfully" Jan 29 11:17:59.817034 containerd[1540]: time="2025-01-29T11:17:59.816984763Z" level=info msg="StopPodSandbox for \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\" returns successfully" Jan 29 11:17:59.817385 containerd[1540]: time="2025-01-29T11:17:59.817188562Z" level=info msg="StopPodSandbox for \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\"" Jan 29 11:17:59.817385 containerd[1540]: time="2025-01-29T11:17:59.817227764Z" level=info msg="TearDown network for sandbox \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\" successfully" Jan 29 11:17:59.817385 containerd[1540]: time="2025-01-29T11:17:59.817234349Z" level=info msg="StopPodSandbox for \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\" returns successfully" Jan 29 11:17:59.817636 kubelet[2781]: I0129 11:17:59.817496 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72" Jan 29 11:17:59.817752 containerd[1540]: time="2025-01-29T11:17:59.817647977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd9c8f59d-nldcx,Uid:228d172c-bed5-421d-bbfc-69e399249629,Namespace:calico-system,Attempt:5,}" Jan 29 11:17:59.818303 containerd[1540]: time="2025-01-29T11:17:59.818120556Z" level=info msg="StopPodSandbox for \"20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72\"" Jan 29 11:17:59.818303 containerd[1540]: time="2025-01-29T11:17:59.818209693Z" level=info msg="Ensure that sandbox 20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72 in task-service has been cleanup successfully" Jan 29 11:17:59.818537 containerd[1540]: time="2025-01-29T11:17:59.818527486Z" level=info msg="TearDown network for sandbox \"20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72\" successfully" Jan 29 11:17:59.818581 containerd[1540]: time="2025-01-29T11:17:59.818574288Z" level=info msg="StopPodSandbox for \"20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72\" returns successfully" Jan 29 11:17:59.818792 containerd[1540]: time="2025-01-29T11:17:59.818778433Z" level=info msg="StopPodSandbox for \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\"" Jan 29 11:17:59.818880 containerd[1540]: time="2025-01-29T11:17:59.818822805Z" level=info msg="TearDown network for sandbox \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\" successfully" Jan 29 11:17:59.818880 containerd[1540]: time="2025-01-29T11:17:59.818848023Z" level=info msg="StopPodSandbox for \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\" returns successfully" Jan 29 11:17:59.819316 containerd[1540]: time="2025-01-29T11:17:59.819050281Z" level=info msg="StopPodSandbox for \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\"" Jan 29 11:17:59.819316 containerd[1540]: time="2025-01-29T11:17:59.819088411Z" level=info msg="TearDown network for sandbox \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\" successfully" Jan 29 11:17:59.819316 containerd[1540]: time="2025-01-29T11:17:59.819094177Z" 
level=info msg="StopPodSandbox for \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\" returns successfully" Jan 29 11:17:59.823263 containerd[1540]: time="2025-01-29T11:17:59.823247386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnr5k,Uid:7d847577-abb5-4daf-916b-6334c86beb77,Namespace:calico-system,Attempt:3,}" Jan 29 11:17:59.840095 kubelet[2781]: I0129 11:17:59.840083 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7" Jan 29 11:17:59.850807 containerd[1540]: time="2025-01-29T11:17:59.850779419Z" level=info msg="StopPodSandbox for \"2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7\"" Jan 29 11:17:59.850940 containerd[1540]: time="2025-01-29T11:17:59.850928130Z" level=info msg="Ensure that sandbox 2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7 in task-service has been cleanup successfully" Jan 29 11:17:59.851540 containerd[1540]: time="2025-01-29T11:17:59.851061119Z" level=info msg="TearDown network for sandbox \"2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7\" successfully" Jan 29 11:17:59.851540 containerd[1540]: time="2025-01-29T11:17:59.851071076Z" level=info msg="StopPodSandbox for \"2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7\" returns successfully" Jan 29 11:17:59.890683 containerd[1540]: time="2025-01-29T11:17:59.890125153Z" level=info msg="StopPodSandbox for \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\"" Jan 29 11:17:59.890683 containerd[1540]: time="2025-01-29T11:17:59.890209517Z" level=info msg="TearDown network for sandbox \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\" successfully" Jan 29 11:17:59.890683 containerd[1540]: time="2025-01-29T11:17:59.890217418Z" level=info msg="StopPodSandbox for \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\" returns successfully" Jan 29 11:17:59.890683 containerd[1540]: time="2025-01-29T11:17:59.890459212Z" level=info msg="StopPodSandbox for \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\"" Jan 29 11:17:59.890683 containerd[1540]: time="2025-01-29T11:17:59.890508203Z" level=info msg="TearDown network for sandbox \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\" successfully" Jan 29 11:17:59.890683 containerd[1540]: time="2025-01-29T11:17:59.890515566Z" level=info msg="StopPodSandbox for \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\" returns successfully" Jan 29 11:17:59.890854 containerd[1540]: time="2025-01-29T11:17:59.890728042Z" level=info msg="StopPodSandbox for \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\"" Jan 29 11:17:59.890854 containerd[1540]: time="2025-01-29T11:17:59.890773469Z" level=info msg="TearDown network for sandbox \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\" successfully" Jan 29 11:17:59.890854 containerd[1540]: time="2025-01-29T11:17:59.890779434Z" level=info msg="StopPodSandbox for \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\" returns successfully" Jan 29 11:17:59.890960 containerd[1540]: time="2025-01-29T11:17:59.890932329Z" level=info msg="StopPodSandbox for \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\"" Jan 29 11:17:59.891644 containerd[1540]: time="2025-01-29T11:17:59.891003241Z" level=info msg="TearDown network for sandbox 
\"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\" successfully" Jan 29 11:17:59.891644 containerd[1540]: time="2025-01-29T11:17:59.891012602Z" level=info msg="StopPodSandbox for \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\" returns successfully" Jan 29 11:17:59.891644 containerd[1540]: time="2025-01-29T11:17:59.891402442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-fwkgz,Uid:f5dc6b2c-ba1b-4039-8947-278e73fda781,Namespace:calico-apiserver,Attempt:5,}" Jan 29 11:17:59.892867 kubelet[2781]: I0129 11:17:59.892580 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e" Jan 29 11:17:59.913534 containerd[1540]: time="2025-01-29T11:17:59.913278086Z" level=info msg="StopPodSandbox for \"399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e\"" Jan 29 11:17:59.913534 containerd[1540]: time="2025-01-29T11:17:59.913418524Z" level=info msg="Ensure that sandbox 399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e in task-service has been cleanup successfully" Jan 29 11:17:59.913768 containerd[1540]: time="2025-01-29T11:17:59.913705920Z" level=info msg="TearDown network for sandbox \"399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e\" successfully" Jan 29 11:17:59.913768 containerd[1540]: time="2025-01-29T11:17:59.913715948Z" level=info msg="StopPodSandbox for \"399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e\" returns successfully" Jan 29 11:17:59.914076 containerd[1540]: time="2025-01-29T11:17:59.913997569Z" level=info msg="StopPodSandbox for \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\"" Jan 29 11:17:59.914076 containerd[1540]: time="2025-01-29T11:17:59.914037721Z" level=info msg="TearDown network for sandbox \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\" successfully" Jan 29 11:17:59.914076 containerd[1540]: time="2025-01-29T11:17:59.914043731Z" level=info msg="StopPodSandbox for \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\" returns successfully" Jan 29 11:17:59.914270 containerd[1540]: time="2025-01-29T11:17:59.914260544Z" level=info msg="StopPodSandbox for \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\"" Jan 29 11:17:59.914430 containerd[1540]: time="2025-01-29T11:17:59.914421369Z" level=info msg="TearDown network for sandbox \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\" successfully" Jan 29 11:17:59.914565 containerd[1540]: time="2025-01-29T11:17:59.914557651Z" level=info msg="StopPodSandbox for \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\" returns successfully" Jan 29 11:17:59.917779 containerd[1540]: time="2025-01-29T11:17:59.917753747Z" level=info msg="StopPodSandbox for \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\"" Jan 29 11:17:59.917836 containerd[1540]: time="2025-01-29T11:17:59.917816969Z" level=info msg="TearDown network for sandbox \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\" successfully" Jan 29 11:17:59.917836 containerd[1540]: time="2025-01-29T11:17:59.917824331Z" level=info msg="StopPodSandbox for \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\" returns successfully" Jan 29 11:17:59.918775 containerd[1540]: time="2025-01-29T11:17:59.918455500Z" level=info msg="StopPodSandbox for \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\"" Jan 
29 11:17:59.918775 containerd[1540]: time="2025-01-29T11:17:59.918500757Z" level=info msg="TearDown network for sandbox \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\" successfully" Jan 29 11:17:59.918775 containerd[1540]: time="2025-01-29T11:17:59.918506945Z" level=info msg="StopPodSandbox for \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\" returns successfully" Jan 29 11:17:59.921044 containerd[1540]: time="2025-01-29T11:17:59.920484328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9fbsh,Uid:34b7fb00-2637-448b-9a6c-d8fe087ac46d,Namespace:kube-system,Attempt:5,}" Jan 29 11:18:00.104087 systemd[1]: run-netns-cni\x2d6c75624c\x2d0a4c\x2d2048\x2d9505\x2d82020b6721b9.mount: Deactivated successfully. Jan 29 11:18:00.105083 systemd[1]: run-netns-cni\x2defc77b8b\x2d46fa\x2d7c60\x2debcc\x2dda74bf91aa4d.mount: Deactivated successfully. Jan 29 11:18:00.105130 systemd[1]: run-netns-cni\x2dca9ade1b\x2dcdfe\x2dad97\x2d2566\x2de25e24614563.mount: Deactivated successfully. Jan 29 11:18:00.105168 systemd[1]: run-netns-cni\x2d655e8c13\x2dabb6\x2d531f\x2d007e\x2dcd6daa48aa5a.mount: Deactivated successfully. Jan 29 11:18:00.508937 systemd-networkd[1455]: cali470cae6a75c: Link UP Jan 29 11:18:00.509047 systemd-networkd[1455]: cali470cae6a75c: Gained carrier Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.157 [INFO][4483] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.184 [INFO][4483] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--567645b7d8--fwkgz-eth0 calico-apiserver-567645b7d8- calico-apiserver f5dc6b2c-ba1b-4039-8947-278e73fda781 668 0 2025-01-29 11:17:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:567645b7d8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-567645b7d8-fwkgz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali470cae6a75c [] []}} ContainerID="f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f" Namespace="calico-apiserver" Pod="calico-apiserver-567645b7d8-fwkgz" WorkloadEndpoint="localhost-k8s-calico--apiserver--567645b7d8--fwkgz-" Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.184 [INFO][4483] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f" Namespace="calico-apiserver" Pod="calico-apiserver-567645b7d8-fwkgz" WorkloadEndpoint="localhost-k8s-calico--apiserver--567645b7d8--fwkgz-eth0" Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.457 [INFO][4517] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f" HandleID="k8s-pod-network.f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f" Workload="localhost-k8s-calico--apiserver--567645b7d8--fwkgz-eth0" Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.470 [INFO][4517] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f" HandleID="k8s-pod-network.f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f" 
Workload="localhost-k8s-calico--apiserver--567645b7d8--fwkgz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c15e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-567645b7d8-fwkgz", "timestamp":"2025-01-29 11:18:00.457149774 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.471 [INFO][4517] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.471 [INFO][4517] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.471 [INFO][4517] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.472 [INFO][4517] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f" host="localhost" Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.478 [INFO][4517] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.480 [INFO][4517] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.482 [INFO][4517] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.483 [INFO][4517] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.483 [INFO][4517] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f" host="localhost" Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.484 [INFO][4517] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.485 [INFO][4517] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f" host="localhost" Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.488 [INFO][4517] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f" host="localhost" Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.488 [INFO][4517] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f" host="localhost" Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.488 [INFO][4517] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:18:00.526159 containerd[1540]: 2025-01-29 11:18:00.488 [INFO][4517] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f" HandleID="k8s-pod-network.f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f" Workload="localhost-k8s-calico--apiserver--567645b7d8--fwkgz-eth0" Jan 29 11:18:00.539179 containerd[1540]: 2025-01-29 11:18:00.490 [INFO][4483] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f" Namespace="calico-apiserver" Pod="calico-apiserver-567645b7d8-fwkgz" WorkloadEndpoint="localhost-k8s-calico--apiserver--567645b7d8--fwkgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567645b7d8--fwkgz-eth0", GenerateName:"calico-apiserver-567645b7d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5dc6b2c-ba1b-4039-8947-278e73fda781", ResourceVersion:"668", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567645b7d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-567645b7d8-fwkgz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali470cae6a75c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:18:00.539179 containerd[1540]: 2025-01-29 11:18:00.490 [INFO][4483] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f" Namespace="calico-apiserver" Pod="calico-apiserver-567645b7d8-fwkgz" WorkloadEndpoint="localhost-k8s-calico--apiserver--567645b7d8--fwkgz-eth0" Jan 29 11:18:00.539179 containerd[1540]: 2025-01-29 11:18:00.490 [INFO][4483] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali470cae6a75c ContainerID="f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f" Namespace="calico-apiserver" Pod="calico-apiserver-567645b7d8-fwkgz" WorkloadEndpoint="localhost-k8s-calico--apiserver--567645b7d8--fwkgz-eth0" Jan 29 11:18:00.539179 containerd[1540]: 2025-01-29 11:18:00.501 [INFO][4483] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f" Namespace="calico-apiserver" Pod="calico-apiserver-567645b7d8-fwkgz" WorkloadEndpoint="localhost-k8s-calico--apiserver--567645b7d8--fwkgz-eth0" Jan 29 11:18:00.539179 containerd[1540]: 2025-01-29 11:18:00.502 [INFO][4483] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f" Namespace="calico-apiserver" Pod="calico-apiserver-567645b7d8-fwkgz" WorkloadEndpoint="localhost-k8s-calico--apiserver--567645b7d8--fwkgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567645b7d8--fwkgz-eth0", GenerateName:"calico-apiserver-567645b7d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5dc6b2c-ba1b-4039-8947-278e73fda781", ResourceVersion:"668", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567645b7d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f", Pod:"calico-apiserver-567645b7d8-fwkgz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali470cae6a75c", MAC:"de:82:4c:6d:e3:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:18:00.539179 containerd[1540]: 2025-01-29 11:18:00.522 [INFO][4483] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f" Namespace="calico-apiserver" Pod="calico-apiserver-567645b7d8-fwkgz" WorkloadEndpoint="localhost-k8s-calico--apiserver--567645b7d8--fwkgz-eth0" Jan 29 11:18:00.585799 kubelet[2781]: I0129 11:18:00.560667 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zql6b" podStartSLOduration=2.793114418 podStartE2EDuration="17.521704467s" podCreationTimestamp="2025-01-29 11:17:43 +0000 UTC" firstStartedPulling="2025-01-29 11:17:44.062204305 +0000 UTC m=+13.163594950" lastFinishedPulling="2025-01-29 11:17:58.790794355 +0000 UTC m=+27.892184999" observedRunningTime="2025-01-29 11:17:59.881685255 +0000 UTC m=+28.983075907" watchObservedRunningTime="2025-01-29 11:18:00.521704467 +0000 UTC m=+29.623095115" Jan 29 11:18:00.604851 containerd[1540]: time="2025-01-29T11:18:00.604709649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:18:00.605474 containerd[1540]: time="2025-01-29T11:18:00.604755930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:18:00.605474 containerd[1540]: time="2025-01-29T11:18:00.605395159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:18:00.607487 containerd[1540]: time="2025-01-29T11:18:00.606968614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:18:00.634869 systemd-networkd[1455]: cali5199332ba0c: Link UP Jan 29 11:18:00.635010 systemd-networkd[1455]: cali5199332ba0c: Gained carrier Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.151 [INFO][4473] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.184 [INFO][4473] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wnr5k-eth0 csi-node-driver- calico-system 7d847577-abb5-4daf-916b-6334c86beb77 582 0 2025-01-29 11:17:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wnr5k eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5199332ba0c [] []}} ContainerID="a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e" Namespace="calico-system" Pod="csi-node-driver-wnr5k" WorkloadEndpoint="localhost-k8s-csi--node--driver--wnr5k-" Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.184 [INFO][4473] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e" Namespace="calico-system" Pod="csi-node-driver-wnr5k" WorkloadEndpoint="localhost-k8s-csi--node--driver--wnr5k-eth0" Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.457 [INFO][4521] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e" HandleID="k8s-pod-network.a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e" Workload="localhost-k8s-csi--node--driver--wnr5k-eth0" Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.471 [INFO][4521] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e" HandleID="k8s-pod-network.a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e" Workload="localhost-k8s-csi--node--driver--wnr5k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000f2e20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wnr5k", "timestamp":"2025-01-29 11:18:00.457147943 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.471 [INFO][4521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.489 [INFO][4521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.489 [INFO][4521] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.574 [INFO][4521] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e" host="localhost" Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.582 [INFO][4521] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.596 [INFO][4521] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.598 [INFO][4521] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.600 [INFO][4521] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.600 [INFO][4521] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e" host="localhost" Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.603 [INFO][4521] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.619 [INFO][4521] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e" host="localhost" Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.627 [INFO][4521] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e" host="localhost" Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.627 [INFO][4521] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e" host="localhost" Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.627 [INFO][4521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
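[Editor's note] Worth noting while the CNI work proceeds: the kubelet pod_startup_latency_tracker entry above (for calico-node-zql6b) is self-consistent if the SLO duration is read as the end-to-end start time minus the image-pull window, all taken from the monotonic m=+ offsets it prints; that reading is an inference from the numbers in the entry, not something the log states. A quick check of the arithmetic, with the values copied verbatim:

    package main

    import "fmt"

    func main() {
    	// Monotonic offsets (m=+...) from the pod_startup_latency_tracker entry.
    	firstStartedPulling := 13.163594950
    	lastFinishedPulling := 27.892184999
    	podStartE2E := 17.521704467 // watchObservedRunningTime - podCreationTimestamp

    	pull := lastFinishedPulling - firstStartedPulling
    	fmt.Printf("image pulling: %.9fs\n", pull)             // 14.728590049s
    	fmt.Printf("SLO duration:  %.9fs\n", podStartE2E-pull) // 2.793114418s, as logged
    }

In other words, of the 17.5 s from pod creation to running, roughly 14.7 s was spent pulling the calico/node image, which matches the PullImage activity earlier in this log.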
Jan 29 11:18:00.644484 containerd[1540]: 2025-01-29 11:18:00.628 [INFO][4521] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e" HandleID="k8s-pod-network.a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e" Workload="localhost-k8s-csi--node--driver--wnr5k-eth0" Jan 29 11:18:00.645059 containerd[1540]: 2025-01-29 11:18:00.633 [INFO][4473] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e" Namespace="calico-system" Pod="csi-node-driver-wnr5k" WorkloadEndpoint="localhost-k8s-csi--node--driver--wnr5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wnr5k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7d847577-abb5-4daf-916b-6334c86beb77", ResourceVersion:"582", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wnr5k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5199332ba0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:18:00.645059 containerd[1540]: 2025-01-29 11:18:00.633 [INFO][4473] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e" Namespace="calico-system" Pod="csi-node-driver-wnr5k" WorkloadEndpoint="localhost-k8s-csi--node--driver--wnr5k-eth0" Jan 29 11:18:00.645059 containerd[1540]: 2025-01-29 11:18:00.633 [INFO][4473] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5199332ba0c ContainerID="a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e" Namespace="calico-system" Pod="csi-node-driver-wnr5k" WorkloadEndpoint="localhost-k8s-csi--node--driver--wnr5k-eth0" Jan 29 11:18:00.645059 containerd[1540]: 2025-01-29 11:18:00.635 [INFO][4473] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e" Namespace="calico-system" Pod="csi-node-driver-wnr5k" WorkloadEndpoint="localhost-k8s-csi--node--driver--wnr5k-eth0" Jan 29 11:18:00.645059 containerd[1540]: 2025-01-29 11:18:00.635 [INFO][4473] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e" Namespace="calico-system" Pod="csi-node-driver-wnr5k" WorkloadEndpoint="localhost-k8s-csi--node--driver--wnr5k-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wnr5k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7d847577-abb5-4daf-916b-6334c86beb77", ResourceVersion:"582", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e", Pod:"csi-node-driver-wnr5k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5199332ba0c", MAC:"b2:5b:d2:4a:c5:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:18:00.645059 containerd[1540]: 2025-01-29 11:18:00.642 [INFO][4473] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e" Namespace="calico-system" Pod="csi-node-driver-wnr5k" WorkloadEndpoint="localhost-k8s-csi--node--driver--wnr5k-eth0" Jan 29 11:18:00.664140 systemd[1]: Started cri-containerd-f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f.scope - libcontainer container f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f. Jan 29 11:18:00.689899 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:18:00.695676 containerd[1540]: time="2025-01-29T11:18:00.695244125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:18:00.695676 containerd[1540]: time="2025-01-29T11:18:00.695297658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:18:00.695676 containerd[1540]: time="2025-01-29T11:18:00.695314350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:18:00.696750 containerd[1540]: time="2025-01-29T11:18:00.696153408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:18:00.720239 systemd-networkd[1455]: cali84dc4a2bd1a: Link UP Jan 29 11:18:00.720509 systemd[1]: Started cri-containerd-a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e.scope - libcontainer container a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e. 
Jan 29 11:18:00.721251 systemd-networkd[1455]: cali84dc4a2bd1a: Gained carrier Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.136 [INFO][4440] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.184 [INFO][4440] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--567645b7d8--9spgk-eth0 calico-apiserver-567645b7d8- calico-apiserver dfc4733c-b2d3-4e5f-a242-756778fe7626 669 0 2025-01-29 11:17:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:567645b7d8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-567645b7d8-9spgk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali84dc4a2bd1a [] []}} ContainerID="139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725" Namespace="calico-apiserver" Pod="calico-apiserver-567645b7d8-9spgk" WorkloadEndpoint="localhost-k8s-calico--apiserver--567645b7d8--9spgk-" Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.184 [INFO][4440] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725" Namespace="calico-apiserver" Pod="calico-apiserver-567645b7d8-9spgk" WorkloadEndpoint="localhost-k8s-calico--apiserver--567645b7d8--9spgk-eth0" Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.457 [INFO][4518] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725" HandleID="k8s-pod-network.139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725" Workload="localhost-k8s-calico--apiserver--567645b7d8--9spgk-eth0" Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.471 [INFO][4518] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725" HandleID="k8s-pod-network.139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725" Workload="localhost-k8s-calico--apiserver--567645b7d8--9spgk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b9c40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-567645b7d8-9spgk", "timestamp":"2025-01-29 11:18:00.457110595 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.471 [INFO][4518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.628 [INFO][4518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.628 [INFO][4518] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.675 [INFO][4518] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725" host="localhost" Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.684 [INFO][4518] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.690 [INFO][4518] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.694 [INFO][4518] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.697 [INFO][4518] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.697 [INFO][4518] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725" host="localhost" Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.700 [INFO][4518] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725 Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.704 [INFO][4518] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725" host="localhost" Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.709 [INFO][4518] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725" host="localhost" Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.709 [INFO][4518] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725" host="localhost" Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.710 [INFO][4518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:18:00.735219 containerd[1540]: 2025-01-29 11:18:00.710 [INFO][4518] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725" HandleID="k8s-pod-network.139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725" Workload="localhost-k8s-calico--apiserver--567645b7d8--9spgk-eth0" Jan 29 11:18:00.736568 containerd[1540]: 2025-01-29 11:18:00.715 [INFO][4440] cni-plugin/k8s.go 386: Populated endpoint ContainerID="139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725" Namespace="calico-apiserver" Pod="calico-apiserver-567645b7d8-9spgk" WorkloadEndpoint="localhost-k8s-calico--apiserver--567645b7d8--9spgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567645b7d8--9spgk-eth0", GenerateName:"calico-apiserver-567645b7d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"dfc4733c-b2d3-4e5f-a242-756778fe7626", ResourceVersion:"669", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567645b7d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-567645b7d8-9spgk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali84dc4a2bd1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:18:00.736568 containerd[1540]: 2025-01-29 11:18:00.715 [INFO][4440] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725" Namespace="calico-apiserver" Pod="calico-apiserver-567645b7d8-9spgk" WorkloadEndpoint="localhost-k8s-calico--apiserver--567645b7d8--9spgk-eth0" Jan 29 11:18:00.736568 containerd[1540]: 2025-01-29 11:18:00.715 [INFO][4440] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84dc4a2bd1a ContainerID="139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725" Namespace="calico-apiserver" Pod="calico-apiserver-567645b7d8-9spgk" WorkloadEndpoint="localhost-k8s-calico--apiserver--567645b7d8--9spgk-eth0" Jan 29 11:18:00.736568 containerd[1540]: 2025-01-29 11:18:00.721 [INFO][4440] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725" Namespace="calico-apiserver" Pod="calico-apiserver-567645b7d8-9spgk" WorkloadEndpoint="localhost-k8s-calico--apiserver--567645b7d8--9spgk-eth0" Jan 29 11:18:00.736568 containerd[1540]: 2025-01-29 11:18:00.722 [INFO][4440] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725" Namespace="calico-apiserver" Pod="calico-apiserver-567645b7d8-9spgk" WorkloadEndpoint="localhost-k8s-calico--apiserver--567645b7d8--9spgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567645b7d8--9spgk-eth0", GenerateName:"calico-apiserver-567645b7d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"dfc4733c-b2d3-4e5f-a242-756778fe7626", ResourceVersion:"669", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567645b7d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725", Pod:"calico-apiserver-567645b7d8-9spgk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali84dc4a2bd1a", MAC:"7a:5c:7d:2c:27:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:18:00.736568 containerd[1540]: 2025-01-29 11:18:00.733 [INFO][4440] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725" Namespace="calico-apiserver" Pod="calico-apiserver-567645b7d8-9spgk" WorkloadEndpoint="localhost-k8s-calico--apiserver--567645b7d8--9spgk-eth0" Jan 29 11:18:00.752562 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:18:00.773611 containerd[1540]: time="2025-01-29T11:18:00.773201406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-fwkgz,Uid:f5dc6b2c-ba1b-4039-8947-278e73fda781,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f\"" Jan 29 11:18:00.776081 containerd[1540]: time="2025-01-29T11:18:00.776008929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:18:00.778555 containerd[1540]: time="2025-01-29T11:18:00.778528284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnr5k,Uid:7d847577-abb5-4daf-916b-6334c86beb77,Namespace:calico-system,Attempt:3,} returns sandbox id \"a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e\"" Jan 29 11:18:00.788671 containerd[1540]: time="2025-01-29T11:18:00.787424371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:18:00.788671 containerd[1540]: time="2025-01-29T11:18:00.787560894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:18:00.788671 containerd[1540]: time="2025-01-29T11:18:00.787908039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:18:00.788671 containerd[1540]: time="2025-01-29T11:18:00.788095190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:18:00.808806 systemd[1]: Started cri-containerd-139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725.scope - libcontainer container 139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725. Jan 29 11:18:00.821956 systemd-networkd[1455]: calie74f6320d44: Link UP Jan 29 11:18:00.823190 systemd-networkd[1455]: calie74f6320d44: Gained carrier Jan 29 11:18:00.828917 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.136 [INFO][4452] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.184 [INFO][4452] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--54nng-eth0 coredns-668d6bf9bc- kube-system fad30e8c-8ba2-44d9-9978-b8d6342a6efd 660 0 2025-01-29 11:17:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-54nng eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie74f6320d44 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e" Namespace="kube-system" Pod="coredns-668d6bf9bc-54nng" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--54nng-" Jan 29 11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.184 [INFO][4452] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e" Namespace="kube-system" Pod="coredns-668d6bf9bc-54nng" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--54nng-eth0" Jan 29 11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.457 [INFO][4520] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e" HandleID="k8s-pod-network.d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e" Workload="localhost-k8s-coredns--668d6bf9bc--54nng-eth0" Jan 29 11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.470 [INFO][4520] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e" HandleID="k8s-pod-network.d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e" Workload="localhost-k8s-coredns--668d6bf9bc--54nng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003051f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-54nng", "timestamp":"2025-01-29 11:18:00.457098717 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 
11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.470 [INFO][4520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.710 [INFO][4520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.710 [INFO][4520] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.778 [INFO][4520] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e" host="localhost" Jan 29 11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.785 [INFO][4520] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.795 [INFO][4520] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.799 [INFO][4520] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.802 [INFO][4520] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.802 [INFO][4520] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e" host="localhost" Jan 29 11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.805 [INFO][4520] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e Jan 29 11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.813 [INFO][4520] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e" host="localhost" Jan 29 11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.816 [INFO][4520] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e" host="localhost" Jan 29 11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.816 [INFO][4520] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e" host="localhost" Jan 29 11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.816 [INFO][4520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
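[Editor's note] The WorkloadEndpoint names repeated throughout these CNI lines (localhost-k8s-coredns--668d6bf9bc--54nng-eth0, localhost-k8s-calico--apiserver--567645b7d8--fwkgz-eth0, localhost-k8s-csi--node--driver--wnr5k-eth0) follow one visible convention: node name, the k8s orchestrator tag, the pod name with every "-" doubled so it cannot be mistaken for the field separator, then the interface, all joined by single dashes. The helper below merely reproduces the names seen in this log; it is an observation from these entries, not a statement of Calico's full escaping rules:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // wepName builds a workload endpoint name the way the ones in this log look:
    // "-" inside the node or pod name is doubled, fields are joined with "-".
    func wepName(node, pod, iface string) string {
    	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
    	return esc(node) + "-k8s-" + esc(pod) + "-" + iface
    }

    func main() {
    	fmt.Println(wepName("localhost", "coredns-668d6bf9bc-54nng", "eth0"))
    	// localhost-k8s-coredns--668d6bf9bc--54nng-eth0
    	fmt.Println(wepName("localhost", "calico-apiserver-567645b7d8-fwkgz", "eth0"))
    	// localhost-k8s-calico--apiserver--567645b7d8--fwkgz-eth0
    }

That naming is why the Workload= and WorkloadEndpoint= fields in the IPAM and k8s.go lines above can be matched back to specific pods at a glance.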
Jan 29 11:18:00.837976 containerd[1540]: 2025-01-29 11:18:00.816 [INFO][4520] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e" HandleID="k8s-pod-network.d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e" Workload="localhost-k8s-coredns--668d6bf9bc--54nng-eth0" Jan 29 11:18:00.838991 containerd[1540]: 2025-01-29 11:18:00.819 [INFO][4452] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e" Namespace="kube-system" Pod="coredns-668d6bf9bc-54nng" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--54nng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--54nng-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fad30e8c-8ba2-44d9-9978-b8d6342a6efd", ResourceVersion:"660", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 17, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-54nng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie74f6320d44", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:18:00.838991 containerd[1540]: 2025-01-29 11:18:00.819 [INFO][4452] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e" Namespace="kube-system" Pod="coredns-668d6bf9bc-54nng" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--54nng-eth0" Jan 29 11:18:00.838991 containerd[1540]: 2025-01-29 11:18:00.819 [INFO][4452] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie74f6320d44 ContainerID="d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e" Namespace="kube-system" Pod="coredns-668d6bf9bc-54nng" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--54nng-eth0" Jan 29 11:18:00.838991 containerd[1540]: 2025-01-29 11:18:00.822 [INFO][4452] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e" Namespace="kube-system" Pod="coredns-668d6bf9bc-54nng" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--54nng-eth0" Jan 29 11:18:00.838991 containerd[1540]: 2025-01-29 11:18:00.824 
[INFO][4452] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e" Namespace="kube-system" Pod="coredns-668d6bf9bc-54nng" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--54nng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--54nng-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fad30e8c-8ba2-44d9-9978-b8d6342a6efd", ResourceVersion:"660", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 17, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e", Pod:"coredns-668d6bf9bc-54nng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie74f6320d44", MAC:"12:bc:0c:66:d9:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:18:00.838991 containerd[1540]: 2025-01-29 11:18:00.834 [INFO][4452] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e" Namespace="kube-system" Pod="coredns-668d6bf9bc-54nng" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--54nng-eth0" Jan 29 11:18:00.855525 containerd[1540]: time="2025-01-29T11:18:00.855342824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:18:00.855525 containerd[1540]: time="2025-01-29T11:18:00.855394620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:18:00.855525 containerd[1540]: time="2025-01-29T11:18:00.855406716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:18:00.855525 containerd[1540]: time="2025-01-29T11:18:00.855474405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:18:00.864998 containerd[1540]: time="2025-01-29T11:18:00.864934815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567645b7d8-9spgk,Uid:dfc4733c-b2d3-4e5f-a242-756778fe7626,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725\"" Jan 29 11:18:00.870887 systemd[1]: Started cri-containerd-d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e.scope - libcontainer container d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e. Jan 29 11:18:00.882892 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:18:00.914325 systemd-networkd[1455]: calice4dbebc507: Link UP Jan 29 11:18:00.914777 systemd-networkd[1455]: calice4dbebc507: Gained carrier Jan 29 11:18:00.917162 containerd[1540]: time="2025-01-29T11:18:00.916888167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-54nng,Uid:fad30e8c-8ba2-44d9-9978-b8d6342a6efd,Namespace:kube-system,Attempt:5,} returns sandbox id \"d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e\"" Jan 29 11:18:00.923093 containerd[1540]: time="2025-01-29T11:18:00.922993728Z" level=info msg="CreateContainer within sandbox \"d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.150 [INFO][4462] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.184 [INFO][4462] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--bd9c8f59d--nldcx-eth0 calico-kube-controllers-bd9c8f59d- calico-system 228d172c-bed5-421d-bbfc-69e399249629 663 0 2025-01-29 11:17:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:bd9c8f59d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-bd9c8f59d-nldcx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calice4dbebc507 [] []}} ContainerID="673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4" Namespace="calico-system" Pod="calico-kube-controllers-bd9c8f59d-nldcx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bd9c8f59d--nldcx-" Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.184 [INFO][4462] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4" Namespace="calico-system" Pod="calico-kube-controllers-bd9c8f59d-nldcx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bd9c8f59d--nldcx-eth0" Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.457 [INFO][4519] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4" HandleID="k8s-pod-network.673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4" Workload="localhost-k8s-calico--kube--controllers--bd9c8f59d--nldcx-eth0" Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.471 [INFO][4519] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4" HandleID="k8s-pod-network.673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4" Workload="localhost-k8s-calico--kube--controllers--bd9c8f59d--nldcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039edc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-bd9c8f59d-nldcx", "timestamp":"2025-01-29 11:18:00.45703935 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.471 [INFO][4519] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.816 [INFO][4519] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.817 [INFO][4519] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.877 [INFO][4519] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4" host="localhost" Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.885 [INFO][4519] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.892 [INFO][4519] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.893 [INFO][4519] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.895 [INFO][4519] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.895 [INFO][4519] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4" host="localhost" Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.896 [INFO][4519] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4 Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.901 [INFO][4519] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4" host="localhost" Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.906 [INFO][4519] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4" host="localhost" Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.906 [INFO][4519] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4" host="localhost" Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.906 [INFO][4519] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:18:00.929650 containerd[1540]: 2025-01-29 11:18:00.906 [INFO][4519] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4" HandleID="k8s-pod-network.673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4" Workload="localhost-k8s-calico--kube--controllers--bd9c8f59d--nldcx-eth0" Jan 29 11:18:00.931609 containerd[1540]: 2025-01-29 11:18:00.908 [INFO][4462] cni-plugin/k8s.go 386: Populated endpoint ContainerID="673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4" Namespace="calico-system" Pod="calico-kube-controllers-bd9c8f59d-nldcx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bd9c8f59d--nldcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bd9c8f59d--nldcx-eth0", GenerateName:"calico-kube-controllers-bd9c8f59d-", Namespace:"calico-system", SelfLink:"", UID:"228d172c-bed5-421d-bbfc-69e399249629", ResourceVersion:"663", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bd9c8f59d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-bd9c8f59d-nldcx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice4dbebc507", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:18:00.931609 containerd[1540]: 2025-01-29 11:18:00.908 [INFO][4462] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4" Namespace="calico-system" Pod="calico-kube-controllers-bd9c8f59d-nldcx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bd9c8f59d--nldcx-eth0" Jan 29 11:18:00.931609 containerd[1540]: 2025-01-29 11:18:00.908 [INFO][4462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice4dbebc507 ContainerID="673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4" Namespace="calico-system" Pod="calico-kube-controllers-bd9c8f59d-nldcx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bd9c8f59d--nldcx-eth0" Jan 29 11:18:00.931609 containerd[1540]: 2025-01-29 11:18:00.915 [INFO][4462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4" Namespace="calico-system" Pod="calico-kube-controllers-bd9c8f59d-nldcx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bd9c8f59d--nldcx-eth0" Jan 29 11:18:00.931609 containerd[1540]: 2025-01-29 11:18:00.917 [INFO][4462] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4" Namespace="calico-system" Pod="calico-kube-controllers-bd9c8f59d-nldcx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bd9c8f59d--nldcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bd9c8f59d--nldcx-eth0", GenerateName:"calico-kube-controllers-bd9c8f59d-", Namespace:"calico-system", SelfLink:"", UID:"228d172c-bed5-421d-bbfc-69e399249629", ResourceVersion:"663", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bd9c8f59d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4", Pod:"calico-kube-controllers-bd9c8f59d-nldcx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice4dbebc507", MAC:"c2:28:d5:49:37:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:18:00.931609 containerd[1540]: 2025-01-29 11:18:00.928 [INFO][4462] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4" Namespace="calico-system" Pod="calico-kube-controllers-bd9c8f59d-nldcx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bd9c8f59d--nldcx-eth0" Jan 29 11:18:00.949696 containerd[1540]: time="2025-01-29T11:18:00.948876970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:18:00.949696 containerd[1540]: time="2025-01-29T11:18:00.948959769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:18:00.949696 containerd[1540]: time="2025-01-29T11:18:00.948979458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:18:00.949696 containerd[1540]: time="2025-01-29T11:18:00.949070750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:18:00.965009 systemd[1]: Started cri-containerd-673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4.scope - libcontainer container 673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4. 
Jan 29 11:18:00.965448 containerd[1540]: time="2025-01-29T11:18:00.965417590Z" level=info msg="CreateContainer within sandbox \"d4adc95ff1cfb4478c030796b386d8b98926cdedb4945f0642f8b45b2fd3642e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"983ae23c8b441fc8b5ee5f00ca4ff6c4b10af636934c058dea20788b15f56293\"" Jan 29 11:18:00.966701 containerd[1540]: time="2025-01-29T11:18:00.966574249Z" level=info msg="StartContainer for \"983ae23c8b441fc8b5ee5f00ca4ff6c4b10af636934c058dea20788b15f56293\"" Jan 29 11:18:00.981416 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:18:01.000925 systemd[1]: Started cri-containerd-983ae23c8b441fc8b5ee5f00ca4ff6c4b10af636934c058dea20788b15f56293.scope - libcontainer container 983ae23c8b441fc8b5ee5f00ca4ff6c4b10af636934c058dea20788b15f56293. Jan 29 11:18:01.015671 containerd[1540]: time="2025-01-29T11:18:01.015643797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd9c8f59d-nldcx,Uid:228d172c-bed5-421d-bbfc-69e399249629,Namespace:calico-system,Attempt:5,} returns sandbox id \"673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4\"" Jan 29 11:18:01.024912 systemd-networkd[1455]: calia93b10f1445: Link UP Jan 29 11:18:01.025942 systemd-networkd[1455]: calia93b10f1445: Gained carrier Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:00.160 [INFO][4493] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:00.184 [INFO][4493] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--9fbsh-eth0 coredns-668d6bf9bc- kube-system 34b7fb00-2637-448b-9a6c-d8fe087ac46d 666 0 2025-01-29 11:17:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-9fbsh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia93b10f1445 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fbsh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9fbsh-" Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:00.184 [INFO][4493] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fbsh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9fbsh-eth0" Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:00.457 [INFO][4516] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740" HandleID="k8s-pod-network.cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740" Workload="localhost-k8s-coredns--668d6bf9bc--9fbsh-eth0" Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:00.472 [INFO][4516] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740" HandleID="k8s-pod-network.cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740" Workload="localhost-k8s-coredns--668d6bf9bc--9fbsh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030f930), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-9fbsh", "timestamp":"2025-01-29 11:18:00.457052574 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:00.472 [INFO][4516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:00.906 [INFO][4516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:00.906 [INFO][4516] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:00.977 [INFO][4516] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740" host="localhost" Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:00.986 [INFO][4516] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:00.996 [INFO][4516] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:00.998 [INFO][4516] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:00.999 [INFO][4516] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:00.999 [INFO][4516] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740" host="localhost" Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:01.000 [INFO][4516] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740 Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:01.008 [INFO][4516] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740" host="localhost" Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:01.021 [INFO][4516] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740" host="localhost" Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:01.021 [INFO][4516] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740" host="localhost" Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:01.021 [INFO][4516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:18:01.047868 containerd[1540]: 2025-01-29 11:18:01.021 [INFO][4516] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740" HandleID="k8s-pod-network.cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740" Workload="localhost-k8s-coredns--668d6bf9bc--9fbsh-eth0" Jan 29 11:18:01.048452 containerd[1540]: 2025-01-29 11:18:01.023 [INFO][4493] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fbsh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9fbsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9fbsh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"34b7fb00-2637-448b-9a6c-d8fe087ac46d", ResourceVersion:"666", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 17, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-9fbsh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia93b10f1445", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:18:01.048452 containerd[1540]: 2025-01-29 11:18:01.023 [INFO][4493] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fbsh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9fbsh-eth0" Jan 29 11:18:01.048452 containerd[1540]: 2025-01-29 11:18:01.023 [INFO][4493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia93b10f1445 ContainerID="cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fbsh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9fbsh-eth0" Jan 29 11:18:01.048452 containerd[1540]: 2025-01-29 11:18:01.026 [INFO][4493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fbsh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9fbsh-eth0" Jan 29 11:18:01.048452 containerd[1540]: 2025-01-29 11:18:01.027 
[INFO][4493] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fbsh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9fbsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9fbsh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"34b7fb00-2637-448b-9a6c-d8fe087ac46d", ResourceVersion:"666", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 17, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740", Pod:"coredns-668d6bf9bc-9fbsh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia93b10f1445", MAC:"52:25:e8:30:87:ee", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:18:01.048452 containerd[1540]: 2025-01-29 11:18:01.046 [INFO][4493] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fbsh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9fbsh-eth0" Jan 29 11:18:01.075298 containerd[1540]: time="2025-01-29T11:18:01.075182301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:18:01.075298 containerd[1540]: time="2025-01-29T11:18:01.075213960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:18:01.075298 containerd[1540]: time="2025-01-29T11:18:01.075224195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:18:01.075771 containerd[1540]: time="2025-01-29T11:18:01.075630283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:18:01.091554 systemd[1]: Started cri-containerd-cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740.scope - libcontainer container cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740. 
Jan 29 11:18:01.107018 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:18:01.133477 containerd[1540]: time="2025-01-29T11:18:01.133416740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9fbsh,Uid:34b7fb00-2637-448b-9a6c-d8fe087ac46d,Namespace:kube-system,Attempt:5,} returns sandbox id \"cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740\"" Jan 29 11:18:01.133646 containerd[1540]: time="2025-01-29T11:18:01.133607156Z" level=info msg="StartContainer for \"983ae23c8b441fc8b5ee5f00ca4ff6c4b10af636934c058dea20788b15f56293\" returns successfully" Jan 29 11:18:01.228390 containerd[1540]: time="2025-01-29T11:18:01.228360713Z" level=info msg="CreateContainer within sandbox \"cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:18:01.263728 containerd[1540]: time="2025-01-29T11:18:01.263667412Z" level=info msg="CreateContainer within sandbox \"cdee111318911c61e5a1dcd3e963c456cc019ba1e727017fc52f775ab2be7740\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eada989b6b7fc96446b4a61b9855ff38181570ac292bf090426f05301b5657f9\"" Jan 29 11:18:01.264487 containerd[1540]: time="2025-01-29T11:18:01.264472358Z" level=info msg="StartContainer for \"eada989b6b7fc96446b4a61b9855ff38181570ac292bf090426f05301b5657f9\"" Jan 29 11:18:01.289455 systemd[1]: Started cri-containerd-eada989b6b7fc96446b4a61b9855ff38181570ac292bf090426f05301b5657f9.scope - libcontainer container eada989b6b7fc96446b4a61b9855ff38181570ac292bf090426f05301b5657f9. Jan 29 11:18:01.308970 containerd[1540]: time="2025-01-29T11:18:01.308912048Z" level=info msg="StartContainer for \"eada989b6b7fc96446b4a61b9855ff38181570ac292bf090426f05301b5657f9\" returns successfully" Jan 29 11:18:01.703102 systemd-networkd[1455]: cali5199332ba0c: Gained IPv6LL Jan 29 11:18:01.828568 systemd-networkd[1455]: cali470cae6a75c: Gained IPv6LL Jan 29 11:18:01.954179 kubelet[2781]: I0129 11:18:01.954098 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9fbsh" podStartSLOduration=24.954087306 podStartE2EDuration="24.954087306s" podCreationTimestamp="2025-01-29 11:17:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:18:01.953333836 +0000 UTC m=+31.054724483" watchObservedRunningTime="2025-01-29 11:18:01.954087306 +0000 UTC m=+31.055477951" Jan 29 11:18:01.965867 kubelet[2781]: I0129 11:18:01.965829 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-54nng" podStartSLOduration=24.965819042 podStartE2EDuration="24.965819042s" podCreationTimestamp="2025-01-29 11:17:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:18:01.965325333 +0000 UTC m=+31.066715987" watchObservedRunningTime="2025-01-29 11:18:01.965819042 +0000 UTC m=+31.067209687" Jan 29 11:18:02.020628 systemd-networkd[1455]: cali84dc4a2bd1a: Gained IPv6LL Jan 29 11:18:02.084710 systemd-networkd[1455]: calia93b10f1445: Gained IPv6LL Jan 29 11:18:02.468657 systemd-networkd[1455]: calice4dbebc507: Gained IPv6LL Jan 29 11:18:02.852509 systemd-networkd[1455]: calie74f6320d44: Gained IPv6LL Jan 29 11:18:03.517936 containerd[1540]: time="2025-01-29T11:18:03.517441616Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:18:03.517936 containerd[1540]: time="2025-01-29T11:18:03.517826972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 11:18:03.517936 containerd[1540]: time="2025-01-29T11:18:03.517906297Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:18:03.519058 containerd[1540]: time="2025-01-29T11:18:03.519044277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:18:03.519463 containerd[1540]: time="2025-01-29T11:18:03.519447386Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.74341518s" Jan 29 11:18:03.519495 containerd[1540]: time="2025-01-29T11:18:03.519464421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:18:03.520536 containerd[1540]: time="2025-01-29T11:18:03.520516792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:18:03.521043 containerd[1540]: time="2025-01-29T11:18:03.521031845Z" level=info msg="CreateContainer within sandbox \"f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:18:03.539222 containerd[1540]: time="2025-01-29T11:18:03.539183923Z" level=info msg="CreateContainer within sandbox \"f680eda3ea0a18ff93607e5b043f2b07c48eeac93fc646cb4c02a003b455d97f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b5cdd0f2289a733a07b27f81aca038af9b281e986213069a143918d3e44fa568\"" Jan 29 11:18:03.539652 containerd[1540]: time="2025-01-29T11:18:03.539515778Z" level=info msg="StartContainer for \"b5cdd0f2289a733a07b27f81aca038af9b281e986213069a143918d3e44fa568\"" Jan 29 11:18:03.564528 systemd[1]: Started cri-containerd-b5cdd0f2289a733a07b27f81aca038af9b281e986213069a143918d3e44fa568.scope - libcontainer container b5cdd0f2289a733a07b27f81aca038af9b281e986213069a143918d3e44fa568. 
Jan 29 11:18:03.598328 containerd[1540]: time="2025-01-29T11:18:03.598300173Z" level=info msg="StartContainer for \"b5cdd0f2289a733a07b27f81aca038af9b281e986213069a143918d3e44fa568\" returns successfully" Jan 29 11:18:03.951632 kubelet[2781]: I0129 11:18:03.951361 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-567645b7d8-fwkgz" podStartSLOduration=18.206646157 podStartE2EDuration="20.951349714s" podCreationTimestamp="2025-01-29 11:17:43 +0000 UTC" firstStartedPulling="2025-01-29 11:18:00.775454172 +0000 UTC m=+29.876844816" lastFinishedPulling="2025-01-29 11:18:03.520157727 +0000 UTC m=+32.621548373" observedRunningTime="2025-01-29 11:18:03.950224752 +0000 UTC m=+33.051615404" watchObservedRunningTime="2025-01-29 11:18:03.951349714 +0000 UTC m=+33.052740362" Jan 29 11:18:05.069709 kubelet[2781]: I0129 11:18:05.069652 2781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:18:05.258819 kubelet[2781]: I0129 11:18:05.258748 2781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:18:05.577215 containerd[1540]: time="2025-01-29T11:18:05.577175122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:18:05.584576 containerd[1540]: time="2025-01-29T11:18:05.584543043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 11:18:05.593913 containerd[1540]: time="2025-01-29T11:18:05.593865156Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:18:05.619482 containerd[1540]: time="2025-01-29T11:18:05.618258389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:18:05.619482 containerd[1540]: time="2025-01-29T11:18:05.618766590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.098234528s" Jan 29 11:18:05.619482 containerd[1540]: time="2025-01-29T11:18:05.618782722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 11:18:05.620782 containerd[1540]: time="2025-01-29T11:18:05.620764819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:18:05.634680 containerd[1540]: time="2025-01-29T11:18:05.634658056Z" level=info msg="CreateContainer within sandbox \"a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:18:05.740884 containerd[1540]: time="2025-01-29T11:18:05.740852670Z" level=info msg="CreateContainer within sandbox \"a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0c9a5653ca460dfe9a56416929ee528ac857a559d893da00d0e3bf1e5faa8d7e\"" Jan 29 11:18:05.744464 containerd[1540]: 
time="2025-01-29T11:18:05.741428112Z" level=info msg="StartContainer for \"0c9a5653ca460dfe9a56416929ee528ac857a559d893da00d0e3bf1e5faa8d7e\"" Jan 29 11:18:05.772463 systemd[1]: Started cri-containerd-0c9a5653ca460dfe9a56416929ee528ac857a559d893da00d0e3bf1e5faa8d7e.scope - libcontainer container 0c9a5653ca460dfe9a56416929ee528ac857a559d893da00d0e3bf1e5faa8d7e. Jan 29 11:18:05.796534 containerd[1540]: time="2025-01-29T11:18:05.796483188Z" level=info msg="StartContainer for \"0c9a5653ca460dfe9a56416929ee528ac857a559d893da00d0e3bf1e5faa8d7e\" returns successfully" Jan 29 11:18:06.006244 containerd[1540]: time="2025-01-29T11:18:06.006199898Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:18:06.006699 containerd[1540]: time="2025-01-29T11:18:06.006669655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 11:18:06.008931 containerd[1540]: time="2025-01-29T11:18:06.008870022Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 388.0018ms" Jan 29 11:18:06.008931 containerd[1540]: time="2025-01-29T11:18:06.008896465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:18:06.009964 containerd[1540]: time="2025-01-29T11:18:06.009805874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 11:18:06.011050 containerd[1540]: time="2025-01-29T11:18:06.010952144Z" level=info msg="CreateContainer within sandbox \"139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:18:06.034111 containerd[1540]: time="2025-01-29T11:18:06.034081766Z" level=info msg="CreateContainer within sandbox \"139768de746c8cf5771c0f9e2b88e61eaf38a7afef6f6236a5e5ed947fb82725\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"50f6c3808fd3496ed5c213f424a277bcd0cf15603ca1b34b75470395f39743a0\"" Jan 29 11:18:06.035262 containerd[1540]: time="2025-01-29T11:18:06.034928557Z" level=info msg="StartContainer for \"50f6c3808fd3496ed5c213f424a277bcd0cf15603ca1b34b75470395f39743a0\"" Jan 29 11:18:06.057636 systemd[1]: Started cri-containerd-50f6c3808fd3496ed5c213f424a277bcd0cf15603ca1b34b75470395f39743a0.scope - libcontainer container 50f6c3808fd3496ed5c213f424a277bcd0cf15603ca1b34b75470395f39743a0. 
Jan 29 11:18:06.095757 containerd[1540]: time="2025-01-29T11:18:06.095730631Z" level=info msg="StartContainer for \"50f6c3808fd3496ed5c213f424a277bcd0cf15603ca1b34b75470395f39743a0\" returns successfully" Jan 29 11:18:06.191731 kernel: bpftool[5312]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:18:06.510580 systemd-networkd[1455]: vxlan.calico: Link UP Jan 29 11:18:06.510585 systemd-networkd[1455]: vxlan.calico: Gained carrier Jan 29 11:18:07.082844 kubelet[2781]: I0129 11:18:07.082154 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-567645b7d8-9spgk" podStartSLOduration=18.939994643 podStartE2EDuration="24.082141695s" podCreationTimestamp="2025-01-29 11:17:43 +0000 UTC" firstStartedPulling="2025-01-29 11:18:00.867518051 +0000 UTC m=+29.968908692" lastFinishedPulling="2025-01-29 11:18:06.009665101 +0000 UTC m=+35.111055744" observedRunningTime="2025-01-29 11:18:07.082056105 +0000 UTC m=+36.183446757" watchObservedRunningTime="2025-01-29 11:18:07.082141695 +0000 UTC m=+36.183532341" Jan 29 11:18:08.090016 kubelet[2781]: I0129 11:18:08.089991 2781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:18:08.350579 containerd[1540]: time="2025-01-29T11:18:08.350440586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:18:08.364802 containerd[1540]: time="2025-01-29T11:18:08.364710435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 11:18:08.397991 containerd[1540]: time="2025-01-29T11:18:08.397947423Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:18:08.404045 containerd[1540]: time="2025-01-29T11:18:08.404011098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:18:08.404403 containerd[1540]: time="2025-01-29T11:18:08.404280869Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.394457812s" Jan 29 11:18:08.404403 containerd[1540]: time="2025-01-29T11:18:08.404298627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 11:18:08.407441 containerd[1540]: time="2025-01-29T11:18:08.404978821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 11:18:08.415765 containerd[1540]: time="2025-01-29T11:18:08.415733866Z" level=info msg="CreateContainer within sandbox \"673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 11:18:08.420511 systemd-networkd[1455]: vxlan.calico: Gained IPv6LL Jan 29 11:18:08.428787 containerd[1540]: time="2025-01-29T11:18:08.428759203Z" level=info 
msg="CreateContainer within sandbox \"673cd46642be328affab225fd383352dd2e252560c693d1eb7eccd820428dde4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"20d67bbf954b525bb740037c1db0d5af9ca840ac3a0a6e74b46bc21618e1b9b3\"" Jan 29 11:18:08.429743 containerd[1540]: time="2025-01-29T11:18:08.429599100Z" level=info msg="StartContainer for \"20d67bbf954b525bb740037c1db0d5af9ca840ac3a0a6e74b46bc21618e1b9b3\"" Jan 29 11:18:08.477569 systemd[1]: Started cri-containerd-20d67bbf954b525bb740037c1db0d5af9ca840ac3a0a6e74b46bc21618e1b9b3.scope - libcontainer container 20d67bbf954b525bb740037c1db0d5af9ca840ac3a0a6e74b46bc21618e1b9b3. Jan 29 11:18:08.510466 containerd[1540]: time="2025-01-29T11:18:08.510437480Z" level=info msg="StartContainer for \"20d67bbf954b525bb740037c1db0d5af9ca840ac3a0a6e74b46bc21618e1b9b3\" returns successfully" Jan 29 11:18:09.120703 kubelet[2781]: I0129 11:18:09.120666 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-bd9c8f59d-nldcx" podStartSLOduration=18.753547159 podStartE2EDuration="26.120652203s" podCreationTimestamp="2025-01-29 11:17:43 +0000 UTC" firstStartedPulling="2025-01-29 11:18:01.037752978 +0000 UTC m=+30.139143619" lastFinishedPulling="2025-01-29 11:18:08.404858019 +0000 UTC m=+37.506248663" observedRunningTime="2025-01-29 11:18:09.120439794 +0000 UTC m=+38.221830453" watchObservedRunningTime="2025-01-29 11:18:09.120652203 +0000 UTC m=+38.222042852" Jan 29 11:18:10.112940 kubelet[2781]: I0129 11:18:10.112922 2781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:18:10.572136 containerd[1540]: time="2025-01-29T11:18:10.572109928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:18:10.572781 containerd[1540]: time="2025-01-29T11:18:10.572740282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 11:18:10.573080 containerd[1540]: time="2025-01-29T11:18:10.573064189Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:18:10.597122 containerd[1540]: time="2025-01-29T11:18:10.597093066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:18:10.598026 containerd[1540]: time="2025-01-29T11:18:10.597581118Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.192588478s" Jan 29 11:18:10.598026 containerd[1540]: time="2025-01-29T11:18:10.597596608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 11:18:10.602986 containerd[1540]: time="2025-01-29T11:18:10.602887303Z" level=info msg="CreateContainer within sandbox 
\"a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 11:18:10.610267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3678712847.mount: Deactivated successfully. Jan 29 11:18:10.617219 containerd[1540]: time="2025-01-29T11:18:10.617203287Z" level=info msg="CreateContainer within sandbox \"a7456390fe4e63fd325f7dcdaf71c000a19c831a4de230c73b091a06ec987b0e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"53eb8d3165f63b3fe66e60f3b0e6d909e9d1a426443fa67d7a2b15f87e6cdb84\"" Jan 29 11:18:10.617919 containerd[1540]: time="2025-01-29T11:18:10.617676999Z" level=info msg="StartContainer for \"53eb8d3165f63b3fe66e60f3b0e6d909e9d1a426443fa67d7a2b15f87e6cdb84\"" Jan 29 11:18:10.644452 systemd[1]: Started cri-containerd-53eb8d3165f63b3fe66e60f3b0e6d909e9d1a426443fa67d7a2b15f87e6cdb84.scope - libcontainer container 53eb8d3165f63b3fe66e60f3b0e6d909e9d1a426443fa67d7a2b15f87e6cdb84. Jan 29 11:18:10.660844 containerd[1540]: time="2025-01-29T11:18:10.660818708Z" level=info msg="StartContainer for \"53eb8d3165f63b3fe66e60f3b0e6d909e9d1a426443fa67d7a2b15f87e6cdb84\" returns successfully" Jan 29 11:18:11.296584 kubelet[2781]: I0129 11:18:11.293161 2781 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 11:18:11.304884 kubelet[2781]: I0129 11:18:11.304870 2781 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 11:18:14.527100 kubelet[2781]: I0129 11:18:14.526901 2781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:18:14.564088 kubelet[2781]: I0129 11:18:14.564039 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wnr5k" podStartSLOduration=21.743817928 podStartE2EDuration="31.564026483s" podCreationTimestamp="2025-01-29 11:17:43 +0000 UTC" firstStartedPulling="2025-01-29 11:18:00.781029988 +0000 UTC m=+29.882420631" lastFinishedPulling="2025-01-29 11:18:10.601238543 +0000 UTC m=+39.702629186" observedRunningTime="2025-01-29 11:18:11.13369111 +0000 UTC m=+40.235081764" watchObservedRunningTime="2025-01-29 11:18:14.564026483 +0000 UTC m=+43.665417133" Jan 29 11:18:18.775417 kubelet[2781]: I0129 11:18:18.775013 2781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:18:19.988985 kubelet[2781]: I0129 11:18:19.988949 2781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:18:31.345584 containerd[1540]: time="2025-01-29T11:18:31.345378200Z" level=info msg="StopPodSandbox for \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\"" Jan 29 11:18:31.345584 containerd[1540]: time="2025-01-29T11:18:31.345493410Z" level=info msg="TearDown network for sandbox \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\" successfully" Jan 29 11:18:31.345584 containerd[1540]: time="2025-01-29T11:18:31.345503274Z" level=info msg="StopPodSandbox for \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\" returns successfully" Jan 29 11:18:31.381407 containerd[1540]: time="2025-01-29T11:18:31.381320668Z" level=info msg="RemovePodSandbox for \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\"" Jan 29 11:18:31.397040 containerd[1540]: 
time="2025-01-29T11:18:31.396998950Z" level=info msg="Forcibly stopping sandbox \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\"" Jan 29 11:18:31.400015 containerd[1540]: time="2025-01-29T11:18:31.397099188Z" level=info msg="TearDown network for sandbox \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\" successfully" Jan 29 11:18:31.420261 containerd[1540]: time="2025-01-29T11:18:31.420225343Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:18:31.437045 containerd[1540]: time="2025-01-29T11:18:31.437001503Z" level=info msg="RemovePodSandbox \"2f4a59700a7437677b25a8b32a15088b17d16fda3963a67cd5a23b1ad2cf6bfa\" returns successfully" Jan 29 11:18:31.442049 containerd[1540]: time="2025-01-29T11:18:31.442015510Z" level=info msg="StopPodSandbox for \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\"" Jan 29 11:18:31.442148 containerd[1540]: time="2025-01-29T11:18:31.442096504Z" level=info msg="TearDown network for sandbox \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\" successfully" Jan 29 11:18:31.442148 containerd[1540]: time="2025-01-29T11:18:31.442106210Z" level=info msg="StopPodSandbox for \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\" returns successfully" Jan 29 11:18:31.442361 containerd[1540]: time="2025-01-29T11:18:31.442342684Z" level=info msg="RemovePodSandbox for \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\"" Jan 29 11:18:31.442361 containerd[1540]: time="2025-01-29T11:18:31.442358629Z" level=info msg="Forcibly stopping sandbox \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\"" Jan 29 11:18:31.442443 containerd[1540]: time="2025-01-29T11:18:31.442417826Z" level=info msg="TearDown network for sandbox \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\" successfully" Jan 29 11:18:31.443804 containerd[1540]: time="2025-01-29T11:18:31.443777589Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:18:31.443856 containerd[1540]: time="2025-01-29T11:18:31.443823340Z" level=info msg="RemovePodSandbox \"965f0e21cd0c998502fe33972ceb0aa8588cdd62fe9c0d1228276afb07663b3a\" returns successfully" Jan 29 11:18:31.444031 containerd[1540]: time="2025-01-29T11:18:31.444014926Z" level=info msg="StopPodSandbox for \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\"" Jan 29 11:18:31.444273 containerd[1540]: time="2025-01-29T11:18:31.444075395Z" level=info msg="TearDown network for sandbox \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\" successfully" Jan 29 11:18:31.444273 containerd[1540]: time="2025-01-29T11:18:31.444085130Z" level=info msg="StopPodSandbox for \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\" returns successfully" Jan 29 11:18:31.444273 containerd[1540]: time="2025-01-29T11:18:31.444256373Z" level=info msg="RemovePodSandbox for \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\"" Jan 29 11:18:31.444607 containerd[1540]: time="2025-01-29T11:18:31.444272195Z" level=info msg="Forcibly stopping sandbox \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\"" Jan 29 11:18:31.444607 containerd[1540]: time="2025-01-29T11:18:31.444309143Z" level=info msg="TearDown network for sandbox \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\" successfully" Jan 29 11:18:31.445635 containerd[1540]: time="2025-01-29T11:18:31.445617094Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:18:31.445669 containerd[1540]: time="2025-01-29T11:18:31.445645223Z" level=info msg="RemovePodSandbox \"afe373a75fe2a4b1b152fa178d97c445f327537cf2f6cf3c4b15fab4554474b4\" returns successfully" Jan 29 11:18:31.445830 containerd[1540]: time="2025-01-29T11:18:31.445814328Z" level=info msg="StopPodSandbox for \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\"" Jan 29 11:18:31.445878 containerd[1540]: time="2025-01-29T11:18:31.445863948Z" level=info msg="TearDown network for sandbox \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\" successfully" Jan 29 11:18:31.445902 containerd[1540]: time="2025-01-29T11:18:31.445878314Z" level=info msg="StopPodSandbox for \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\" returns successfully" Jan 29 11:18:31.446116 containerd[1540]: time="2025-01-29T11:18:31.446102624Z" level=info msg="RemovePodSandbox for \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\"" Jan 29 11:18:31.446479 containerd[1540]: time="2025-01-29T11:18:31.446117489Z" level=info msg="Forcibly stopping sandbox \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\"" Jan 29 11:18:31.446479 containerd[1540]: time="2025-01-29T11:18:31.446159963Z" level=info msg="TearDown network for sandbox \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\" successfully" Jan 29 11:18:31.447355 containerd[1540]: time="2025-01-29T11:18:31.447334545Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:18:31.447410 containerd[1540]: time="2025-01-29T11:18:31.447387093Z" level=info msg="RemovePodSandbox \"06c2c0079aaa7b0c1b90ee82c25014d20e7a17675bda51951443971d75dc2991\" returns successfully" Jan 29 11:18:31.447638 containerd[1540]: time="2025-01-29T11:18:31.447560943Z" level=info msg="StopPodSandbox for \"11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e\"" Jan 29 11:18:31.449059 containerd[1540]: time="2025-01-29T11:18:31.448980803Z" level=info msg="TearDown network for sandbox \"11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e\" successfully" Jan 29 11:18:31.449059 containerd[1540]: time="2025-01-29T11:18:31.448992073Z" level=info msg="StopPodSandbox for \"11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e\" returns successfully" Jan 29 11:18:31.450064 containerd[1540]: time="2025-01-29T11:18:31.449158739Z" level=info msg="RemovePodSandbox for \"11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e\"" Jan 29 11:18:31.450064 containerd[1540]: time="2025-01-29T11:18:31.449170701Z" level=info msg="Forcibly stopping sandbox \"11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e\"" Jan 29 11:18:31.450064 containerd[1540]: time="2025-01-29T11:18:31.449207519Z" level=info msg="TearDown network for sandbox \"11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e\" successfully" Jan 29 11:18:31.450512 containerd[1540]: time="2025-01-29T11:18:31.450497696Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:18:31.450572 containerd[1540]: time="2025-01-29T11:18:31.450564197Z" level=info msg="RemovePodSandbox \"11fd57254fd350cdf6ceae4fa18ccb31024c084ac553db045d4ee4303cc5857e\" returns successfully" Jan 29 11:18:31.450791 containerd[1540]: time="2025-01-29T11:18:31.450775351Z" level=info msg="StopPodSandbox for \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\"" Jan 29 11:18:31.450842 containerd[1540]: time="2025-01-29T11:18:31.450828326Z" level=info msg="TearDown network for sandbox \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\" successfully" Jan 29 11:18:31.450842 containerd[1540]: time="2025-01-29T11:18:31.450839439Z" level=info msg="StopPodSandbox for \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\" returns successfully" Jan 29 11:18:31.451568 containerd[1540]: time="2025-01-29T11:18:31.450985068Z" level=info msg="RemovePodSandbox for \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\"" Jan 29 11:18:31.451568 containerd[1540]: time="2025-01-29T11:18:31.450998350Z" level=info msg="Forcibly stopping sandbox \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\"" Jan 29 11:18:31.451568 containerd[1540]: time="2025-01-29T11:18:31.451032901Z" level=info msg="TearDown network for sandbox \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\" successfully" Jan 29 11:18:31.465278 containerd[1540]: time="2025-01-29T11:18:31.465196110Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:18:31.465278 containerd[1540]: time="2025-01-29T11:18:31.465239777Z" level=info msg="RemovePodSandbox \"6355f039de9944dc68083e3a8b5e984d252c2bc0493f76ac2e44d33241168d1e\" returns successfully" Jan 29 11:18:31.465846 containerd[1540]: time="2025-01-29T11:18:31.465715410Z" level=info msg="StopPodSandbox for \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\"" Jan 29 11:18:31.465846 containerd[1540]: time="2025-01-29T11:18:31.465786480Z" level=info msg="TearDown network for sandbox \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\" successfully" Jan 29 11:18:31.465846 containerd[1540]: time="2025-01-29T11:18:31.465794379Z" level=info msg="StopPodSandbox for \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\" returns successfully" Jan 29 11:18:31.466234 containerd[1540]: time="2025-01-29T11:18:31.466027508Z" level=info msg="RemovePodSandbox for \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\"" Jan 29 11:18:31.466234 containerd[1540]: time="2025-01-29T11:18:31.466091763Z" level=info msg="Forcibly stopping sandbox \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\"" Jan 29 11:18:31.466234 containerd[1540]: time="2025-01-29T11:18:31.466179237Z" level=info msg="TearDown network for sandbox \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\" successfully" Jan 29 11:18:31.467761 containerd[1540]: time="2025-01-29T11:18:31.467745565Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:18:31.467949 containerd[1540]: time="2025-01-29T11:18:31.467857137Z" level=info msg="RemovePodSandbox \"23007204d967495cee1d23fcee3f19723efcffe924a0f2bc5376ef57c876772e\" returns successfully" Jan 29 11:18:31.468142 containerd[1540]: time="2025-01-29T11:18:31.468062391Z" level=info msg="StopPodSandbox for \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\"" Jan 29 11:18:31.468142 containerd[1540]: time="2025-01-29T11:18:31.468112957Z" level=info msg="TearDown network for sandbox \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\" successfully" Jan 29 11:18:31.468142 containerd[1540]: time="2025-01-29T11:18:31.468120295Z" level=info msg="StopPodSandbox for \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\" returns successfully" Jan 29 11:18:31.468844 containerd[1540]: time="2025-01-29T11:18:31.468487311Z" level=info msg="RemovePodSandbox for \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\"" Jan 29 11:18:31.468844 containerd[1540]: time="2025-01-29T11:18:31.468502439Z" level=info msg="Forcibly stopping sandbox \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\"" Jan 29 11:18:31.468844 containerd[1540]: time="2025-01-29T11:18:31.468553041Z" level=info msg="TearDown network for sandbox \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\" successfully" Jan 29 11:18:31.472852 containerd[1540]: time="2025-01-29T11:18:31.472821452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:18:31.472918 containerd[1540]: time="2025-01-29T11:18:31.472869976Z" level=info msg="RemovePodSandbox \"4e9baf12bd4a7720a7c345a2beb0ed0da0a10437addd844ba1511d811e6143c2\" returns successfully" Jan 29 11:18:31.473164 containerd[1540]: time="2025-01-29T11:18:31.473149374Z" level=info msg="StopPodSandbox for \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\"" Jan 29 11:18:31.473228 containerd[1540]: time="2025-01-29T11:18:31.473211453Z" level=info msg="TearDown network for sandbox \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\" successfully" Jan 29 11:18:31.473253 containerd[1540]: time="2025-01-29T11:18:31.473225796Z" level=info msg="StopPodSandbox for \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\" returns successfully" Jan 29 11:18:31.473812 containerd[1540]: time="2025-01-29T11:18:31.473409488Z" level=info msg="RemovePodSandbox for \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\"" Jan 29 11:18:31.473812 containerd[1540]: time="2025-01-29T11:18:31.473423665Z" level=info msg="Forcibly stopping sandbox \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\"" Jan 29 11:18:31.473812 containerd[1540]: time="2025-01-29T11:18:31.473463263Z" level=info msg="TearDown network for sandbox \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\" successfully" Jan 29 11:18:31.483149 containerd[1540]: time="2025-01-29T11:18:31.483046906Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:18:31.483149 containerd[1540]: time="2025-01-29T11:18:31.483110618Z" level=info msg="RemovePodSandbox \"4c45f66dcec342ba40b80e7c1273944e3ce617f60f85d06b98136ce5ccb070ea\" returns successfully" Jan 29 11:18:31.483763 containerd[1540]: time="2025-01-29T11:18:31.483598626Z" level=info msg="StopPodSandbox for \"2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7\"" Jan 29 11:18:31.483763 containerd[1540]: time="2025-01-29T11:18:31.483653283Z" level=info msg="TearDown network for sandbox \"2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7\" successfully" Jan 29 11:18:31.483763 containerd[1540]: time="2025-01-29T11:18:31.483660303Z" level=info msg="StopPodSandbox for \"2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7\" returns successfully" Jan 29 11:18:31.484806 containerd[1540]: time="2025-01-29T11:18:31.483919085Z" level=info msg="RemovePodSandbox for \"2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7\"" Jan 29 11:18:31.484806 containerd[1540]: time="2025-01-29T11:18:31.483934863Z" level=info msg="Forcibly stopping sandbox \"2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7\"" Jan 29 11:18:31.484806 containerd[1540]: time="2025-01-29T11:18:31.483979049Z" level=info msg="TearDown network for sandbox \"2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7\" successfully" Jan 29 11:18:31.485616 containerd[1540]: time="2025-01-29T11:18:31.485603274Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:18:31.485708 containerd[1540]: time="2025-01-29T11:18:31.485694636Z" level=info msg="RemovePodSandbox \"2812cb2a492fa5a430e9b05ff132496b2fedfc05205f05db5317733b5bfa19a7\" returns successfully" Jan 29 11:18:31.485986 containerd[1540]: time="2025-01-29T11:18:31.485973510Z" level=info msg="StopPodSandbox for \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\"" Jan 29 11:18:31.486162 containerd[1540]: time="2025-01-29T11:18:31.486151679Z" level=info msg="TearDown network for sandbox \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\" successfully" Jan 29 11:18:31.486207 containerd[1540]: time="2025-01-29T11:18:31.486199657Z" level=info msg="StopPodSandbox for \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\" returns successfully" Jan 29 11:18:31.486418 containerd[1540]: time="2025-01-29T11:18:31.486406944Z" level=info msg="RemovePodSandbox for \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\"" Jan 29 11:18:31.486524 containerd[1540]: time="2025-01-29T11:18:31.486515666Z" level=info msg="Forcibly stopping sandbox \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\"" Jan 29 11:18:31.486628 containerd[1540]: time="2025-01-29T11:18:31.486605269Z" level=info msg="TearDown network for sandbox \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\" successfully" Jan 29 11:18:31.491625 containerd[1540]: time="2025-01-29T11:18:31.491564540Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:18:31.491625 containerd[1540]: time="2025-01-29T11:18:31.491605203Z" level=info msg="RemovePodSandbox \"e8e80bf7e3d0dddb74351ce9d80ce6d0e4e17b790a9d4eeb36e6b229131f792d\" returns successfully" Jan 29 11:18:31.491924 containerd[1540]: time="2025-01-29T11:18:31.491828951Z" level=info msg="StopPodSandbox for \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\"" Jan 29 11:18:31.491924 containerd[1540]: time="2025-01-29T11:18:31.491884196Z" level=info msg="TearDown network for sandbox \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\" successfully" Jan 29 11:18:31.491924 containerd[1540]: time="2025-01-29T11:18:31.491890842Z" level=info msg="StopPodSandbox for \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\" returns successfully" Jan 29 11:18:31.492137 containerd[1540]: time="2025-01-29T11:18:31.492118255Z" level=info msg="RemovePodSandbox for \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\"" Jan 29 11:18:31.492173 containerd[1540]: time="2025-01-29T11:18:31.492137854Z" level=info msg="Forcibly stopping sandbox \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\"" Jan 29 11:18:31.492197 containerd[1540]: time="2025-01-29T11:18:31.492175270Z" level=info msg="TearDown network for sandbox \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\" successfully" Jan 29 11:18:31.493455 containerd[1540]: time="2025-01-29T11:18:31.493436941Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:18:31.493501 containerd[1540]: time="2025-01-29T11:18:31.493466599Z" level=info msg="RemovePodSandbox \"623e36285bbc054c04bc3646932f77256d498dc809af594fa14fc84d90ab4c97\" returns successfully" Jan 29 11:18:31.493747 containerd[1540]: time="2025-01-29T11:18:31.493651357Z" level=info msg="StopPodSandbox for \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\"" Jan 29 11:18:31.493747 containerd[1540]: time="2025-01-29T11:18:31.493693285Z" level=info msg="TearDown network for sandbox \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\" successfully" Jan 29 11:18:31.493747 containerd[1540]: time="2025-01-29T11:18:31.493699610Z" level=info msg="StopPodSandbox for \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\" returns successfully" Jan 29 11:18:31.493921 containerd[1540]: time="2025-01-29T11:18:31.493860727Z" level=info msg="RemovePodSandbox for \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\"" Jan 29 11:18:31.493921 containerd[1540]: time="2025-01-29T11:18:31.493872810Z" level=info msg="Forcibly stopping sandbox \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\"" Jan 29 11:18:31.494336 containerd[1540]: time="2025-01-29T11:18:31.494020114Z" level=info msg="TearDown network for sandbox \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\" successfully" Jan 29 11:18:31.501266 containerd[1540]: time="2025-01-29T11:18:31.501247334Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:18:31.501350 containerd[1540]: time="2025-01-29T11:18:31.501341259Z" level=info msg="RemovePodSandbox \"3400f88c6211011833ab2611d8fc78659e89d2a3afff7f350188be4e22083966\" returns successfully" Jan 29 11:18:31.501664 containerd[1540]: time="2025-01-29T11:18:31.501653110Z" level=info msg="StopPodSandbox for \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\"" Jan 29 11:18:31.501783 containerd[1540]: time="2025-01-29T11:18:31.501775622Z" level=info msg="TearDown network for sandbox \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\" successfully" Jan 29 11:18:31.501838 containerd[1540]: time="2025-01-29T11:18:31.501829984Z" level=info msg="StopPodSandbox for \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\" returns successfully" Jan 29 11:18:31.502031 containerd[1540]: time="2025-01-29T11:18:31.502020262Z" level=info msg="RemovePodSandbox for \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\"" Jan 29 11:18:31.507439 containerd[1540]: time="2025-01-29T11:18:31.502211511Z" level=info msg="Forcibly stopping sandbox \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\"" Jan 29 11:18:31.507439 containerd[1540]: time="2025-01-29T11:18:31.502255254Z" level=info msg="TearDown network for sandbox \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\" successfully" Jan 29 11:18:31.511991 containerd[1540]: time="2025-01-29T11:18:31.511924763Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:18:31.511991 containerd[1540]: time="2025-01-29T11:18:31.511966216Z" level=info msg="RemovePodSandbox \"0896878e2f7c90bddadcc3648f0bf177660243f948de176cff22c4a33fd20a5a\" returns successfully" Jan 29 11:18:31.512242 containerd[1540]: time="2025-01-29T11:18:31.512228394Z" level=info msg="StopPodSandbox for \"374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc\"" Jan 29 11:18:31.512330 containerd[1540]: time="2025-01-29T11:18:31.512317876Z" level=info msg="TearDown network for sandbox \"374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc\" successfully" Jan 29 11:18:31.512360 containerd[1540]: time="2025-01-29T11:18:31.512328168Z" level=info msg="StopPodSandbox for \"374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc\" returns successfully" Jan 29 11:18:31.512584 containerd[1540]: time="2025-01-29T11:18:31.512568695Z" level=info msg="RemovePodSandbox for \"374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc\"" Jan 29 11:18:31.512584 containerd[1540]: time="2025-01-29T11:18:31.512582577Z" level=info msg="Forcibly stopping sandbox \"374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc\"" Jan 29 11:18:31.512683 containerd[1540]: time="2025-01-29T11:18:31.512655121Z" level=info msg="TearDown network for sandbox \"374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc\" successfully" Jan 29 11:18:31.516144 containerd[1540]: time="2025-01-29T11:18:31.516123102Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:18:31.522659 containerd[1540]: time="2025-01-29T11:18:31.516151422Z" level=info msg="RemovePodSandbox \"374ba73e142b2e0590751ebfa9ecb18acad37ef634e9f4b1ba9a38deb4e3a9dc\" returns successfully" Jan 29 11:18:31.522659 containerd[1540]: time="2025-01-29T11:18:31.516393706Z" level=info msg="StopPodSandbox for \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\"" Jan 29 11:18:31.522659 containerd[1540]: time="2025-01-29T11:18:31.516435571Z" level=info msg="TearDown network for sandbox \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\" successfully" Jan 29 11:18:31.522659 containerd[1540]: time="2025-01-29T11:18:31.516441685Z" level=info msg="StopPodSandbox for \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\" returns successfully" Jan 29 11:18:31.522659 containerd[1540]: time="2025-01-29T11:18:31.516591818Z" level=info msg="RemovePodSandbox for \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\"" Jan 29 11:18:31.522659 containerd[1540]: time="2025-01-29T11:18:31.516606076Z" level=info msg="Forcibly stopping sandbox \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\"" Jan 29 11:18:31.522659 containerd[1540]: time="2025-01-29T11:18:31.516638767Z" level=info msg="TearDown network for sandbox \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\" successfully" Jan 29 11:18:31.527603 containerd[1540]: time="2025-01-29T11:18:31.527571869Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:18:31.527664 containerd[1540]: time="2025-01-29T11:18:31.527616446Z" level=info msg="RemovePodSandbox \"5094348e9dfe0fbb3475ca8e30b0dcbcf250044f8a46205cbc8abcea1336af20\" returns successfully" Jan 29 11:18:31.531027 containerd[1540]: time="2025-01-29T11:18:31.530921620Z" level=info msg="StopPodSandbox for \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\"" Jan 29 11:18:31.531027 containerd[1540]: time="2025-01-29T11:18:31.530983187Z" level=info msg="TearDown network for sandbox \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\" successfully" Jan 29 11:18:31.531027 containerd[1540]: time="2025-01-29T11:18:31.531009768Z" level=info msg="StopPodSandbox for \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\" returns successfully" Jan 29 11:18:31.531278 containerd[1540]: time="2025-01-29T11:18:31.531262983Z" level=info msg="RemovePodSandbox for \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\"" Jan 29 11:18:31.531314 containerd[1540]: time="2025-01-29T11:18:31.531278254Z" level=info msg="Forcibly stopping sandbox \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\"" Jan 29 11:18:31.531417 containerd[1540]: time="2025-01-29T11:18:31.531337015Z" level=info msg="TearDown network for sandbox \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\" successfully" Jan 29 11:18:31.539765 containerd[1540]: time="2025-01-29T11:18:31.539746972Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:18:31.539809 containerd[1540]: time="2025-01-29T11:18:31.539780994Z" level=info msg="RemovePodSandbox \"c793c1c171159802ca61909f642594f684634e7885bae0aff26a8a8fd1ff1a5a\" returns successfully" Jan 29 11:18:31.540132 containerd[1540]: time="2025-01-29T11:18:31.540043567Z" level=info msg="StopPodSandbox for \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\"" Jan 29 11:18:31.540132 containerd[1540]: time="2025-01-29T11:18:31.540092505Z" level=info msg="TearDown network for sandbox \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\" successfully" Jan 29 11:18:31.540132 containerd[1540]: time="2025-01-29T11:18:31.540099458Z" level=info msg="StopPodSandbox for \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\" returns successfully" Jan 29 11:18:31.544166 containerd[1540]: time="2025-01-29T11:18:31.540284356Z" level=info msg="RemovePodSandbox for \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\"" Jan 29 11:18:31.544166 containerd[1540]: time="2025-01-29T11:18:31.540295422Z" level=info msg="Forcibly stopping sandbox \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\"" Jan 29 11:18:31.544166 containerd[1540]: time="2025-01-29T11:18:31.540387927Z" level=info msg="TearDown network for sandbox \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\" successfully" Jan 29 11:18:31.549504 containerd[1540]: time="2025-01-29T11:18:31.544951597Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:18:31.549504 containerd[1540]: time="2025-01-29T11:18:31.544990923Z" level=info msg="RemovePodSandbox \"46da7b273569014509eee1c5dd3ecf0e5a52dbc395649a8495ea0ee0e6141444\" returns successfully" Jan 29 11:18:31.549504 containerd[1540]: time="2025-01-29T11:18:31.545312944Z" level=info msg="StopPodSandbox for \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\"" Jan 29 11:18:31.549504 containerd[1540]: time="2025-01-29T11:18:31.545376904Z" level=info msg="TearDown network for sandbox \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\" successfully" Jan 29 11:18:31.549504 containerd[1540]: time="2025-01-29T11:18:31.545386334Z" level=info msg="StopPodSandbox for \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\" returns successfully" Jan 29 11:18:31.549504 containerd[1540]: time="2025-01-29T11:18:31.545562504Z" level=info msg="RemovePodSandbox for \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\"" Jan 29 11:18:31.549504 containerd[1540]: time="2025-01-29T11:18:31.545604903Z" level=info msg="Forcibly stopping sandbox \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\"" Jan 29 11:18:31.549504 containerd[1540]: time="2025-01-29T11:18:31.545683670Z" level=info msg="TearDown network for sandbox \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\" successfully" Jan 29 11:18:31.549504 containerd[1540]: time="2025-01-29T11:18:31.548165882Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:18:31.549504 containerd[1540]: time="2025-01-29T11:18:31.548196209Z" level=info msg="RemovePodSandbox \"16821b5a461346d4c370c2363d3281501b6a6e309f883f488c79352736d78222\" returns successfully" Jan 29 11:18:31.549504 containerd[1540]: time="2025-01-29T11:18:31.548455297Z" level=info msg="StopPodSandbox for \"399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e\"" Jan 29 11:18:31.549504 containerd[1540]: time="2025-01-29T11:18:31.548517497Z" level=info msg="TearDown network for sandbox \"399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e\" successfully" Jan 29 11:18:31.549504 containerd[1540]: time="2025-01-29T11:18:31.548526734Z" level=info msg="StopPodSandbox for \"399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e\" returns successfully" Jan 29 11:18:31.549504 containerd[1540]: time="2025-01-29T11:18:31.548694043Z" level=info msg="RemovePodSandbox for \"399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e\"" Jan 29 11:18:31.549504 containerd[1540]: time="2025-01-29T11:18:31.548710723Z" level=info msg="Forcibly stopping sandbox \"399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e\"" Jan 29 11:18:31.549504 containerd[1540]: time="2025-01-29T11:18:31.548753285Z" level=info msg="TearDown network for sandbox \"399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e\" successfully" Jan 29 11:18:31.550465 containerd[1540]: time="2025-01-29T11:18:31.550442611Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:18:31.550569 containerd[1540]: time="2025-01-29T11:18:31.550478254Z" level=info msg="RemovePodSandbox \"399de994fe8e279bd5d935f0d3b71a0ca4cbc60401fd7483c90bc80c2579c64e\" returns successfully" Jan 29 11:18:31.550684 containerd[1540]: time="2025-01-29T11:18:31.550664289Z" level=info msg="StopPodSandbox for \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\"" Jan 29 11:18:31.550754 containerd[1540]: time="2025-01-29T11:18:31.550722879Z" level=info msg="TearDown network for sandbox \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\" successfully" Jan 29 11:18:31.550782 containerd[1540]: time="2025-01-29T11:18:31.550753556Z" level=info msg="StopPodSandbox for \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\" returns successfully" Jan 29 11:18:31.550966 containerd[1540]: time="2025-01-29T11:18:31.550950780Z" level=info msg="RemovePodSandbox for \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\"" Jan 29 11:18:31.551611 containerd[1540]: time="2025-01-29T11:18:31.551021287Z" level=info msg="Forcibly stopping sandbox \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\"" Jan 29 11:18:31.551611 containerd[1540]: time="2025-01-29T11:18:31.551070146Z" level=info msg="TearDown network for sandbox \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\" successfully" Jan 29 11:18:31.552609 containerd[1540]: time="2025-01-29T11:18:31.552588143Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:18:31.552693 containerd[1540]: time="2025-01-29T11:18:31.552618201Z" level=info msg="RemovePodSandbox \"fe8823acf14dafe1127b302a3b7f03db3aacf62b65e1c7d73654ab4f6e3c73ac\" returns successfully" Jan 29 11:18:31.553060 containerd[1540]: time="2025-01-29T11:18:31.552887716Z" level=info msg="StopPodSandbox for \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\"" Jan 29 11:18:31.553060 containerd[1540]: time="2025-01-29T11:18:31.552945891Z" level=info msg="TearDown network for sandbox \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\" successfully" Jan 29 11:18:31.553060 containerd[1540]: time="2025-01-29T11:18:31.552954508Z" level=info msg="StopPodSandbox for \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\" returns successfully" Jan 29 11:18:31.553430 containerd[1540]: time="2025-01-29T11:18:31.553235439Z" level=info msg="RemovePodSandbox for \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\"" Jan 29 11:18:31.553430 containerd[1540]: time="2025-01-29T11:18:31.553314155Z" level=info msg="Forcibly stopping sandbox \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\"" Jan 29 11:18:31.553430 containerd[1540]: time="2025-01-29T11:18:31.553386636Z" level=info msg="TearDown network for sandbox \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\" successfully" Jan 29 11:18:31.556676 containerd[1540]: time="2025-01-29T11:18:31.556595988Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:18:31.556676 containerd[1540]: time="2025-01-29T11:18:31.556626677Z" level=info msg="RemovePodSandbox \"86e28ef1ae145ad1a45ac419c458722cc93d669f5a926e1e4b0d549e71fac0d7\" returns successfully" Jan 29 11:18:31.556808 containerd[1540]: time="2025-01-29T11:18:31.556789983Z" level=info msg="StopPodSandbox for \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\"" Jan 29 11:18:31.556857 containerd[1540]: time="2025-01-29T11:18:31.556841531Z" level=info msg="TearDown network for sandbox \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\" successfully" Jan 29 11:18:31.556857 containerd[1540]: time="2025-01-29T11:18:31.556852305Z" level=info msg="StopPodSandbox for \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\" returns successfully" Jan 29 11:18:31.557037 containerd[1540]: time="2025-01-29T11:18:31.557021418Z" level=info msg="RemovePodSandbox for \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\"" Jan 29 11:18:31.557071 containerd[1540]: time="2025-01-29T11:18:31.557058485Z" level=info msg="Forcibly stopping sandbox \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\"" Jan 29 11:18:31.557153 containerd[1540]: time="2025-01-29T11:18:31.557097144Z" level=info msg="TearDown network for sandbox \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\" successfully" Jan 29 11:18:31.559476 containerd[1540]: time="2025-01-29T11:18:31.559456058Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:18:31.559522 containerd[1540]: time="2025-01-29T11:18:31.559488562Z" level=info msg="RemovePodSandbox \"34d5f81b6bf32fd103a059f7e8e471b0f64115105110ab425bd26ae6525d30f5\" returns successfully" Jan 29 11:18:31.559784 containerd[1540]: time="2025-01-29T11:18:31.559688263Z" level=info msg="StopPodSandbox for \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\"" Jan 29 11:18:31.559784 containerd[1540]: time="2025-01-29T11:18:31.559735081Z" level=info msg="TearDown network for sandbox \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\" successfully" Jan 29 11:18:31.559784 containerd[1540]: time="2025-01-29T11:18:31.559741788Z" level=info msg="StopPodSandbox for \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\" returns successfully" Jan 29 11:18:31.560043 containerd[1540]: time="2025-01-29T11:18:31.559956221Z" level=info msg="RemovePodSandbox for \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\"" Jan 29 11:18:31.560043 containerd[1540]: time="2025-01-29T11:18:31.560015775Z" level=info msg="Forcibly stopping sandbox \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\"" Jan 29 11:18:31.560113 containerd[1540]: time="2025-01-29T11:18:31.560058311Z" level=info msg="TearDown network for sandbox \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\" successfully" Jan 29 11:18:31.561637 containerd[1540]: time="2025-01-29T11:18:31.561617550Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:18:31.561687 containerd[1540]: time="2025-01-29T11:18:31.561648838Z" level=info msg="RemovePodSandbox \"2f143d0f0cd7781b10ac01bdc12ff00154b59685051326862b2b38dfabe185ba\" returns successfully" Jan 29 11:18:31.562000 containerd[1540]: time="2025-01-29T11:18:31.561912346Z" level=info msg="StopPodSandbox for \"cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2\"" Jan 29 11:18:31.562000 containerd[1540]: time="2025-01-29T11:18:31.561960150Z" level=info msg="TearDown network for sandbox \"cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2\" successfully" Jan 29 11:18:31.562000 containerd[1540]: time="2025-01-29T11:18:31.561967604Z" level=info msg="StopPodSandbox for \"cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2\" returns successfully" Jan 29 11:18:31.563094 containerd[1540]: time="2025-01-29T11:18:31.562197796Z" level=info msg="RemovePodSandbox for \"cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2\"" Jan 29 11:18:31.563094 containerd[1540]: time="2025-01-29T11:18:31.562212170Z" level=info msg="Forcibly stopping sandbox \"cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2\"" Jan 29 11:18:31.563094 containerd[1540]: time="2025-01-29T11:18:31.562247890Z" level=info msg="TearDown network for sandbox \"cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2\" successfully" Jan 29 11:18:31.565422 containerd[1540]: time="2025-01-29T11:18:31.565408941Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:18:31.565492 containerd[1540]: time="2025-01-29T11:18:31.565483610Z" level=info msg="RemovePodSandbox \"cc7624e953451f932ad24aac291d9a20dd4aaf5933924fd7644e26be246394f2\" returns successfully" Jan 29 11:18:31.565751 containerd[1540]: time="2025-01-29T11:18:31.565741498Z" level=info msg="StopPodSandbox for \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\"" Jan 29 11:18:31.565837 containerd[1540]: time="2025-01-29T11:18:31.565828573Z" level=info msg="TearDown network for sandbox \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\" successfully" Jan 29 11:18:31.565883 containerd[1540]: time="2025-01-29T11:18:31.565876168Z" level=info msg="StopPodSandbox for \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\" returns successfully" Jan 29 11:18:31.566078 containerd[1540]: time="2025-01-29T11:18:31.566070075Z" level=info msg="RemovePodSandbox for \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\"" Jan 29 11:18:31.566145 containerd[1540]: time="2025-01-29T11:18:31.566137196Z" level=info msg="Forcibly stopping sandbox \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\"" Jan 29 11:18:31.566216 containerd[1540]: time="2025-01-29T11:18:31.566196950Z" level=info msg="TearDown network for sandbox \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\" successfully" Jan 29 11:18:31.568312 containerd[1540]: time="2025-01-29T11:18:31.568297521Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:18:31.568379 containerd[1540]: time="2025-01-29T11:18:31.568362392Z" level=info msg="RemovePodSandbox \"47bf1d5f1716fb7c348c76e99ea30c8ab02ab1badf7a9b5eb161467c74089f8f\" returns successfully" Jan 29 11:18:31.568584 containerd[1540]: time="2025-01-29T11:18:31.568565807Z" level=info msg="StopPodSandbox for \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\"" Jan 29 11:18:31.568637 containerd[1540]: time="2025-01-29T11:18:31.568623527Z" level=info msg="TearDown network for sandbox \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\" successfully" Jan 29 11:18:31.568668 containerd[1540]: time="2025-01-29T11:18:31.568635276Z" level=info msg="StopPodSandbox for \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\" returns successfully" Jan 29 11:18:31.570302 containerd[1540]: time="2025-01-29T11:18:31.569511796Z" level=info msg="RemovePodSandbox for \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\"" Jan 29 11:18:31.570302 containerd[1540]: time="2025-01-29T11:18:31.569527672Z" level=info msg="Forcibly stopping sandbox \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\"" Jan 29 11:18:31.570302 containerd[1540]: time="2025-01-29T11:18:31.569568222Z" level=info msg="TearDown network for sandbox \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\" successfully" Jan 29 11:18:31.571498 containerd[1540]: time="2025-01-29T11:18:31.571482292Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:18:31.571569 containerd[1540]: time="2025-01-29T11:18:31.571559540Z" level=info msg="RemovePodSandbox \"ee54511ace36179b667adeb5410aa0c487826a5fb27e610c6a3fce075f6d248d\" returns successfully" Jan 29 11:18:31.571799 containerd[1540]: time="2025-01-29T11:18:31.571785888Z" level=info msg="StopPodSandbox for \"20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72\"" Jan 29 11:18:31.571891 containerd[1540]: time="2025-01-29T11:18:31.571878495Z" level=info msg="TearDown network for sandbox \"20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72\" successfully" Jan 29 11:18:31.571936 containerd[1540]: time="2025-01-29T11:18:31.571929013Z" level=info msg="StopPodSandbox for \"20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72\" returns successfully" Jan 29 11:18:31.572145 containerd[1540]: time="2025-01-29T11:18:31.572133578Z" level=info msg="RemovePodSandbox for \"20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72\"" Jan 29 11:18:31.572192 containerd[1540]: time="2025-01-29T11:18:31.572183568Z" level=info msg="Forcibly stopping sandbox \"20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72\"" Jan 29 11:18:31.573413 containerd[1540]: time="2025-01-29T11:18:31.572409127Z" level=info msg="TearDown network for sandbox \"20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72\" successfully" Jan 29 11:18:31.576894 containerd[1540]: time="2025-01-29T11:18:31.574046264Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:18:31.576894 containerd[1540]: time="2025-01-29T11:18:31.574067868Z" level=info msg="RemovePodSandbox \"20c1d0d43e7d7d6a06775382b021ad3cfdf93ec7e64efa9cb8914a0665eb8c72\" returns successfully" Jan 29 11:18:48.985331 systemd[1]: Started sshd@7-139.178.70.108:22-147.75.109.163:43094.service - OpenSSH per-connection server daemon (147.75.109.163:43094). Jan 29 11:18:49.098875 sshd[5639]: Accepted publickey for core from 147.75.109.163 port 43094 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:18:49.099664 sshd-session[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:18:49.104635 systemd-logind[1521]: New session 10 of user core. Jan 29 11:18:49.109460 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:18:50.165661 sshd[5643]: Connection closed by 147.75.109.163 port 43094 Jan 29 11:18:50.166204 sshd-session[5639]: pam_unix(sshd:session): session closed for user core Jan 29 11:18:50.171255 systemd-logind[1521]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:18:50.171626 systemd[1]: sshd@7-139.178.70.108:22-147.75.109.163:43094.service: Deactivated successfully. Jan 29 11:18:50.173721 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:18:50.175511 systemd-logind[1521]: Removed session 10. Jan 29 11:18:55.176586 systemd[1]: Started sshd@8-139.178.70.108:22-147.75.109.163:43108.service - OpenSSH per-connection server daemon (147.75.109.163:43108). Jan 29 11:18:55.416239 sshd[5693]: Accepted publickey for core from 147.75.109.163 port 43108 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:18:55.416157 sshd-session[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:18:55.422720 systemd-logind[1521]: New session 11 of user core. Jan 29 11:18:55.425458 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:18:55.568824 sshd[5695]: Connection closed by 147.75.109.163 port 43108 Jan 29 11:18:55.568691 sshd-session[5693]: pam_unix(sshd:session): session closed for user core Jan 29 11:18:55.570677 systemd-logind[1521]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:18:55.571889 systemd[1]: sshd@8-139.178.70.108:22-147.75.109.163:43108.service: Deactivated successfully. Jan 29 11:18:55.573297 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:18:55.574217 systemd-logind[1521]: Removed session 11. Jan 29 11:19:00.579448 systemd[1]: Started sshd@9-139.178.70.108:22-147.75.109.163:33972.service - OpenSSH per-connection server daemon (147.75.109.163:33972). Jan 29 11:19:00.619057 sshd[5709]: Accepted publickey for core from 147.75.109.163 port 33972 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:19:00.619920 sshd-session[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:19:00.623171 systemd-logind[1521]: New session 12 of user core. Jan 29 11:19:00.629599 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:19:00.724504 sshd[5711]: Connection closed by 147.75.109.163 port 33972 Jan 29 11:19:00.725105 sshd-session[5709]: pam_unix(sshd:session): session closed for user core Jan 29 11:19:00.731076 systemd[1]: sshd@9-139.178.70.108:22-147.75.109.163:33972.service: Deactivated successfully. Jan 29 11:19:00.732148 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:19:00.733159 systemd-logind[1521]: Session 12 logged out. Waiting for processes to exit. 
Jan 29 11:19:00.734233 systemd[1]: Started sshd@10-139.178.70.108:22-147.75.109.163:33980.service - OpenSSH per-connection server daemon (147.75.109.163:33980). Jan 29 11:19:00.736218 systemd-logind[1521]: Removed session 12. Jan 29 11:19:00.786611 sshd[5723]: Accepted publickey for core from 147.75.109.163 port 33980 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:19:00.787498 sshd-session[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:19:00.790077 systemd-logind[1521]: New session 13 of user core. Jan 29 11:19:00.798614 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:19:00.936404 sshd[5726]: Connection closed by 147.75.109.163 port 33980 Jan 29 11:19:00.937106 sshd-session[5723]: pam_unix(sshd:session): session closed for user core Jan 29 11:19:00.944569 systemd[1]: sshd@10-139.178.70.108:22-147.75.109.163:33980.service: Deactivated successfully. Jan 29 11:19:00.947662 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:19:00.952059 systemd-logind[1521]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:19:00.958651 systemd[1]: Started sshd@11-139.178.70.108:22-147.75.109.163:33984.service - OpenSSH per-connection server daemon (147.75.109.163:33984). Jan 29 11:19:00.961640 systemd-logind[1521]: Removed session 13. Jan 29 11:19:01.001389 sshd[5735]: Accepted publickey for core from 147.75.109.163 port 33984 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:19:01.001837 sshd-session[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:19:01.004295 systemd-logind[1521]: New session 14 of user core. Jan 29 11:19:01.009559 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:19:01.102451 sshd[5737]: Connection closed by 147.75.109.163 port 33984 Jan 29 11:19:01.102909 sshd-session[5735]: pam_unix(sshd:session): session closed for user core Jan 29 11:19:01.105097 systemd[1]: sshd@11-139.178.70.108:22-147.75.109.163:33984.service: Deactivated successfully. Jan 29 11:19:01.106214 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:19:01.106674 systemd-logind[1521]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:19:01.107196 systemd-logind[1521]: Removed session 14. Jan 29 11:19:06.111235 systemd[1]: Started sshd@12-139.178.70.108:22-147.75.109.163:33986.service - OpenSSH per-connection server daemon (147.75.109.163:33986). Jan 29 11:19:06.164621 sshd[5773]: Accepted publickey for core from 147.75.109.163 port 33986 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:19:06.166625 sshd-session[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:19:06.170048 systemd-logind[1521]: New session 15 of user core. Jan 29 11:19:06.175470 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:19:06.283795 sshd[5775]: Connection closed by 147.75.109.163 port 33986 Jan 29 11:19:06.284650 sshd-session[5773]: pam_unix(sshd:session): session closed for user core Jan 29 11:19:06.287782 systemd-logind[1521]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:19:06.287876 systemd[1]: sshd@12-139.178.70.108:22-147.75.109.163:33986.service: Deactivated successfully. Jan 29 11:19:06.288927 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:19:06.289420 systemd-logind[1521]: Removed session 15. 
Jan 29 11:19:11.296414 systemd[1]: Started sshd@13-139.178.70.108:22-147.75.109.163:59890.service - OpenSSH per-connection server daemon (147.75.109.163:59890). Jan 29 11:19:11.386052 sshd[5788]: Accepted publickey for core from 147.75.109.163 port 59890 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:19:11.387191 sshd-session[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:19:11.389949 systemd-logind[1521]: New session 16 of user core. Jan 29 11:19:11.394456 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:19:11.516840 sshd[5791]: Connection closed by 147.75.109.163 port 59890 Jan 29 11:19:11.517743 sshd-session[5788]: pam_unix(sshd:session): session closed for user core Jan 29 11:19:11.523071 systemd[1]: sshd@13-139.178.70.108:22-147.75.109.163:59890.service: Deactivated successfully. Jan 29 11:19:11.524110 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:19:11.524891 systemd-logind[1521]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:19:11.527595 systemd[1]: Started sshd@14-139.178.70.108:22-147.75.109.163:59900.service - OpenSSH per-connection server daemon (147.75.109.163:59900). Jan 29 11:19:11.529014 systemd-logind[1521]: Removed session 16. Jan 29 11:19:11.584710 sshd[5801]: Accepted publickey for core from 147.75.109.163 port 59900 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:19:11.585439 sshd-session[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:19:11.588401 systemd-logind[1521]: New session 17 of user core. Jan 29 11:19:11.593449 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:19:12.069530 sshd[5803]: Connection closed by 147.75.109.163 port 59900 Jan 29 11:19:12.073564 sshd-session[5801]: pam_unix(sshd:session): session closed for user core Jan 29 11:19:12.077532 systemd[1]: Started sshd@15-139.178.70.108:22-147.75.109.163:59912.service - OpenSSH per-connection server daemon (147.75.109.163:59912). Jan 29 11:19:12.078449 systemd[1]: sshd@14-139.178.70.108:22-147.75.109.163:59900.service: Deactivated successfully. Jan 29 11:19:12.080467 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:19:12.082059 systemd-logind[1521]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:19:12.083147 systemd-logind[1521]: Removed session 17. Jan 29 11:19:12.148247 sshd[5812]: Accepted publickey for core from 147.75.109.163 port 59912 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:19:12.149253 sshd-session[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:19:12.152939 systemd-logind[1521]: New session 18 of user core. Jan 29 11:19:12.156541 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:19:12.843041 sshd[5816]: Connection closed by 147.75.109.163 port 59912 Jan 29 11:19:12.843270 sshd-session[5812]: pam_unix(sshd:session): session closed for user core Jan 29 11:19:12.849841 systemd[1]: sshd@15-139.178.70.108:22-147.75.109.163:59912.service: Deactivated successfully. Jan 29 11:19:12.853293 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:19:12.854304 systemd-logind[1521]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:19:12.858781 systemd[1]: Started sshd@16-139.178.70.108:22-147.75.109.163:59916.service - OpenSSH per-connection server daemon (147.75.109.163:59916). 
Jan 29 11:19:12.860362 systemd-logind[1521]: Removed session 18. Jan 29 11:19:12.942609 sshd[5830]: Accepted publickey for core from 147.75.109.163 port 59916 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:19:12.943449 sshd-session[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:19:12.946293 systemd-logind[1521]: New session 19 of user core. Jan 29 11:19:12.958477 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:19:13.194915 sshd[5834]: Connection closed by 147.75.109.163 port 59916 Jan 29 11:19:13.195950 sshd-session[5830]: pam_unix(sshd:session): session closed for user core Jan 29 11:19:13.203138 systemd[1]: sshd@16-139.178.70.108:22-147.75.109.163:59916.service: Deactivated successfully. Jan 29 11:19:13.205079 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:19:13.207183 systemd-logind[1521]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:19:13.212965 systemd[1]: Started sshd@17-139.178.70.108:22-147.75.109.163:59920.service - OpenSSH per-connection server daemon (147.75.109.163:59920). Jan 29 11:19:13.213970 systemd-logind[1521]: Removed session 19. Jan 29 11:19:13.258308 sshd[5843]: Accepted publickey for core from 147.75.109.163 port 59920 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:19:13.259536 sshd-session[5843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:19:13.263365 systemd-logind[1521]: New session 20 of user core. Jan 29 11:19:13.268501 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 11:19:13.365115 sshd[5845]: Connection closed by 147.75.109.163 port 59920 Jan 29 11:19:13.365467 sshd-session[5843]: pam_unix(sshd:session): session closed for user core Jan 29 11:19:13.367100 systemd-logind[1521]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:19:13.367786 systemd[1]: sshd@17-139.178.70.108:22-147.75.109.163:59920.service: Deactivated successfully. Jan 29 11:19:13.369077 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:19:13.370174 systemd-logind[1521]: Removed session 20. Jan 29 11:19:18.375324 systemd[1]: Started sshd@18-139.178.70.108:22-147.75.109.163:51440.service - OpenSSH per-connection server daemon (147.75.109.163:51440). Jan 29 11:19:18.412256 sshd[5857]: Accepted publickey for core from 147.75.109.163 port 51440 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:19:18.413000 sshd-session[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:19:18.415838 systemd-logind[1521]: New session 21 of user core. Jan 29 11:19:18.423485 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 11:19:18.517962 sshd[5859]: Connection closed by 147.75.109.163 port 51440 Jan 29 11:19:18.519048 sshd-session[5857]: pam_unix(sshd:session): session closed for user core Jan 29 11:19:18.521028 systemd[1]: sshd@18-139.178.70.108:22-147.75.109.163:51440.service: Deactivated successfully. Jan 29 11:19:18.522751 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:19:18.523537 systemd-logind[1521]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:19:18.524060 systemd-logind[1521]: Removed session 21. Jan 29 11:19:23.528731 systemd[1]: Started sshd@19-139.178.70.108:22-147.75.109.163:51456.service - OpenSSH per-connection server daemon (147.75.109.163:51456). 
Jan 29 11:19:23.855555 sshd[5888]: Accepted publickey for core from 147.75.109.163 port 51456 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:19:23.866581 sshd-session[5888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:19:23.869112 systemd-logind[1521]: New session 22 of user core. Jan 29 11:19:23.874494 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:19:24.206040 sshd[5890]: Connection closed by 147.75.109.163 port 51456 Jan 29 11:19:24.206486 sshd-session[5888]: pam_unix(sshd:session): session closed for user core Jan 29 11:19:24.210147 systemd[1]: sshd@19-139.178.70.108:22-147.75.109.163:51456.service: Deactivated successfully. Jan 29 11:19:24.211303 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:19:24.211755 systemd-logind[1521]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:19:24.212437 systemd-logind[1521]: Removed session 22. Jan 29 11:19:29.224684 systemd[1]: Started sshd@20-139.178.70.108:22-147.75.109.163:54884.service - OpenSSH per-connection server daemon (147.75.109.163:54884). Jan 29 11:19:29.282325 sshd[5910]: Accepted publickey for core from 147.75.109.163 port 54884 ssh2: RSA SHA256:oQKHht31ZrZ2aBtTxFwhSjNHKxqmsfAvv5OvtSg5zro Jan 29 11:19:29.283277 sshd-session[5910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:19:29.285897 systemd-logind[1521]: New session 23 of user core. Jan 29 11:19:29.292505 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 11:19:29.441250 sshd[5912]: Connection closed by 147.75.109.163 port 54884 Jan 29 11:19:29.441568 sshd-session[5910]: pam_unix(sshd:session): session closed for user core Jan 29 11:19:29.444043 systemd[1]: sshd@20-139.178.70.108:22-147.75.109.163:54884.service: Deactivated successfully. Jan 29 11:19:29.445496 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 11:19:29.446455 systemd-logind[1521]: Session 23 logged out. Waiting for processes to exit. Jan 29 11:19:29.447124 systemd-logind[1521]: Removed session 23.