Dec 13 01:45:42.741238 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:45:42.741255 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:45:42.741261 kernel: Disabled fast string operations
Dec 13 01:45:42.741265 kernel: BIOS-provided physical RAM map:
Dec 13 01:45:42.741269 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Dec 13 01:45:42.741273 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Dec 13 01:45:42.741279 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Dec 13 01:45:42.741283 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Dec 13 01:45:42.741287 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Dec 13 01:45:42.741292 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Dec 13 01:45:42.741296 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Dec 13 01:45:42.741300 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Dec 13 01:45:42.741304 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Dec 13 01:45:42.741308 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Dec 13 01:45:42.741314 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Dec 13 01:45:42.741319 kernel: NX (Execute Disable) protection: active
Dec 13 01:45:42.741324 kernel: APIC: Static calls initialized
Dec 13 01:45:42.741329 kernel: SMBIOS 2.7 present.
Dec 13 01:45:42.741334 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Dec 13 01:45:42.741338 kernel: vmware: hypercall mode: 0x00
Dec 13 01:45:42.741343 kernel: Hypervisor detected: VMware
Dec 13 01:45:42.741348 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Dec 13 01:45:42.741354 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Dec 13 01:45:42.741358 kernel: vmware: using clock offset of 2502493562 ns
Dec 13 01:45:42.741363 kernel: tsc: Detected 3408.000 MHz processor
Dec 13 01:45:42.741368 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:45:42.741373 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:45:42.741378 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Dec 13 01:45:42.741383 kernel: total RAM covered: 3072M
Dec 13 01:45:42.741388 kernel: Found optimal setting for mtrr clean up
Dec 13 01:45:42.741393 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Dec 13 01:45:42.741399 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Dec 13 01:45:42.741404 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:45:42.741409 kernel: Using GB pages for direct mapping
Dec 13 01:45:42.741413 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:45:42.741418 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Dec 13 01:45:42.741423 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Dec 13 01:45:42.741428 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Dec 13 01:45:42.741433 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Dec 13 01:45:42.741438 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Dec 13 01:45:42.741446 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Dec 13 01:45:42.741451 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Dec 13 01:45:42.741456 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Dec 13 01:45:42.741461 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Dec 13 01:45:42.741466 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Dec 13 01:45:42.741473 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Dec 13 01:45:42.741478 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Dec 13 01:45:42.741483 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Dec 13 01:45:42.741488 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Dec 13 01:45:42.741493 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Dec 13 01:45:42.741498 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Dec 13 01:45:42.741503 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Dec 13 01:45:42.741508 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Dec 13 01:45:42.741513 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Dec 13 01:45:42.741518 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Dec 13 01:45:42.741525 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Dec 13 01:45:42.741530 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Dec 13 01:45:42.741535 kernel: system APIC only can use physical flat
Dec 13 01:45:42.741540 kernel: APIC: Switched APIC routing to: physical flat
Dec 13 01:45:42.741545 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 01:45:42.741550 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 01:45:42.741555 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 01:45:42.741560 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 01:45:42.741565 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 01:45:42.741571 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 01:45:42.741576 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 01:45:42.741581 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 01:45:42.741586 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Dec 13 01:45:42.741591 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Dec 13 01:45:42.741596 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Dec 13 01:45:42.741601 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Dec 13 01:45:42.741606 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Dec 13 01:45:42.741611 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Dec 13 01:45:42.741616 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Dec 13 01:45:42.741622 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Dec 13 01:45:42.741627 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Dec 13 01:45:42.741632 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Dec 13 01:45:42.741637 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Dec 13 01:45:42.741642 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Dec 13 01:45:42.741647 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Dec 13 01:45:42.741652 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Dec 13 01:45:42.741657 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Dec 13 01:45:42.741661 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Dec 13 01:45:42.741666 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Dec 13 01:45:42.741672 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Dec 13 01:45:42.741677 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Dec 13 01:45:42.741682 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Dec 13 01:45:42.741687 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Dec 13 01:45:42.741692 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Dec 13 01:45:42.741697 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Dec 13 01:45:42.741702 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Dec 13 01:45:42.741707 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Dec 13 01:45:42.741712 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Dec 13 01:45:42.741717 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Dec 13 01:45:42.741723 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Dec 13 01:45:42.741728 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Dec 13 01:45:42.741733 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Dec 13 01:45:42.741738 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Dec 13 01:45:42.741743 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Dec 13 01:45:42.741748 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Dec 13 01:45:42.741753 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Dec 13 01:45:42.741758 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Dec 13 01:45:42.741763 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Dec 13 01:45:42.741768 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Dec 13 01:45:42.741773 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Dec 13 01:45:42.741779 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Dec 13 01:45:42.741784 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Dec 13 01:45:42.741789 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Dec 13 01:45:42.741794 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Dec 13 01:45:42.741799 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Dec 13 01:45:42.741804 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Dec 13 01:45:42.741809 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Dec 13 01:45:42.741814 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Dec 13 01:45:42.742135 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Dec 13 01:45:42.742141 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Dec 13 01:45:42.742148 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Dec 13 01:45:42.742154 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Dec 13 01:45:42.742159 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Dec 13 01:45:42.742168 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Dec 13 01:45:42.742174 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Dec 13 01:45:42.742180 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Dec 13 01:45:42.742185 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Dec 13 01:45:42.742190 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Dec 13 01:45:42.742197 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Dec 13 01:45:42.742203 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Dec 13 01:45:42.742208 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Dec 13 01:45:42.742213 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Dec 13 01:45:42.742219 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Dec 13 01:45:42.742224 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Dec 13 01:45:42.742229 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Dec 13 01:45:42.742234 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Dec 13 01:45:42.742240 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Dec 13 01:45:42.742245 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Dec 13 01:45:42.742257 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Dec 13 01:45:42.742272 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Dec 13 01:45:42.742283 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Dec 13 01:45:42.742289 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Dec 13 01:45:42.742300 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Dec 13 01:45:42.742307 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Dec 13 01:45:42.742312 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Dec 13 01:45:42.742317 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Dec 13 01:45:42.742322 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Dec 13 01:45:42.742328 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Dec 13 01:45:42.742335 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Dec 13 01:45:42.742341 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Dec 13 01:45:42.742346 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Dec 13 01:45:42.742351 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Dec 13 01:45:42.742357 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Dec 13 01:45:42.742362 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Dec 13 01:45:42.742367 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Dec 13 01:45:42.742373 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Dec 13 01:45:42.742378 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Dec 13 01:45:42.742384 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Dec 13 01:45:42.742390 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Dec 13 01:45:42.742395 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Dec 13 01:45:42.742401 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Dec 13 01:45:42.742406 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Dec 13 01:45:42.742411 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Dec 13 01:45:42.742417 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Dec 13 01:45:42.742422 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Dec 13 01:45:42.742428 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Dec 13 01:45:42.742433 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Dec 13 01:45:42.742438 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Dec 13 01:45:42.742444 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Dec 13 01:45:42.742450 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Dec 13 01:45:42.742456 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Dec 13 01:45:42.742461 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Dec 13 01:45:42.742466 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Dec 13 01:45:42.742472 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Dec 13 01:45:42.742477 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Dec 13 01:45:42.742482 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Dec 13 01:45:42.742488 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Dec 13 01:45:42.742493 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Dec 13 01:45:42.742498 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Dec 13 01:45:42.742505 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Dec 13 01:45:42.742510 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Dec 13 01:45:42.742516 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Dec 13 01:45:42.742521 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Dec 13 01:45:42.742527 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Dec 13 01:45:42.742532 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Dec 13 01:45:42.742538 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Dec 13 01:45:42.742543 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Dec 13 01:45:42.742548 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Dec 13 01:45:42.742553 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Dec 13 01:45:42.742561 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Dec 13 01:45:42.742566 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Dec 13 01:45:42.742572 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Dec 13 01:45:42.742577 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 01:45:42.742582 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 01:45:42.742588 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Dec 13 01:45:42.742594 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Dec 13 01:45:42.742599 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Dec 13 01:45:42.742605 kernel: Zone ranges:
Dec 13 01:45:42.742612 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:45:42.742617 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Dec 13 01:45:42.742623 kernel: Normal empty
Dec 13 01:45:42.742628 kernel: Movable zone start for each node
Dec 13 01:45:42.742634 kernel: Early memory node ranges
Dec 13 01:45:42.742639 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Dec 13 01:45:42.742644 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Dec 13 01:45:42.742650 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Dec 13 01:45:42.742655 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Dec 13 01:45:42.742661 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:45:42.742668 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Dec 13 01:45:42.742673 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Dec 13 01:45:42.742678 kernel: ACPI: PM-Timer IO Port: 0x1008
Dec 13 01:45:42.742684 kernel: system APIC only can use physical flat
Dec 13 01:45:42.742689 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Dec 13 01:45:42.742695 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Dec 13 01:45:42.742700 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Dec 13 01:45:42.742705 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Dec 13 01:45:42.742711 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Dec 13 01:45:42.742717 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Dec 13 01:45:42.742722 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Dec 13 01:45:42.742728 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Dec 13 01:45:42.742733 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Dec 13 01:45:42.742739 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Dec 13 01:45:42.742744 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Dec 13 01:45:42.742749 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Dec 13 01:45:42.742755 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Dec 13 01:45:42.742760 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Dec 13 01:45:42.742765 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Dec 13 01:45:42.742772 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Dec 13 01:45:42.742777 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Dec 13 01:45:42.742783 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Dec 13 01:45:42.742788 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Dec 13 01:45:42.742793 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Dec 13 01:45:42.742799 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Dec 13 01:45:42.742804 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Dec 13 01:45:42.742809 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Dec 13 01:45:42.742822 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Dec 13 01:45:42.742828 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Dec 13 01:45:42.742835 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Dec 13 01:45:42.742841 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Dec 13 01:45:42.742846 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Dec 13 01:45:42.742851 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Dec 13 01:45:42.742857 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Dec 13 01:45:42.742862 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Dec 13 01:45:42.742868 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Dec 13 01:45:42.742873 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Dec 13 01:45:42.742878 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Dec 13 01:45:42.742885 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Dec 13 01:45:42.742890 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Dec 13 01:45:42.742896 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Dec 13 01:45:42.742901 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Dec 13 01:45:42.742907 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Dec 13 01:45:42.742912 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Dec 13 01:45:42.742918 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Dec 13 01:45:42.742923 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Dec 13 01:45:42.742928 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Dec 13 01:45:42.742934 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Dec 13 01:45:42.742940 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Dec 13 01:45:42.742946 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Dec 13 01:45:42.742951 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Dec 13 01:45:42.742956 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Dec 13 01:45:42.742962 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Dec 13 01:45:42.742967 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Dec 13 01:45:42.742973 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Dec 13 01:45:42.742978 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Dec 13 01:45:42.742983 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Dec 13 01:45:42.742990 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Dec 13 01:45:42.742995 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Dec 13 01:45:42.743001 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Dec 13 01:45:42.743008 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Dec 13 01:45:42.743017 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Dec 13 01:45:42.743025 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Dec 13 01:45:42.743033 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Dec 13 01:45:42.743042 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Dec 13 01:45:42.743051 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Dec 13 01:45:42.743058 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Dec 13 01:45:42.743065 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Dec 13 01:45:42.743070 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Dec 13 01:45:42.743076 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Dec 13 01:45:42.743098 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Dec 13 01:45:42.743119 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Dec 13 01:45:42.743141 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Dec 13 01:45:42.743151 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Dec 13 01:45:42.743156 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Dec 13 01:45:42.743162 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Dec 13 01:45:42.743169 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Dec 13 01:45:42.743175 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Dec 13 01:45:42.743180 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Dec 13 01:45:42.743186 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Dec 13 01:45:42.743191 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Dec 13 01:45:42.743197 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Dec 13 01:45:42.743202 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Dec 13 01:45:42.743207 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Dec 13 01:45:42.743213 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Dec 13 01:45:42.743218 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Dec 13 01:45:42.743225 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Dec 13 01:45:42.743231 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Dec 13 01:45:42.743239 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Dec 13 01:45:42.743245 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Dec 13 01:45:42.743253 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Dec 13 01:45:42.743259 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Dec 13 01:45:42.743264 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Dec 13 01:45:42.743270 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Dec 13 01:45:42.743275 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Dec 13 01:45:42.743282 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Dec 13 01:45:42.743287 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Dec 13 01:45:42.743293 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Dec 13 01:45:42.743298 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Dec 13 01:45:42.743304 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Dec 13 01:45:42.743309 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Dec 13 01:45:42.743314 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Dec 13 01:45:42.743320 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Dec 13 01:45:42.743325 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Dec 13 01:45:42.743331 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Dec 13 01:45:42.743337 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Dec 13 01:45:42.743343 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Dec 13 01:45:42.743348 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Dec 13 01:45:42.743354 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Dec 13 01:45:42.743359 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Dec 13 01:45:42.743364 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Dec 13 01:45:42.743370 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Dec 13 01:45:42.743375 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Dec 13 01:45:42.743381 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Dec 13 01:45:42.743386 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Dec 13 01:45:42.743393 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Dec 13 01:45:42.743398 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Dec 13 01:45:42.743403 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Dec 13 01:45:42.743409 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Dec 13 01:45:42.743417 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Dec 13 01:45:42.743423 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Dec 13 01:45:42.743428 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Dec 13 01:45:42.743434 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Dec 13 01:45:42.743439 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Dec 13 01:45:42.743446 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Dec 13 01:45:42.743451 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Dec 13 01:45:42.743457 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Dec 13 01:45:42.743462 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Dec 13 01:45:42.743468 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Dec 13 01:45:42.743473 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Dec 13 01:45:42.743479 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Dec 13 01:45:42.743484 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Dec 13 01:45:42.743489 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:45:42.743495 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Dec 13 01:45:42.743502 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:45:42.743507 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Dec 13 01:45:42.743513 kernel: TSC deadline timer available
Dec 13 01:45:42.743519 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Dec 13 01:45:42.743524 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Dec 13 01:45:42.743529 kernel: Booting paravirtualized kernel on VMware hypervisor
Dec 13 01:45:42.743535 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:45:42.743541 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Dec 13 01:45:42.743546 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Dec 13 01:45:42.743553 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Dec 13 01:45:42.743559 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Dec 13 01:45:42.743564 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Dec 13 01:45:42.743570 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Dec 13 01:45:42.743575 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Dec 13 01:45:42.743580 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Dec 13 01:45:42.743593 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Dec 13 01:45:42.743599 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Dec 13 01:45:42.743605 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Dec 13 01:45:42.743612 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Dec 13 01:45:42.743618 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Dec 13 01:45:42.743623 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Dec 13 01:45:42.743629 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Dec 13 01:45:42.743634 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Dec 13 01:45:42.743640 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Dec 13 01:45:42.743646 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Dec 13 01:45:42.743651 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Dec 13 01:45:42.743659 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:45:42.743665 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:45:42.743671 kernel: random: crng init done
Dec 13 01:45:42.743677 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Dec 13 01:45:42.743683 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Dec 13 01:45:42.743688 kernel: printk: log_buf_len min size: 262144 bytes
Dec 13 01:45:42.743694 kernel: printk: log_buf_len: 1048576 bytes
Dec 13 01:45:42.743700 kernel: printk: early log buf free: 239648(91%)
Dec 13 01:45:42.743707 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:45:42.743713 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:45:42.743719 kernel: Fallback order for Node 0: 0
Dec 13 01:45:42.743725 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Dec 13 01:45:42.743731 kernel: Policy zone: DMA32
Dec 13 01:45:42.743736 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:45:42.743743 kernel: Memory: 1936372K/2096628K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 159996K reserved, 0K cma-reserved)
Dec 13 01:45:42.743750 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Dec 13 01:45:42.743756 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:45:42.743761 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:45:42.743767 kernel: Dynamic Preempt: voluntary
Dec 13 01:45:42.743773 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:45:42.743779 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:45:42.743785 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Dec 13 01:45:42.743791 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:45:42.743798 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:45:42.743804 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:45:42.743810 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:45:42.743849 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Dec 13 01:45:42.743856 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Dec 13 01:45:42.743862 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Dec 13 01:45:42.743868 kernel: Console: colour VGA+ 80x25
Dec 13 01:45:42.743874 kernel: printk: console [tty0] enabled
Dec 13 01:45:42.743880 kernel: printk: console [ttyS0] enabled
Dec 13 01:45:42.743885 kernel: ACPI: Core revision 20230628
Dec 13 01:45:42.743894 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Dec 13 01:45:42.743899 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:45:42.743905 kernel: x2apic enabled
Dec 13 01:45:42.743911 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:45:42.743917 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:45:42.743923 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Dec 13 01:45:42.743929 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Dec 13 01:45:42.743935 kernel: Disabled fast string operations
Dec 13 01:45:42.743941 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 01:45:42.743948 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 01:45:42.743955 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:45:42.743961 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 01:45:42.743967 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 01:45:42.743972 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Dec 13 01:45:42.743978 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:45:42.743984 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Dec 13 01:45:42.743990 kernel: RETBleed: Mitigation: Enhanced IBRS
Dec 13 01:45:42.743996 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:45:42.744003 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:45:42.744009 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:45:42.744014 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 01:45:42.744020 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 01:45:42.744026 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:45:42.744032 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:45:42.744038 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:45:42.744044 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:45:42.744051 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:45:42.744057 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:45:42.744063 kernel: pid_max: default: 131072 minimum: 1024
Dec 13 01:45:42.744069 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:45:42.744075 kernel: landlock: Up and running.
Dec 13 01:45:42.744081 kernel: SELinux: Initializing.
Dec 13 01:45:42.744087 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:45:42.744093 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:45:42.744099 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Dec 13 01:45:42.744106 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Dec 13 01:45:42.744112 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Dec 13 01:45:42.744117 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Dec 13 01:45:42.744123 kernel: Performance Events: Skylake events, core PMU driver.
Dec 13 01:45:42.744129 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Dec 13 01:45:42.744135 kernel: core: CPUID marked event: 'instructions' unavailable
Dec 13 01:45:42.744141 kernel: core: CPUID marked event: 'bus cycles' unavailable
Dec 13 01:45:42.744146 kernel: core: CPUID marked event: 'cache references' unavailable
Dec 13 01:45:42.744152 kernel: core: CPUID marked event: 'cache misses' unavailable
Dec 13 01:45:42.744158 kernel: core: CPUID marked event: 'branch instructions' unavailable
Dec 13 01:45:42.744164 kernel: core: CPUID marked event: 'branch misses' unavailable
Dec 13 01:45:42.744170 kernel: ... version: 1
Dec 13 01:45:42.744176 kernel: ... bit width: 48
Dec 13 01:45:42.744182 kernel: ... generic registers: 4
Dec 13 01:45:42.744187 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:45:42.744193 kernel: ...
max period: 000000007fffffff Dec 13 01:45:42.744199 kernel: ... fixed-purpose events: 0 Dec 13 01:45:42.744205 kernel: ... event mask: 000000000000000f Dec 13 01:45:42.744212 kernel: signal: max sigframe size: 1776 Dec 13 01:45:42.744218 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:45:42.744224 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:45:42.744230 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 01:45:42.744236 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:45:42.744242 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:45:42.744247 kernel: .... node #0, CPUs: #1 Dec 13 01:45:42.744253 kernel: Disabled fast string operations Dec 13 01:45:42.744259 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Dec 13 01:45:42.744266 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Dec 13 01:45:42.744271 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:45:42.744277 kernel: smpboot: Max logical packages: 128 Dec 13 01:45:42.744283 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Dec 13 01:45:42.744289 kernel: devtmpfs: initialized Dec 13 01:45:42.744295 kernel: x86/mm: Memory block size: 128MB Dec 13 01:45:42.744301 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Dec 13 01:45:42.744307 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:45:42.744313 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Dec 13 01:45:42.744319 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:45:42.744326 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:45:42.744332 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:45:42.744337 kernel: audit: type=2000 audit(1734054341.067:1): state=initialized audit_enabled=0 res=1 Dec 13 01:45:42.744343 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:45:42.744349 
kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:45:42.744355 kernel: cpuidle: using governor menu Dec 13 01:45:42.744361 kernel: Simple Boot Flag at 0x36 set to 0x80 Dec 13 01:45:42.744367 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:45:42.744373 kernel: dca service started, version 1.12.1 Dec 13 01:45:42.744380 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Dec 13 01:45:42.744386 kernel: PCI: Using configuration type 1 for base access Dec 13 01:45:42.744391 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 13 01:45:42.744397 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:45:42.744403 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:45:42.744409 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:45:42.744415 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:45:42.744421 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:45:42.744428 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:45:42.744434 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:45:42.744439 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:45:42.744445 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:45:42.744451 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Dec 13 01:45:42.744457 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:45:42.744463 kernel: ACPI: Interpreter enabled Dec 13 01:45:42.744469 kernel: ACPI: PM: (supports S0 S1 S5) Dec 13 01:45:42.744475 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:45:42.744482 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:45:42.744487 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 01:45:42.744493 kernel: ACPI: Enabled 4 
GPEs in block 00 to 0F Dec 13 01:45:42.744499 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Dec 13 01:45:42.744579 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:45:42.744635 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Dec 13 01:45:42.744686 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Dec 13 01:45:42.744694 kernel: PCI host bridge to bus 0000:00 Dec 13 01:45:42.744748 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:45:42.744793 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Dec 13 01:45:42.744858 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 01:45:42.744904 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:45:42.744950 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Dec 13 01:45:42.744994 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Dec 13 01:45:42.745056 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Dec 13 01:45:42.745112 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Dec 13 01:45:42.745166 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Dec 13 01:45:42.745221 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Dec 13 01:45:42.745271 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Dec 13 01:45:42.745333 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Dec 13 01:45:42.745406 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Dec 13 01:45:42.745457 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Dec 13 01:45:42.745507 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Dec 13 01:45:42.745561 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Dec 13 01:45:42.745611 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed 
by PIIX4 ACPI Dec 13 01:45:42.745660 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Dec 13 01:45:42.745716 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Dec 13 01:45:42.745769 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Dec 13 01:45:42.745831 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Dec 13 01:45:42.745900 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Dec 13 01:45:42.745983 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Dec 13 01:45:42.746036 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Dec 13 01:45:42.746086 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Dec 13 01:45:42.746135 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Dec 13 01:45:42.746188 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:45:42.746242 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Dec 13 01:45:42.746296 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.746347 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.746403 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.746464 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.746521 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.746572 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.746626 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.746676 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.746730 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.746781 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.749004 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.749067 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot 
D3cold Dec 13 01:45:42.749125 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.749176 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.749231 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.749281 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.749334 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.749388 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.749448 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.749499 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.749552 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.749602 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.749659 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.749709 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.749763 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.749813 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.750905 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.750960 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.751019 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.751070 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.751122 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.751172 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.751224 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.751273 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.751329 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.751379 kernel: pci 0000:00:17.1: 
PME# supported from D0 D3hot D3cold Dec 13 01:45:42.751431 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.751479 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.751531 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.751579 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.751631 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.751682 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.751734 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.751784 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.752880 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.752938 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.752993 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.753048 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.753101 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.753151 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.753204 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.753253 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.753306 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.753358 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.753412 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.753462 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.753514 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.753564 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.753631 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Dec 13 
01:45:42.753682 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.753737 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.753786 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.754251 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.754305 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.754356 kernel: pci_bus 0000:01: extended config space not accessible Dec 13 01:45:42.754410 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 01:45:42.754493 kernel: pci_bus 0000:02: extended config space not accessible Dec 13 01:45:42.754502 kernel: acpiphp: Slot [32] registered Dec 13 01:45:42.754508 kernel: acpiphp: Slot [33] registered Dec 13 01:45:42.754514 kernel: acpiphp: Slot [34] registered Dec 13 01:45:42.754519 kernel: acpiphp: Slot [35] registered Dec 13 01:45:42.754525 kernel: acpiphp: Slot [36] registered Dec 13 01:45:42.754531 kernel: acpiphp: Slot [37] registered Dec 13 01:45:42.754537 kernel: acpiphp: Slot [38] registered Dec 13 01:45:42.754544 kernel: acpiphp: Slot [39] registered Dec 13 01:45:42.754550 kernel: acpiphp: Slot [40] registered Dec 13 01:45:42.754555 kernel: acpiphp: Slot [41] registered Dec 13 01:45:42.754561 kernel: acpiphp: Slot [42] registered Dec 13 01:45:42.754566 kernel: acpiphp: Slot [43] registered Dec 13 01:45:42.754572 kernel: acpiphp: Slot [44] registered Dec 13 01:45:42.754577 kernel: acpiphp: Slot [45] registered Dec 13 01:45:42.754583 kernel: acpiphp: Slot [46] registered Dec 13 01:45:42.754589 kernel: acpiphp: Slot [47] registered Dec 13 01:45:42.754596 kernel: acpiphp: Slot [48] registered Dec 13 01:45:42.754601 kernel: acpiphp: Slot [49] registered Dec 13 01:45:42.754607 kernel: acpiphp: Slot [50] registered Dec 13 01:45:42.754612 kernel: acpiphp: Slot [51] registered Dec 13 01:45:42.754618 kernel: acpiphp: Slot [52] registered Dec 13 01:45:42.754624 kernel: acpiphp: Slot [53] registered 
Dec 13 01:45:42.754629 kernel: acpiphp: Slot [54] registered Dec 13 01:45:42.754635 kernel: acpiphp: Slot [55] registered Dec 13 01:45:42.754657 kernel: acpiphp: Slot [56] registered Dec 13 01:45:42.754663 kernel: acpiphp: Slot [57] registered Dec 13 01:45:42.754670 kernel: acpiphp: Slot [58] registered Dec 13 01:45:42.754675 kernel: acpiphp: Slot [59] registered Dec 13 01:45:42.754681 kernel: acpiphp: Slot [60] registered Dec 13 01:45:42.754687 kernel: acpiphp: Slot [61] registered Dec 13 01:45:42.754693 kernel: acpiphp: Slot [62] registered Dec 13 01:45:42.754713 kernel: acpiphp: Slot [63] registered Dec 13 01:45:42.754762 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Dec 13 01:45:42.754810 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Dec 13 01:45:42.755890 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Dec 13 01:45:42.755945 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 13 01:45:42.755993 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Dec 13 01:45:42.756042 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Dec 13 01:45:42.756090 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Dec 13 01:45:42.756138 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Dec 13 01:45:42.756186 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Dec 13 01:45:42.756241 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Dec 13 01:45:42.756296 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Dec 13 01:45:42.756346 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Dec 13 01:45:42.756396 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Dec 13 01:45:42.756475 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Dec 13 
01:45:42.756526 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Dec 13 01:45:42.756577 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Dec 13 01:45:42.756627 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Dec 13 01:45:42.756681 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Dec 13 01:45:42.756732 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Dec 13 01:45:42.756782 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Dec 13 01:45:42.759907 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Dec 13 01:45:42.759967 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Dec 13 01:45:42.760022 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Dec 13 01:45:42.760073 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Dec 13 01:45:42.760123 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Dec 13 01:45:42.760175 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Dec 13 01:45:42.760226 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Dec 13 01:45:42.760275 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Dec 13 01:45:42.760324 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Dec 13 01:45:42.760375 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Dec 13 01:45:42.760424 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Dec 13 01:45:42.760474 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Dec 13 01:45:42.760527 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Dec 13 01:45:42.760577 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Dec 13 01:45:42.760627 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Dec 13 01:45:42.760678 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Dec 13 01:45:42.760727 kernel: pci 0000:00:15.6: bridge window [mem 
0xfbd00000-0xfbdfffff] Dec 13 01:45:42.760779 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Dec 13 01:45:42.760859 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Dec 13 01:45:42.760912 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Dec 13 01:45:42.760962 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Dec 13 01:45:42.761018 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Dec 13 01:45:42.761071 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Dec 13 01:45:42.761139 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Dec 13 01:45:42.761193 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Dec 13 01:45:42.761243 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Dec 13 01:45:42.761409 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Dec 13 01:45:42.761624 kernel: pci 0000:0b:00.0: supports D1 D2 Dec 13 01:45:42.761678 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 01:45:42.763895 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Dec 13 01:45:42.763955 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Dec 13 01:45:42.764006 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Dec 13 01:45:42.764061 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Dec 13 01:45:42.764112 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Dec 13 01:45:42.764161 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Dec 13 01:45:42.764211 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Dec 13 01:45:42.764261 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Dec 13 01:45:42.764313 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Dec 13 01:45:42.764363 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Dec 13 01:45:42.764413 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Dec 13 01:45:42.764465 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Dec 13 01:45:42.764516 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Dec 13 01:45:42.764565 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Dec 13 01:45:42.764615 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Dec 13 01:45:42.764666 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Dec 13 01:45:42.764716 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Dec 13 01:45:42.764765 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Dec 13 01:45:42.765836 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Dec 13 01:45:42.765899 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Dec 13 01:45:42.765950 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Dec 13 01:45:42.766001 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Dec 13 01:45:42.766050 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Dec 13 01:45:42.766098 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Dec 13 01:45:42.766150 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Dec 13 01:45:42.766199 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Dec 13 01:45:42.766247 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 13 01:45:42.766301 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Dec 13 01:45:42.766350 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Dec 13 01:45:42.766399 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Dec 13 01:45:42.766454 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 13 01:45:42.766505 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Dec 13 01:45:42.766568 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Dec 13 01:45:42.766616 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Dec 13 01:45:42.766664 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Dec 13 01:45:42.766717 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Dec 13 01:45:42.766766 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Dec 13 01:45:42.768688 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Dec 13 01:45:42.768748 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Dec 13 01:45:42.768800 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Dec 13 01:45:42.768863 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Dec 13 01:45:42.768913 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 13 01:45:42.768984 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Dec 13 01:45:42.769048 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Dec 13 01:45:42.769096 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 13 01:45:42.769146 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Dec 13 01:45:42.769193 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Dec 13 01:45:42.769241 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Dec 13 01:45:42.769290 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Dec 13 01:45:42.769340 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Dec 13 01:45:42.769390 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Dec 13 01:45:42.769440 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Dec 13 01:45:42.769489 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Dec 13 01:45:42.769537 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Dec 13 01:45:42.769587 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Dec 13 01:45:42.769635 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Dec 13 01:45:42.769683 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Dec 13 01:45:42.769731 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Dec 13 01:45:42.769783 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Dec 13 01:45:42.769845 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Dec 13 01:45:42.769894 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Dec 13 01:45:42.769943 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Dec 13 01:45:42.769992 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Dec 13 01:45:42.770059 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Dec 13 01:45:42.770122 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Dec 13 01:45:42.770175 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Dec 13 01:45:42.770234 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Dec 13 01:45:42.770283 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Dec 13 01:45:42.770332 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Dec 13 
01:45:42.770381 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Dec 13 01:45:42.770430 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Dec 13 01:45:42.770480 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Dec 13 01:45:42.770529 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Dec 13 01:45:42.770579 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Dec 13 01:45:42.770632 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Dec 13 01:45:42.770686 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Dec 13 01:45:42.770734 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Dec 13 01:45:42.770784 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Dec 13 01:45:42.770878 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Dec 13 01:45:42.770930 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Dec 13 01:45:42.770938 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Dec 13 01:45:42.770945 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Dec 13 01:45:42.770953 kernel: ACPI: PCI: Interrupt link LNKB disabled
Dec 13 01:45:42.770959 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:45:42.770965 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Dec 13 01:45:42.770971 kernel: iommu: Default domain type: Translated
Dec 13 01:45:42.770977 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:45:42.770983 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:45:42.770989 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:45:42.770995 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Dec 13 01:45:42.771001 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Dec 13 01:45:42.771051 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Dec 13 01:45:42.771118 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Dec 13 01:45:42.771182 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:45:42.771191 kernel: vgaarb: loaded
Dec 13 01:45:42.771198 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Dec 13 01:45:42.771204 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Dec 13 01:45:42.771210 kernel: clocksource: Switched to clocksource tsc-early
Dec 13 01:45:42.771216 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:45:42.771222 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:45:42.771230 kernel: pnp: PnP ACPI init
Dec 13 01:45:42.771280 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Dec 13 01:45:42.771343 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Dec 13 01:45:42.771402 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Dec 13 01:45:42.771457 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Dec 13 01:45:42.771504 kernel: pnp 00:06: [dma 2]
Dec 13 01:45:42.771552 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Dec 13 01:45:42.771599 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Dec 13 01:45:42.771643 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Dec 13 01:45:42.771651 kernel: pnp: PnP ACPI: found 8 devices
Dec 13 01:45:42.771657 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:45:42.771663 kernel: NET: Registered PF_INET protocol family
Dec 13 01:45:42.771669 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:45:42.771675 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 01:45:42.771683 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:45:42.771689 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:45:42.771694 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 01:45:42.771700 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 01:45:42.771706 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:45:42.771712 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:45:42.771718 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:45:42.771723 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:45:42.771773 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Dec 13 01:45:42.771842 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 01:45:42.771896 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 01:45:42.771945 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 01:45:42.772009 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 01:45:42.772060 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Dec 13 01:45:42.772110 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Dec 13 01:45:42.772164 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
Dec 13 01:45:42.772213 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
Dec 13 01:45:42.772264 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
Dec 13 01:45:42.772312 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
Dec 13 01:45:42.772362 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
Dec 13 01:45:42.772414 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
Dec 13 01:45:42.772463 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
Dec 13 01:45:42.772512 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
Dec 13 01:45:42.772562 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
Dec 13 01:45:42.772611 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
Dec 13 01:45:42.772660 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
Dec 13 01:45:42.772711 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
Dec 13 01:45:42.772759 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000
Dec 13 01:45:42.772807 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000
Dec 13 01:45:42.772901 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000
Dec 13 01:45:42.772950 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000
Dec 13 01:45:42.772998 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref]
Dec 13 01:45:42.773051 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref]
Dec 13 01:45:42.773100 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.773149 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.773197 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.773245 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.773294 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.773342 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.773391 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.773443 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.773492 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.773540 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.773622 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.773671 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.773736 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.773786 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.773848 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.773904 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.773954 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.774003 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.774054 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.774103 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.774153 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.774203 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.774253 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.774305 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.774355 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.774404 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.774459 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.774509 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.774559 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.774609 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.774658 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.774711 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.774760 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.774810 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.775241 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.775295 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.775347 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.775398 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.775448 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.775501 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.775552 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.775602 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.775651 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.775701 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.775750 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.775799 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.775856 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.775907 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.775959 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.776041 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.776093 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.776144 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.776194 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.776244 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.776294 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.776344 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.776393 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.776457 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.776511 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.776562 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.776613 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.776662 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.776711 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.776761 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.776810 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.776897 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.776947 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.777000 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.777051 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.777100 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.777150 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.777200 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.777249 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.777300 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.777350 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.777400 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.777450 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.777503 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.777554 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.777605 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.777655 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.777705 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.777755 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.777804 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.777877 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 01:45:42.777930 kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
Dec 13 01:45:42.777984 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Dec 13 01:45:42.778033 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Dec 13 01:45:42.778083 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Dec 13 01:45:42.778138 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref]
Dec 13 01:45:42.778190 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Dec 13 01:45:42.778241 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Dec 13 01:45:42.778292 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Dec 13 01:45:42.778343 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
Dec 13 01:45:42.778397 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Dec 13 01:45:42.778448 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Dec 13 01:45:42.778499 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Dec 13 01:45:42.778549 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Dec 13 01:45:42.778601 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Dec 13 01:45:42.778651 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Dec 13 01:45:42.778702 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Dec 13 01:45:42.778751 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Dec 13 01:45:42.778801 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Dec 13 01:45:42.780920 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Dec 13 01:45:42.780992 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Dec 13 01:45:42.781047 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Dec 13 01:45:42.781098 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Dec 13 01:45:42.781149 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Dec 13 01:45:42.781205 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Dec 13 01:45:42.781256 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Dec 13 01:45:42.781308 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Dec 13 01:45:42.781360 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Dec 13 01:45:42.781410 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Dec 13 01:45:42.781465 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Dec 13 01:45:42.781518 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Dec 13 01:45:42.781568 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Dec 13 01:45:42.781618 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Dec 13 01:45:42.781673 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref]
Dec 13 01:45:42.781724 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Dec 13 01:45:42.781777 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Dec 13 01:45:42.781837 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Dec 13 01:45:42.781888 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
Dec 13 01:45:42.781941 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Dec 13 01:45:42.781991 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Dec 13 01:45:42.782041 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Dec 13 01:45:42.782091 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Dec 13 01:45:42.782142 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Dec 13 01:45:42.782192 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Dec 13 01:45:42.782254 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Dec 13 01:45:42.782320 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Dec 13 01:45:42.782372 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Dec 13 01:45:42.782422 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Dec 13 01:45:42.782471 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Dec 13 01:45:42.782523 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Dec 13 01:45:42.782572 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Dec 13 01:45:42.782622 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Dec 13 01:45:42.782673 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Dec 13 01:45:42.782722 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Dec 13 01:45:42.782775 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Dec 13 01:45:42.783685 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Dec 13 01:45:42.783750 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Dec 13 01:45:42.783804 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Dec 13 01:45:42.783901 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Dec 13 01:45:42.783954 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Dec 13 01:45:42.784004 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Dec 13 01:45:42.784057 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Dec 13 01:45:42.784109 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Dec 13 01:45:42.784163 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Dec 13 01:45:42.784214 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Dec 13 01:45:42.784266 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Dec 13 01:45:42.784316 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Dec 13 01:45:42.784366 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Dec 13 01:45:42.784421 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Dec 13 01:45:42.784475 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Dec 13 01:45:42.784525 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Dec 13 01:45:42.784575 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Dec 13 01:45:42.784626 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Dec 13 01:45:42.784682 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Dec 13 01:45:42.784732 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Dec 13 01:45:42.784782 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Dec 13 01:45:42.784841 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Dec 13 01:45:42.784892 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Dec 13 01:45:42.784943 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Dec 13 01:45:42.784994 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Dec 13 01:45:42.785044 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Dec 13 01:45:42.785094 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Dec 13 01:45:42.785149 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Dec 13 01:45:42.785199 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Dec 13 01:45:42.785250 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Dec 13 01:45:42.785301 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Dec 13 01:45:42.785352 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Dec 13 01:45:42.785402 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Dec 13 01:45:42.785455 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Dec 13 01:45:42.785505 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Dec 13 01:45:42.785556 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Dec 13 01:45:42.785606 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Dec 13 01:45:42.785661 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Dec 13 01:45:42.785711 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Dec 13 01:45:42.785761 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Dec 13 01:45:42.786300 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Dec 13 01:45:42.787687 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Dec 13 01:45:42.787748 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Dec 13 01:45:42.787802 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Dec 13 01:45:42.787871 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Dec 13 01:45:42.787923 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Dec 13 01:45:42.787978 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Dec 13 01:45:42.788031 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Dec 13 01:45:42.788082 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Dec 13 01:45:42.788132 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Dec 13 01:45:42.788184 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Dec 13 01:45:42.788234 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Dec 13 01:45:42.788285 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Dec 13 01:45:42.788338 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Dec 13 01:45:42.788389 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Dec 13 01:45:42.788440 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Dec 13 01:45:42.788495 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Dec 13 01:45:42.788545 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Dec 13 01:45:42.788595 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Dec 13 01:45:42.788646 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window]
Dec 13 01:45:42.788692 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window]
Dec 13 01:45:42.788736 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:45:42.788781 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window]
Dec 13 01:45:42.788833 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window]
Dec 13 01:45:42.788887 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff]
Dec 13 01:45:42.788934 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff]
Dec 13 01:45:42.788980 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref]
Dec 13 01:45:42.789026 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window]
Dec 13 01:45:42.789072 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window]
Dec 13 01:45:42.789118 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:45:42.789163 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window]
Dec 13 01:45:42.789213 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window]
Dec 13 01:45:42.789264 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff]
Dec 13 01:45:42.789311 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff]
Dec 13 01:45:42.789357 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref]
Dec 13 01:45:42.789407 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff]
Dec 13 01:45:42.789458 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff]
Dec 13 01:45:42.789505 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref]
Dec 13 01:45:42.789556 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff]
Dec 13 01:45:42.789603 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff]
Dec 13 01:45:42.789648 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref]
Dec 13 01:45:42.789697 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff]
Dec 13 01:45:42.789744 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref]
Dec 13 01:45:42.789794 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff]
Dec 13 01:45:42.791462 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref]
Dec 13 01:45:42.791525 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff]
Dec 13 01:45:42.791573 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref]
Dec 13 01:45:42.791624 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff]
Dec 13 01:45:42.791671 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref]
Dec 13 01:45:42.791724 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff]
Dec 13 01:45:42.791779 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref]
Dec 13 01:45:42.791847 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff]
Dec 13 01:45:42.791897 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff]
Dec 13 01:45:42.791942 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref]
Dec 13 01:45:42.791993 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff]
Dec 13 01:45:42.792039 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff]
Dec 13 01:45:42.792086 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref]
Dec 13 01:45:42.792140 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff]
Dec 13 01:45:42.792188 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff]
Dec 13 01:45:42.792237 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref]
Dec 13 01:45:42.792289 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff]
Dec 13 01:45:42.792339 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref]
Dec 13 01:45:42.792409 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff]
Dec 13 01:45:42.792465 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref]
Dec 13 01:45:42.792519 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff]
Dec 13 01:45:42.792566 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref]
Dec 13 01:45:42.792617 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff]
Dec 13 01:45:42.792663 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref]
Dec 13 01:45:42.792715 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff]
Dec 13 01:45:42.792762 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref]
Dec 13 01:45:42.792821 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff]
Dec 13 01:45:42.792871 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff]
Dec 13 01:45:42.792917 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref]
Dec 13 01:45:42.792968 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff]
Dec 13 01:45:42.793015 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff]
Dec 13 01:45:42.793062 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref]
Dec 13 01:45:42.793115 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff]
Dec 13 01:45:42.793162 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff]
Dec 13 01:45:42.793207 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref]
Dec 13 01:45:42.793257 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff]
Dec 13 01:45:42.793304 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref]
Dec 13 01:45:42.793356 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff]
Dec 13 01:45:42.793403 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref]
Dec 13 01:45:42.793459 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff]
Dec 13 01:45:42.793507 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref]
Dec 13 01:45:42.793558 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff]
Dec 13 01:45:42.793605 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref]
Dec 13 01:45:42.793655 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff]
Dec 13 01:45:42.793701 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref]
Dec 13 01:45:42.793755 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff]
Dec 13 01:45:42.793802 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff]
Dec 13 01:45:42.793981 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref]
Dec 13 01:45:42.794032 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff]
Dec 13 01:45:42.794080 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff]
Dec 13 01:45:42.794127 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref]
Dec 13 01:45:42.794181 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff]
Dec 13 01:45:42.794228 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref]
Dec 13 01:45:42.794278 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff]
Dec 13 01:45:42.794325 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref]
Dec 13 01:45:42.794375 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff]
Dec 13 01:45:42.794422 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref]
Dec 13 01:45:42.794474 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff]
Dec 13 01:45:42.794521 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref]
Dec 13 01:45:42.794575 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff]
Dec 13 01:45:42.794622 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref]
Dec 13 01:45:42.794673 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff]
Dec 13 01:45:42.794720 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref]
Dec 13 01:45:42.794780 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 01:45:42.794790 kernel: PCI: CLS 32 bytes, default 64
Dec 13 01:45:42.794797 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 01:45:42.794804 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Dec 13 01:45:42.794810 kernel: clocksource: Switched to clocksource tsc
Dec 13 01:45:42.794823 kernel: Initialise system trusted keyrings
Dec 13 01:45:42.794830 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 01:45:42.794837 kernel: Key type asymmetric registered
Dec 13 01:45:42.794843 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:45:42.794851 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:45:42.794858 kernel: io scheduler mq-deadline registered
Dec 13 01:45:42.794864 kernel: io scheduler kyber registered
Dec 13 01:45:42.794870 kernel: io scheduler bfq registered
Dec 13 01:45:42.795091 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24
Dec 13 01:45:42.795151 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.795205 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
Dec 13 01:45:42.795257 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.795314 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
Dec 13 01:45:42.795366 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.795426 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
Dec 13 01:45:42.795482 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.795535 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
Dec 13 01:45:42.795587 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.795643 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
Dec 13 01:45:42.795695 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.795747 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
Dec 13 01:45:42.795799 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.796097 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
Dec 13 01:45:42.796156 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.796209 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
Dec 13 01:45:42.796260 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.796311 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
Dec 13 01:45:42.796361 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.796418 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
Dec 13 01:45:42.796480 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.796536 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
Dec 13 01:45:42.796587 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.796639 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
Dec 13 01:45:42.796690 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.796741 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
Dec 13 01:45:42.796792 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.797081 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
Dec 13 01:45:42.797139 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.797195 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
Dec 13 01:45:42.797247 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.797300 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
Dec 13 01:45:42.797357 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.797410 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
Dec 13 01:45:42.797462 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.797514 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Dec 13 01:45:42.797565 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.797623 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Dec 13 01:45:42.797682 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.797738 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Dec 13 01:45:42.797789 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.797877 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Dec 13 01:45:42.797930 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.797982 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Dec 13 01:45:42.798037 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.798104 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Dec 13 01:45:42.798155 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.798207 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Dec 13 01:45:42.798258 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.798309 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Dec 13 01:45:42.798361 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.798412 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Dec 13 01:45:42.798497 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.798549 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Dec 13 01:45:42.798598 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.798649 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Dec 13 01:45:42.798701 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.798752 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
Dec 13 01:45:42.798801 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.799908 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
Dec 13 01:45:42.799968 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.800027 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
Dec 13 01:45:42.800079 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 01:45:42.800089 kernel: ioatdma: Intel(R) QuickData Technology Driver
5.00 Dec 13 01:45:42.800095 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:45:42.800102 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:45:42.800108 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Dec 13 01:45:42.800114 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:45:42.800123 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:45:42.800175 kernel: rtc_cmos 00:01: registered as rtc0 Dec 13 01:45:42.800222 kernel: rtc_cmos 00:01: setting system clock to 2024-12-13T01:45:42 UTC (1734054342) Dec 13 01:45:42.800267 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Dec 13 01:45:42.800276 kernel: intel_pstate: CPU model not supported Dec 13 01:45:42.800282 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:45:42.800306 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:45:42.800312 kernel: Segment Routing with IPv6 Dec 13 01:45:42.800321 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:45:42.800328 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:45:42.800334 kernel: Key type dns_resolver registered Dec 13 01:45:42.800340 kernel: IPI shorthand broadcast: enabled Dec 13 01:45:42.800347 kernel: sched_clock: Marking stable (906003767, 226214227)->(1190140307, -57922313) Dec 13 01:45:42.800353 kernel: registered taskstats version 1 Dec 13 01:45:42.800360 kernel: Loading compiled-in X.509 certificates Dec 13 01:45:42.800366 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:45:42.800372 kernel: Key type .fscrypt registered Dec 13 01:45:42.800378 kernel: Key type fscrypt-provisioning registered Dec 13 01:45:42.800386 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:45:42.800392 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:45:42.800398 kernel: ima: No architecture policies found
Dec 13 01:45:42.800404 kernel: clk: Disabling unused clocks
Dec 13 01:45:42.800411 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:45:42.800417 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:45:42.800424 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:45:42.800430 kernel: Run /init as init process
Dec 13 01:45:42.800437 kernel: with arguments:
Dec 13 01:45:42.800444 kernel: /init
Dec 13 01:45:42.800450 kernel: with environment:
Dec 13 01:45:42.800456 kernel: HOME=/
Dec 13 01:45:42.800462 kernel: TERM=linux
Dec 13 01:45:42.800469 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:45:42.800477 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:45:42.800486 systemd[1]: Detected virtualization vmware.
Dec 13 01:45:42.800494 systemd[1]: Detected architecture x86-64.
Dec 13 01:45:42.800501 systemd[1]: Running in initrd.
Dec 13 01:45:42.800507 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:45:42.800514 systemd[1]: Hostname set to .
Dec 13 01:45:42.800520 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:45:42.800527 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:45:42.800533 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:45:42.800540 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:45:42.800548 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:45:42.800555 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:45:42.800561 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:45:42.800568 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:45:42.800576 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:45:42.800583 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:45:42.800590 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:45:42.800597 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:45:42.800604 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:45:42.800610 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:45:42.800617 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:45:42.800623 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:45:42.800630 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:45:42.800637 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:45:42.800644 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:45:42.800650 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:45:42.800658 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:45:42.800664 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:45:42.800671 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:45:42.800678 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:45:42.800684 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:45:42.800691 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:45:42.800697 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:45:42.800704 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:45:42.800712 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:45:42.800718 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:45:42.800725 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:45:42.800743 systemd-journald[216]: Collecting audit messages is disabled.
Dec 13 01:45:42.800760 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:45:42.800767 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:45:42.800774 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:45:42.800781 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:45:42.800788 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:45:42.800796 kernel: Bridge firewalling registered
Dec 13 01:45:42.800802 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:45:42.800809 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:45:42.801412 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:45:42.801422 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:45:42.801429 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:45:42.801436 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:45:42.801443 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:45:42.801452 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:45:42.801460 systemd-journald[216]: Journal started
Dec 13 01:45:42.801475 systemd-journald[216]: Runtime Journal (/run/log/journal/4438d13f07ea4aff844c23e0f2f7c25f) is 4.8M, max 38.6M, 33.8M free.
Dec 13 01:45:42.744645 systemd-modules-load[217]: Inserted module 'overlay'
Dec 13 01:45:42.803214 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:45:42.765208 systemd-modules-load[217]: Inserted module 'br_netfilter'
Dec 13 01:45:42.808953 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:45:42.809415 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:45:42.812892 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:45:42.814074 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:45:42.814907 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:45:42.822041 dracut-cmdline[247]: dracut-dracut-053
Dec 13 01:45:42.823746 dracut-cmdline[247]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:45:42.841379 systemd-resolved[249]: Positive Trust Anchors:
Dec 13 01:45:42.841388 systemd-resolved[249]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:45:42.841411 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:45:42.843734 systemd-resolved[249]: Defaulting to hostname 'linux'.
Dec 13 01:45:42.844298 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:45:42.844460 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:45:42.871830 kernel: SCSI subsystem initialized
Dec 13 01:45:42.878833 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:45:42.885832 kernel: iscsi: registered transport (tcp)
Dec 13 01:45:42.898830 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:45:42.898860 kernel: QLogic iSCSI HBA Driver
Dec 13 01:45:42.918626 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:45:42.922954 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:45:42.939189 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:45:42.939235 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:45:42.939245 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:45:42.969828 kernel: raid6: avx2x4 gen() 52937 MB/s
Dec 13 01:45:42.986830 kernel: raid6: avx2x2 gen() 53595 MB/s
Dec 13 01:45:43.004047 kernel: raid6: avx2x1 gen() 44582 MB/s
Dec 13 01:45:43.004074 kernel: raid6: using algorithm avx2x2 gen() 53595 MB/s
Dec 13 01:45:43.022010 kernel: raid6: .... xor() 30877 MB/s, rmw enabled
Dec 13 01:45:43.022031 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:45:43.035837 kernel: xor: automatically using best checksumming function avx
Dec 13 01:45:43.133834 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:45:43.139177 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:45:43.143917 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:45:43.151145 systemd-udevd[432]: Using default interface naming scheme 'v255'.
Dec 13 01:45:43.153572 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:45:43.158904 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:45:43.165707 dracut-pre-trigger[437]: rd.md=0: removing MD RAID activation
Dec 13 01:45:43.180995 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:45:43.184902 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:45:43.255308 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:45:43.261934 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:45:43.270357 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:45:43.271171 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:45:43.271686 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:45:43.271954 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:45:43.277976 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:45:43.285517 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:45:43.310829 kernel: libata version 3.00 loaded.
Dec 13 01:45:43.313829 kernel: ata_piix 0000:00:07.1: version 2.13
Dec 13 01:45:43.321158 kernel: scsi host0: ata_piix
Dec 13 01:45:43.321495 kernel: scsi host1: ata_piix
Dec 13 01:45:43.321699 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Dec 13 01:45:43.321710 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Dec 13 01:45:43.340832 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI
Dec 13 01:45:43.340870 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Dec 13 01:45:43.343069 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Dec 13 01:45:43.353114 kernel: vmw_pvscsi: using 64bit dma
Dec 13 01:45:43.353134 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Dec 13 01:45:43.353216 kernel: vmw_pvscsi: max_id: 16
Dec 13 01:45:43.353229 kernel: vmw_pvscsi: setting ring_pages to 8
Dec 13 01:45:43.360044 kernel: vmw_pvscsi: enabling reqCallThreshold
Dec 13 01:45:43.360077 kernel: vmw_pvscsi: driver-based request coalescing enabled
Dec 13 01:45:43.360085 kernel: vmw_pvscsi: using MSI-X
Dec 13 01:45:43.361269 kernel: scsi host2: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Dec 13 01:45:43.362849 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:45:43.364985 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #2
Dec 13 01:45:43.365985 kernel: scsi 2:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Dec 13 01:45:43.366633 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:45:43.366707 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:45:43.367080 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:45:43.367190 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:45:43.367255 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:45:43.367410 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:45:43.377006 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:45:43.387876 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:45:43.388522 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:45:43.401400 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:45:43.485861 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Dec 13 01:45:43.491832 kernel: scsi 1:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Dec 13 01:45:43.499291 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:45:43.499320 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:45:43.502830 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Dec 13 01:45:43.520501 kernel: sd 2:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Dec 13 01:45:43.578517 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Dec 13 01:45:43.579031 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:45:43.579042 kernel: sd 2:0:0:0: [sda] Write Protect is off
Dec 13 01:45:43.579111 kernel: sd 2:0:0:0: [sda] Mode Sense: 31 00 00 00
Dec 13 01:45:43.579413 kernel: sd 2:0:0:0: [sda] Cache data unavailable
Dec 13 01:45:43.579485 kernel: sd 2:0:0:0: [sda] Assuming drive cache: write through
Dec 13 01:45:43.579548 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Dec 13 01:45:43.579615 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:45:43.579624 kernel: sd 2:0:0:0: [sda] Attached SCSI disk
Dec 13 01:45:43.881855 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (476)
Dec 13 01:45:43.887894 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Dec 13 01:45:43.891890 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Dec 13 01:45:43.897037 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Dec 13 01:45:43.943837 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (480)
Dec 13 01:45:43.949166 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Dec 13 01:45:43.949327 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Dec 13 01:45:43.952907 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:45:44.021847 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:45:44.054958 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:45:45.247862 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:45:45.248021 disk-uuid[589]: The operation has completed successfully.
Dec 13 01:45:45.620157 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:45:45.620211 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:45:45.624936 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:45:45.627203 sh[606]: Success
Dec 13 01:45:45.635833 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 01:45:45.680858 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:45:45.682896 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:45:45.683255 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:45:45.726297 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:45:45.726340 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:45:45.726351 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:45:45.726362 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:45:45.727391 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:45:45.829838 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:45:45.855318 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:45:45.867045 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Dec 13 01:45:45.868510 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:45:45.967623 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:45:45.967674 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:45:45.967690 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:45:46.001877 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:45:46.007186 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:45:46.008826 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:45:46.011683 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:45:46.018721 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:45:46.050351 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Dec 13 01:45:46.059335 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:45:46.096358 ignition[666]: Ignition 2.19.0
Dec 13 01:45:46.096365 ignition[666]: Stage: fetch-offline
Dec 13 01:45:46.096384 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:45:46.096389 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 01:45:46.096465 ignition[666]: parsed url from cmdline: ""
Dec 13 01:45:46.096467 ignition[666]: no config URL provided
Dec 13 01:45:46.096470 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:45:46.096475 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:45:46.097128 ignition[666]: config successfully fetched
Dec 13 01:45:46.097423 ignition[666]: parsing config with SHA512: fd1f3ba291a2a7f6f1e6e4a1f1ec18995c41de4cc652e5c96314a9dec3b6107935c283eb0893e9e9e51b6911c617d32873095553cc48b0357bddde7de3a6fb0a
Dec 13 01:45:46.100119 unknown[666]: fetched base config from "system"
Dec 13 01:45:46.100125 unknown[666]: fetched user config from "vmware"
Dec 13 01:45:46.100364 ignition[666]: fetch-offline: fetch-offline passed
Dec 13 01:45:46.100401 ignition[666]: Ignition finished successfully
Dec 13 01:45:46.101266 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:45:46.115901 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:45:46.118936 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:45:46.131912 systemd-networkd[801]: lo: Link UP
Dec 13 01:45:46.131918 systemd-networkd[801]: lo: Gained carrier
Dec 13 01:45:46.132588 systemd-networkd[801]: Enumeration completed
Dec 13 01:45:46.132849 systemd-networkd[801]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Dec 13 01:45:46.133047 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:45:46.133222 systemd[1]: Reached target network.target - Network.
Dec 13 01:45:46.134907 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Dec 13 01:45:46.135022 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Dec 13 01:45:46.133318 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:45:46.136025 systemd-networkd[801]: ens192: Link UP
Dec 13 01:45:46.136028 systemd-networkd[801]: ens192: Gained carrier
Dec 13 01:45:46.140931 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:45:46.148309 ignition[803]: Ignition 2.19.0
Dec 13 01:45:46.148315 ignition[803]: Stage: kargs
Dec 13 01:45:46.148435 ignition[803]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:45:46.148442 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 01:45:46.149018 ignition[803]: kargs: kargs passed
Dec 13 01:45:46.149048 ignition[803]: Ignition finished successfully
Dec 13 01:45:46.150246 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:45:46.153964 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:45:46.160736 ignition[810]: Ignition 2.19.0
Dec 13 01:45:46.160746 ignition[810]: Stage: disks
Dec 13 01:45:46.161116 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:45:46.161126 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 01:45:46.161685 ignition[810]: disks: disks passed
Dec 13 01:45:46.161709 ignition[810]: Ignition finished successfully
Dec 13 01:45:46.162197 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:45:46.162597 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:45:46.162733 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:45:46.162939 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:45:46.163125 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:45:46.163293 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:45:46.169936 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:45:46.221517 systemd-fsck[818]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 01:45:46.222524 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:45:46.227907 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:45:46.284869 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:45:46.285051 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:45:46.285453 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:45:46.289902 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:45:46.290878 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:45:46.291735 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:45:46.291765 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:45:46.291781 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:45:46.297846 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (826)
Dec 13 01:45:46.299849 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:45:46.301058 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:45:46.301071 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:45:46.301079 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:45:46.301646 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:45:46.305836 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:45:46.306746 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:45:46.331939 initrd-setup-root[850]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:45:46.334511 initrd-setup-root[857]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:45:46.336584 initrd-setup-root[864]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:45:46.338991 initrd-setup-root[871]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:45:46.389338 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:45:46.393890 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:45:46.394992 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:45:46.399840 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:45:46.414840 ignition[938]: INFO : Ignition 2.19.0
Dec 13 01:45:46.414840 ignition[938]: INFO : Stage: mount
Dec 13 01:45:46.414840 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:45:46.414840 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 01:45:46.415417 ignition[938]: INFO : mount: mount passed
Dec 13 01:45:46.416157 ignition[938]: INFO : Ignition finished successfully
Dec 13 01:45:46.417028 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:45:46.417537 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:45:46.423971 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:45:46.709213 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:45:46.713967 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:45:46.723856 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (950) Dec 13 01:45:46.727263 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:45:46.727299 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:45:46.727307 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:45:46.731837 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:45:46.733152 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:45:46.745094 ignition[967]: INFO : Ignition 2.19.0 Dec 13 01:45:46.745094 ignition[967]: INFO : Stage: files Dec 13 01:45:46.745621 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:45:46.745621 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 01:45:46.746032 ignition[967]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:45:46.747329 ignition[967]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:45:46.747329 ignition[967]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:45:46.749405 ignition[967]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:45:46.749600 ignition[967]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:45:46.749948 unknown[967]: wrote ssh authorized keys file for user: core Dec 13 01:45:46.750210 ignition[967]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:45:46.751939 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] 
writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:45:46.751939 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:45:46.787099 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 01:45:47.277469 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:45:47.522506 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:45:47.522506 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Dec 13 01:45:47.522506 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Dec 13 01:45:47.522506 ignition[967]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 01:45:47.522506 ignition[967]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:45:47.522506 ignition[967]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:45:47.522506 ignition[967]: INFO : 
files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 01:45:47.522506 ignition[967]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 13 01:45:47.522506 ignition[967]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:45:47.522506 ignition[967]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:45:47.522506 ignition[967]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 13 01:45:47.522506 ignition[967]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:45:47.622390 ignition[967]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:45:47.626078 ignition[967]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:45:47.626332 ignition[967]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:45:47.626332 ignition[967]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:45:47.626332 ignition[967]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:45:47.626941 ignition[967]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:45:47.626941 ignition[967]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:45:47.626941 ignition[967]: INFO : files: files passed Dec 13 01:45:47.626941 ignition[967]: INFO : Ignition finished successfully Dec 13 01:45:47.627562 systemd[1]: Finished ignition-files.service - Ignition (files). 
Dec 13 01:45:47.630956 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:45:47.632623 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:45:47.635231 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:45:47.635454 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:45:47.639479 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:45:47.639479 initrd-setup-root-after-ignition[997]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:45:47.640081 initrd-setup-root-after-ignition[1001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:45:47.641194 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:45:47.641583 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:45:47.645936 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:45:47.659903 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:45:47.659964 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:45:47.660372 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:45:47.660499 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:45:47.660710 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:45:47.661179 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:45:47.671608 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:45:47.675931 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:45:47.681752 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:45:47.682207 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:45:47.682422 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:45:47.682583 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:45:47.682664 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:45:47.683557 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:45:47.683735 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:45:47.683903 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:45:47.684074 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:45:47.684239 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:45:47.684400 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:45:47.684557 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:45:47.684741 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:45:47.684910 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:45:47.685131 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:45:47.685452 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:45:47.685520 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:45:47.685800 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:45:47.686062 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:45:47.686253 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:45:47.686300 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:45:47.686466 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:45:47.686528 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:45:47.686787 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:45:47.686874 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:45:47.687078 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:45:47.687215 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:45:47.690843 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:45:47.691042 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:45:47.691254 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:45:47.691436 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:45:47.691510 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:45:47.691737 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:45:47.691805 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:45:47.692031 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:45:47.692099 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:45:47.692333 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:45:47.692393 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:45:47.696940 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:45:47.697060 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:45:47.697134 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:45:47.700032 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:45:47.700164 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:45:47.700264 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:45:47.700558 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:45:47.700642 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:45:47.704624 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:45:47.705865 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:45:47.706830 ignition[1022]: INFO : Ignition 2.19.0
Dec 13 01:45:47.706830 ignition[1022]: INFO : Stage: umount
Dec 13 01:45:47.706830 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:45:47.706830 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 01:45:47.708038 ignition[1022]: INFO : umount: umount passed
Dec 13 01:45:47.708038 ignition[1022]: INFO : Ignition finished successfully
Dec 13 01:45:47.708250 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:45:47.708299 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:45:47.709156 systemd[1]: Stopped target network.target - Network.
Dec 13 01:45:47.709393 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:45:47.709538 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:45:47.709892 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:45:47.709916 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:45:47.710426 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:45:47.710452 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:45:47.710565 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:45:47.710588 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:45:47.710786 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:45:47.711583 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:45:47.717126 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:45:47.717195 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:45:47.718071 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:45:47.718109 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:45:47.718902 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:45:47.718962 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:45:47.719395 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:45:47.719424 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:45:47.722897 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:45:47.723004 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:45:47.723030 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:45:47.723172 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Dec 13 01:45:47.723194 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Dec 13 01:45:47.723323 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:45:47.723344 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:45:47.723462 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:45:47.723483 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:45:47.723644 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:45:47.733769 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:45:47.734000 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:45:47.734433 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:45:47.734501 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:45:47.734879 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:45:47.734911 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:45:47.735303 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:45:47.735320 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:45:47.735478 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:45:47.735501 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:45:47.735807 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:45:47.735875 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:45:47.736154 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:45:47.736176 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:45:47.740058 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:45:47.740172 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:45:47.740201 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:45:47.740339 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:45:47.740362 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:45:47.741108 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:45:47.744033 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:45:47.744094 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:45:47.980156 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:45:47.980219 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:45:47.980610 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:45:47.980705 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:45:47.980736 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:45:47.985907 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:45:47.993065 systemd[1]: Switching root.
Dec 13 01:45:48.028507 systemd-journald[216]: Journal stopped
Dec 13 01:45:42.741238 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:45:42.741255 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:45:42.741261 kernel: Disabled fast string operations
Dec 13 01:45:42.741265 kernel: BIOS-provided physical RAM map:
Dec 13 01:45:42.741269 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Dec 13 01:45:42.741273 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Dec 13 01:45:42.741279 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Dec 13 01:45:42.741283 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Dec 13 01:45:42.741287 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Dec 13 01:45:42.741292 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Dec 13 01:45:42.741296 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Dec 13 01:45:42.741300 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Dec 13 01:45:42.741304 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Dec 13 01:45:42.741308 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Dec 13 01:45:42.741314 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Dec 13 01:45:42.741319 kernel: NX (Execute Disable) protection: active
Dec 13 01:45:42.741324 kernel: APIC: Static calls initialized
Dec 13 01:45:42.741329 kernel: SMBIOS 2.7 present.
Dec 13 01:45:42.741334 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Dec 13 01:45:42.741338 kernel: vmware: hypercall mode: 0x00
Dec 13 01:45:42.741343 kernel: Hypervisor detected: VMware
Dec 13 01:45:42.741348 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Dec 13 01:45:42.741354 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Dec 13 01:45:42.741358 kernel: vmware: using clock offset of 2502493562 ns
Dec 13 01:45:42.741363 kernel: tsc: Detected 3408.000 MHz processor
Dec 13 01:45:42.741368 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:45:42.741373 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:45:42.741378 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Dec 13 01:45:42.741383 kernel: total RAM covered: 3072M
Dec 13 01:45:42.741388 kernel: Found optimal setting for mtrr clean up
Dec 13 01:45:42.741393 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Dec 13 01:45:42.741399 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Dec 13 01:45:42.741404 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:45:42.741409 kernel: Using GB pages for direct mapping
Dec 13 01:45:42.741413 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:45:42.741418 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Dec 13 01:45:42.741423 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Dec 13 01:45:42.741428 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Dec 13 01:45:42.741433 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Dec 13 01:45:42.741438 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Dec 13 01:45:42.741446 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Dec 13 01:45:42.741451 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Dec 13 01:45:42.741456 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Dec 13 01:45:42.741461 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Dec 13 01:45:42.741466 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Dec 13 01:45:42.741473 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Dec 13 01:45:42.741478 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Dec 13 01:45:42.741483 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Dec 13 01:45:42.741488 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Dec 13 01:45:42.741493 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Dec 13 01:45:42.741498 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Dec 13 01:45:42.741503 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Dec 13 01:45:42.741508 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Dec 13 01:45:42.741513 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Dec 13 01:45:42.741518 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Dec 13 01:45:42.741525 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Dec 13 01:45:42.741530 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Dec 13 01:45:42.741535 kernel: system APIC only can use physical flat
Dec 13 01:45:42.741540 kernel: APIC: Switched APIC routing to: physical flat
Dec 13 01:45:42.741545 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 01:45:42.741550 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 01:45:42.741555 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 01:45:42.741560 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 01:45:42.741565 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 01:45:42.741571 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 01:45:42.741576 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 01:45:42.741581 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 01:45:42.741586 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Dec 13 01:45:42.741591 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Dec 13 01:45:42.741596 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Dec 13 01:45:42.741601 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Dec 13 01:45:42.741606 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Dec 13 01:45:42.741611 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Dec 13 01:45:42.741616 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Dec 13 01:45:42.741622 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Dec 13 01:45:42.741627 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Dec 13 01:45:42.741632 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Dec 13 01:45:42.741637 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Dec 13 01:45:42.741642 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Dec 13 01:45:42.741647 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Dec 13 01:45:42.741652 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Dec 13 01:45:42.741657 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Dec 13 01:45:42.741661 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Dec 13 01:45:42.741666 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Dec 13 01:45:42.741672 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Dec 13 01:45:42.741677 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Dec 13 01:45:42.741682 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Dec 13 01:45:42.741687 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Dec 13 01:45:42.741692 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Dec 13 01:45:42.741697 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Dec 13 01:45:42.741702 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Dec 13 01:45:42.741707 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Dec 13 01:45:42.741712 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Dec 13 01:45:42.741717 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Dec 13 01:45:42.741723 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Dec 13 01:45:42.741728 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Dec 13 01:45:42.741733 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Dec 13 01:45:42.741738 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Dec 13 01:45:42.741743 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Dec 13 01:45:42.741748 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Dec 13 01:45:42.741753 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Dec 13 01:45:42.741758 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Dec 13 01:45:42.741763 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Dec 13 01:45:42.741768 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Dec 13 01:45:42.741773 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Dec 13 01:45:42.741779 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Dec 13 01:45:42.741784 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Dec 13 01:45:42.741789 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Dec 13 01:45:42.741794 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Dec 13 01:45:42.741799 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Dec 13 01:45:42.741804 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Dec 13 01:45:42.741809 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Dec 13 01:45:42.741814 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Dec 13 01:45:42.742135 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Dec 13 01:45:42.742141 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Dec 13 01:45:42.742148 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Dec 13 01:45:42.742154 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Dec 13 01:45:42.742159 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Dec 13 01:45:42.742168 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Dec 13 01:45:42.742174 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Dec 13 01:45:42.742180 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Dec 13 01:45:42.742185 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Dec 13 01:45:42.742190 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Dec 13 01:45:42.742197 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Dec 13 01:45:42.742203 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Dec 13 01:45:42.742208 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Dec 13 01:45:42.742213 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Dec 13 01:45:42.742219 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Dec 13 01:45:42.742224 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Dec 13 01:45:42.742229 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Dec 13 01:45:42.742234 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Dec 13 01:45:42.742240 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Dec 13 01:45:42.742245 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Dec 13 01:45:42.742257 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Dec 13 01:45:42.742272 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Dec 13 01:45:42.742283 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Dec 13 01:45:42.742289 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Dec 13 01:45:42.742300 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Dec 13 01:45:42.742307 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Dec 13 01:45:42.742312 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Dec 13 01:45:42.742317 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Dec 13 01:45:42.742322 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Dec 13 01:45:42.742328 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Dec 13 01:45:42.742335 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Dec 13 01:45:42.742341 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Dec 13 01:45:42.742346 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Dec 13 01:45:42.742351 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Dec 13 01:45:42.742357 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Dec 13 01:45:42.742362 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Dec 13 01:45:42.742367 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Dec 13 01:45:42.742373 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Dec 13 01:45:42.742378 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Dec 13 01:45:42.742384 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Dec 13 01:45:42.742390 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Dec 13 01:45:42.742395 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Dec 13 01:45:42.742401 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Dec 13 01:45:42.742406 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Dec 13 01:45:42.742411 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Dec 13 01:45:42.742417 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Dec 13 01:45:42.742422 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Dec 13 01:45:42.742428 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Dec 13 01:45:42.742433 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Dec 13 01:45:42.742438 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Dec 13 01:45:42.742444 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Dec 13 01:45:42.742450 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Dec 13 01:45:42.742456 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Dec 13 01:45:42.742461 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Dec 13 01:45:42.742466 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Dec 13 01:45:42.742472 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Dec 13 01:45:42.742477 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Dec 13 01:45:42.742482 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Dec 13 01:45:42.742488 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Dec 13 01:45:42.742493 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Dec 13 01:45:42.742498 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Dec 13 01:45:42.742505 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Dec 13 01:45:42.742510 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Dec 13 01:45:42.742516 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Dec 13 01:45:42.742521 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Dec 13 01:45:42.742527 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Dec 13 01:45:42.742532 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Dec 13 01:45:42.742538 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Dec 13 01:45:42.742543 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Dec 13 01:45:42.742548 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Dec 13 01:45:42.742553 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Dec 13 01:45:42.742561 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Dec 13 01:45:42.742566 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Dec 13 01:45:42.742572 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Dec 13 01:45:42.742577 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 01:45:42.742582 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 01:45:42.742588 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Dec 13 01:45:42.742594 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Dec 13 01:45:42.742599 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Dec 13 01:45:42.742605 kernel: Zone ranges:
Dec 13 01:45:42.742612 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:45:42.742617 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Dec 13 01:45:42.742623 kernel: Normal empty
Dec 13 01:45:42.742628 kernel: Movable zone start for each node
Dec 13 01:45:42.742634 kernel: Early memory node ranges
Dec 13 01:45:42.742639 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Dec 13 01:45:42.742644 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Dec 13 01:45:42.742650 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Dec 13 01:45:42.742655 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Dec 13 01:45:42.742661 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:45:42.742668 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Dec 13 01:45:42.742673 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Dec 13 01:45:42.742678 kernel: ACPI: PM-Timer IO Port: 0x1008
Dec 13 01:45:42.742684 kernel: system APIC only can use physical flat
Dec 13 01:45:42.742689 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Dec 13 01:45:42.742695 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Dec 13 01:45:42.742700 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Dec 13 01:45:42.742705 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Dec 13 01:45:42.742711 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Dec 13 01:45:42.742717 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Dec 13 01:45:42.742722 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Dec 13 01:45:42.742728 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Dec 13 01:45:42.742733 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Dec 13 01:45:42.742739 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Dec 13 01:45:42.742744 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Dec 13 01:45:42.742749 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Dec 13 01:45:42.742755 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Dec 13 01:45:42.742760 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Dec 13 01:45:42.742765 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Dec 13 01:45:42.742772 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Dec 13 01:45:42.742777 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Dec 13 01:45:42.742783 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge
lint[0x1]) Dec 13 01:45:42.742788 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Dec 13 01:45:42.742793 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Dec 13 01:45:42.742799 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Dec 13 01:45:42.742804 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Dec 13 01:45:42.742809 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Dec 13 01:45:42.742822 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Dec 13 01:45:42.742828 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Dec 13 01:45:42.742835 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Dec 13 01:45:42.742841 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Dec 13 01:45:42.742846 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Dec 13 01:45:42.742851 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Dec 13 01:45:42.742857 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Dec 13 01:45:42.742862 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Dec 13 01:45:42.742868 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Dec 13 01:45:42.742873 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Dec 13 01:45:42.742878 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Dec 13 01:45:42.742885 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Dec 13 01:45:42.742890 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Dec 13 01:45:42.742896 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Dec 13 01:45:42.742901 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Dec 13 01:45:42.742907 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Dec 13 01:45:42.742912 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Dec 13 01:45:42.742918 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Dec 13 01:45:42.742923 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge 
lint[0x1]) Dec 13 01:45:42.742928 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Dec 13 01:45:42.742934 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Dec 13 01:45:42.742940 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Dec 13 01:45:42.742946 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Dec 13 01:45:42.742951 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Dec 13 01:45:42.742956 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Dec 13 01:45:42.742962 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Dec 13 01:45:42.742967 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Dec 13 01:45:42.742973 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Dec 13 01:45:42.742978 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Dec 13 01:45:42.742983 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Dec 13 01:45:42.742990 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Dec 13 01:45:42.742995 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Dec 13 01:45:42.743001 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Dec 13 01:45:42.743008 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Dec 13 01:45:42.743017 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Dec 13 01:45:42.743025 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Dec 13 01:45:42.743033 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Dec 13 01:45:42.743042 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Dec 13 01:45:42.743051 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Dec 13 01:45:42.743058 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Dec 13 01:45:42.743065 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Dec 13 01:45:42.743070 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Dec 13 01:45:42.743076 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge 
lint[0x1]) Dec 13 01:45:42.743098 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Dec 13 01:45:42.743119 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Dec 13 01:45:42.743141 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Dec 13 01:45:42.743151 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Dec 13 01:45:42.743156 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Dec 13 01:45:42.743162 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Dec 13 01:45:42.743169 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Dec 13 01:45:42.743175 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Dec 13 01:45:42.743180 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Dec 13 01:45:42.743186 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Dec 13 01:45:42.743191 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Dec 13 01:45:42.743197 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Dec 13 01:45:42.743202 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Dec 13 01:45:42.743207 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Dec 13 01:45:42.743213 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Dec 13 01:45:42.743218 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Dec 13 01:45:42.743225 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Dec 13 01:45:42.743231 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Dec 13 01:45:42.743239 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Dec 13 01:45:42.743245 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Dec 13 01:45:42.743253 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Dec 13 01:45:42.743259 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Dec 13 01:45:42.743264 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Dec 13 01:45:42.743270 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge 
lint[0x1]) Dec 13 01:45:42.743275 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Dec 13 01:45:42.743282 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Dec 13 01:45:42.743287 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Dec 13 01:45:42.743293 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Dec 13 01:45:42.743298 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Dec 13 01:45:42.743304 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Dec 13 01:45:42.743309 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Dec 13 01:45:42.743314 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Dec 13 01:45:42.743320 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Dec 13 01:45:42.743325 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Dec 13 01:45:42.743331 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Dec 13 01:45:42.743337 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Dec 13 01:45:42.743343 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Dec 13 01:45:42.743348 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Dec 13 01:45:42.743354 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Dec 13 01:45:42.743359 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Dec 13 01:45:42.743364 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Dec 13 01:45:42.743370 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Dec 13 01:45:42.743375 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Dec 13 01:45:42.743381 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Dec 13 01:45:42.743386 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Dec 13 01:45:42.743393 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Dec 13 01:45:42.743398 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Dec 13 01:45:42.743403 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge 
lint[0x1]) Dec 13 01:45:42.743409 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Dec 13 01:45:42.743417 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Dec 13 01:45:42.743423 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Dec 13 01:45:42.743428 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Dec 13 01:45:42.743434 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Dec 13 01:45:42.743439 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Dec 13 01:45:42.743446 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Dec 13 01:45:42.743451 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Dec 13 01:45:42.743457 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Dec 13 01:45:42.743462 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Dec 13 01:45:42.743468 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Dec 13 01:45:42.743473 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Dec 13 01:45:42.743479 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Dec 13 01:45:42.743484 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Dec 13 01:45:42.743489 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Dec 13 01:45:42.743495 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Dec 13 01:45:42.743502 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:45:42.743507 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Dec 13 01:45:42.743513 kernel: TSC deadline timer available Dec 13 01:45:42.743519 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Dec 13 01:45:42.743524 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Dec 13 01:45:42.743529 kernel: Booting paravirtualized kernel on VMware hypervisor Dec 13 01:45:42.743535 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:45:42.743541 kernel: 
setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Dec 13 01:45:42.743546 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Dec 13 01:45:42.743553 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Dec 13 01:45:42.743559 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Dec 13 01:45:42.743564 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Dec 13 01:45:42.743570 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Dec 13 01:45:42.743575 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Dec 13 01:45:42.743580 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Dec 13 01:45:42.743593 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Dec 13 01:45:42.743599 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Dec 13 01:45:42.743605 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Dec 13 01:45:42.743612 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Dec 13 01:45:42.743618 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Dec 13 01:45:42.743623 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Dec 13 01:45:42.743629 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Dec 13 01:45:42.743634 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Dec 13 01:45:42.743640 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Dec 13 01:45:42.743646 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Dec 13 01:45:42.743651 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Dec 13 01:45:42.743659 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:45:42.743665 kernel: Unknown 
kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:45:42.743671 kernel: random: crng init done Dec 13 01:45:42.743677 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Dec 13 01:45:42.743683 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Dec 13 01:45:42.743688 kernel: printk: log_buf_len min size: 262144 bytes Dec 13 01:45:42.743694 kernel: printk: log_buf_len: 1048576 bytes Dec 13 01:45:42.743700 kernel: printk: early log buf free: 239648(91%) Dec 13 01:45:42.743707 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:45:42.743713 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 01:45:42.743719 kernel: Fallback order for Node 0: 0 Dec 13 01:45:42.743725 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Dec 13 01:45:42.743731 kernel: Policy zone: DMA32 Dec 13 01:45:42.743736 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:45:42.743743 kernel: Memory: 1936372K/2096628K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 159996K reserved, 0K cma-reserved) Dec 13 01:45:42.743750 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Dec 13 01:45:42.743756 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:45:42.743761 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:45:42.743767 kernel: Dynamic Preempt: voluntary Dec 13 01:45:42.743773 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:45:42.743779 kernel: rcu: RCU event tracing is enabled. Dec 13 01:45:42.743785 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Dec 13 01:45:42.743791 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:45:42.743798 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:45:42.743804 kernel: Tracing variant of Tasks RCU enabled. 
Dec 13 01:45:42.743810 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:45:42.743849 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Dec 13 01:45:42.743856 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Dec 13 01:45:42.743862 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Dec 13 01:45:42.743868 kernel: Console: colour VGA+ 80x25 Dec 13 01:45:42.743874 kernel: printk: console [tty0] enabled Dec 13 01:45:42.743880 kernel: printk: console [ttyS0] enabled Dec 13 01:45:42.743885 kernel: ACPI: Core revision 20230628 Dec 13 01:45:42.743894 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Dec 13 01:45:42.743899 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:45:42.743905 kernel: x2apic enabled Dec 13 01:45:42.743911 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:45:42.743917 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 01:45:42.743923 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Dec 13 01:45:42.743929 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Dec 13 01:45:42.743935 kernel: Disabled fast string operations Dec 13 01:45:42.743941 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 01:45:42.743948 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 01:45:42.743955 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:45:42.743961 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Dec 13 01:45:42.743967 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Dec 13 01:45:42.743972 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Dec 13 01:45:42.743978 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:45:42.743984 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Dec 13 01:45:42.743990 kernel: RETBleed: Mitigation: Enhanced IBRS Dec 13 01:45:42.743996 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:45:42.744003 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 01:45:42.744009 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:45:42.744014 kernel: SRBDS: Unknown: Dependent on hypervisor status Dec 13 01:45:42.744020 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 01:45:42.744026 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:45:42.744032 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:45:42.744038 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:45:42.744044 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:45:42.744051 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Dec 13 01:45:42.744057 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:45:42.744063 kernel: pid_max: default: 131072 minimum: 1024 Dec 13 01:45:42.744069 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:45:42.744075 kernel: landlock: Up and running. Dec 13 01:45:42.744081 kernel: SELinux: Initializing. Dec 13 01:45:42.744087 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 01:45:42.744093 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 01:45:42.744099 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Dec 13 01:45:42.744106 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Dec 13 01:45:42.744112 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Dec 13 01:45:42.744117 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Dec 13 01:45:42.744123 kernel: Performance Events: Skylake events, core PMU driver. Dec 13 01:45:42.744129 kernel: core: CPUID marked event: 'cpu cycles' unavailable Dec 13 01:45:42.744135 kernel: core: CPUID marked event: 'instructions' unavailable Dec 13 01:45:42.744141 kernel: core: CPUID marked event: 'bus cycles' unavailable Dec 13 01:45:42.744146 kernel: core: CPUID marked event: 'cache references' unavailable Dec 13 01:45:42.744152 kernel: core: CPUID marked event: 'cache misses' unavailable Dec 13 01:45:42.744158 kernel: core: CPUID marked event: 'branch instructions' unavailable Dec 13 01:45:42.744164 kernel: core: CPUID marked event: 'branch misses' unavailable Dec 13 01:45:42.744170 kernel: ... version: 1 Dec 13 01:45:42.744176 kernel: ... bit width: 48 Dec 13 01:45:42.744182 kernel: ... generic registers: 4 Dec 13 01:45:42.744187 kernel: ... value mask: 0000ffffffffffff Dec 13 01:45:42.744193 kernel: ... 
max period: 000000007fffffff Dec 13 01:45:42.744199 kernel: ... fixed-purpose events: 0 Dec 13 01:45:42.744205 kernel: ... event mask: 000000000000000f Dec 13 01:45:42.744212 kernel: signal: max sigframe size: 1776 Dec 13 01:45:42.744218 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:45:42.744224 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:45:42.744230 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 01:45:42.744236 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:45:42.744242 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:45:42.744247 kernel: .... node #0, CPUs: #1 Dec 13 01:45:42.744253 kernel: Disabled fast string operations Dec 13 01:45:42.744259 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Dec 13 01:45:42.744266 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Dec 13 01:45:42.744271 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:45:42.744277 kernel: smpboot: Max logical packages: 128 Dec 13 01:45:42.744283 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Dec 13 01:45:42.744289 kernel: devtmpfs: initialized Dec 13 01:45:42.744295 kernel: x86/mm: Memory block size: 128MB Dec 13 01:45:42.744301 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Dec 13 01:45:42.744307 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:45:42.744313 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Dec 13 01:45:42.744319 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:45:42.744326 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:45:42.744332 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:45:42.744337 kernel: audit: type=2000 audit(1734054341.067:1): state=initialized audit_enabled=0 res=1 Dec 13 01:45:42.744343 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:45:42.744349 
kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:45:42.744355 kernel: cpuidle: using governor menu Dec 13 01:45:42.744361 kernel: Simple Boot Flag at 0x36 set to 0x80 Dec 13 01:45:42.744367 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:45:42.744373 kernel: dca service started, version 1.12.1 Dec 13 01:45:42.744380 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Dec 13 01:45:42.744386 kernel: PCI: Using configuration type 1 for base access Dec 13 01:45:42.744391 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 13 01:45:42.744397 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:45:42.744403 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:45:42.744409 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:45:42.744415 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:45:42.744421 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:45:42.744428 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:45:42.744434 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:45:42.744439 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:45:42.744445 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:45:42.744451 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Dec 13 01:45:42.744457 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:45:42.744463 kernel: ACPI: Interpreter enabled Dec 13 01:45:42.744469 kernel: ACPI: PM: (supports S0 S1 S5) Dec 13 01:45:42.744475 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:45:42.744482 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:45:42.744487 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 01:45:42.744493 kernel: ACPI: Enabled 4 
GPEs in block 00 to 0F Dec 13 01:45:42.744499 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Dec 13 01:45:42.744579 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:45:42.744635 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Dec 13 01:45:42.744686 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Dec 13 01:45:42.744694 kernel: PCI host bridge to bus 0000:00 Dec 13 01:45:42.744748 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:45:42.744793 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Dec 13 01:45:42.744858 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 01:45:42.744904 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:45:42.744950 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Dec 13 01:45:42.744994 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Dec 13 01:45:42.745056 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Dec 13 01:45:42.745112 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Dec 13 01:45:42.745166 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Dec 13 01:45:42.745221 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Dec 13 01:45:42.745271 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Dec 13 01:45:42.745333 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Dec 13 01:45:42.745406 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Dec 13 01:45:42.745457 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Dec 13 01:45:42.745507 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Dec 13 01:45:42.745561 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Dec 13 01:45:42.745611 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed 
by PIIX4 ACPI Dec 13 01:45:42.745660 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Dec 13 01:45:42.745716 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Dec 13 01:45:42.745769 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Dec 13 01:45:42.745831 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Dec 13 01:45:42.745900 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Dec 13 01:45:42.745983 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Dec 13 01:45:42.746036 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Dec 13 01:45:42.746086 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Dec 13 01:45:42.746135 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Dec 13 01:45:42.746188 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:45:42.746242 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Dec 13 01:45:42.746296 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.746347 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.746403 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.746464 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.746521 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.746572 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.746626 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.746676 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.746730 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.746781 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Dec 13 01:45:42.749004 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Dec 13 01:45:42.749067 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot 
D3cold
Dec 13 01:45:42.749125 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.749176 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.749231 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.749281 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.749334 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.749388 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.749448 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.749499 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.749552 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.749602 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.749659 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.749709 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.749763 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.749813 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.750905 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.750960 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.751019 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.751070 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.751122 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.751172 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.751224 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.751273 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.751329 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.751379 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.751431 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.751479 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.751531 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.751579 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.751631 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.751682 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.751734 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.751784 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.752880 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.752938 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.752993 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.753048 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.753101 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.753151 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.753204 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.753253 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.753306 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.753358 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.753412 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.753462 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.753514 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.753564 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.753631 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.753682 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.753737 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.753786 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.754251 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
Dec 13 01:45:42.754305 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.754356 kernel: pci_bus 0000:01: extended config space not accessible
Dec 13 01:45:42.754410 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 01:45:42.754493 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 01:45:42.754502 kernel: acpiphp: Slot [32] registered
Dec 13 01:45:42.754508 kernel: acpiphp: Slot [33] registered
Dec 13 01:45:42.754514 kernel: acpiphp: Slot [34] registered
Dec 13 01:45:42.754519 kernel: acpiphp: Slot [35] registered
Dec 13 01:45:42.754525 kernel: acpiphp: Slot [36] registered
Dec 13 01:45:42.754531 kernel: acpiphp: Slot [37] registered
Dec 13 01:45:42.754537 kernel: acpiphp: Slot [38] registered
Dec 13 01:45:42.754544 kernel: acpiphp: Slot [39] registered
Dec 13 01:45:42.754550 kernel: acpiphp: Slot [40] registered
Dec 13 01:45:42.754555 kernel: acpiphp: Slot [41] registered
Dec 13 01:45:42.754561 kernel: acpiphp: Slot [42] registered
Dec 13 01:45:42.754566 kernel: acpiphp: Slot [43] registered
Dec 13 01:45:42.754572 kernel: acpiphp: Slot [44] registered
Dec 13 01:45:42.754577 kernel: acpiphp: Slot [45] registered
Dec 13 01:45:42.754583 kernel: acpiphp: Slot [46] registered
Dec 13 01:45:42.754589 kernel: acpiphp: Slot [47] registered
Dec 13 01:45:42.754596 kernel: acpiphp: Slot [48] registered
Dec 13 01:45:42.754601 kernel: acpiphp: Slot [49] registered
Dec 13 01:45:42.754607 kernel: acpiphp: Slot [50] registered
Dec 13 01:45:42.754612 kernel: acpiphp: Slot [51] registered
Dec 13 01:45:42.754618 kernel: acpiphp: Slot [52] registered
Dec 13 01:45:42.754624 kernel: acpiphp: Slot [53] registered
Dec 13 01:45:42.754629 kernel: acpiphp: Slot [54] registered
Dec 13 01:45:42.754635 kernel: acpiphp: Slot [55] registered
Dec 13 01:45:42.754657 kernel: acpiphp: Slot [56] registered
Dec 13 01:45:42.754663 kernel: acpiphp: Slot [57] registered
Dec 13 01:45:42.754670 kernel: acpiphp: Slot [58] registered
Dec 13 01:45:42.754675 kernel: acpiphp: Slot [59] registered
Dec 13 01:45:42.754681 kernel: acpiphp: Slot [60] registered
Dec 13 01:45:42.754687 kernel: acpiphp: Slot [61] registered
Dec 13 01:45:42.754693 kernel: acpiphp: Slot [62] registered
Dec 13 01:45:42.754713 kernel: acpiphp: Slot [63] registered
Dec 13 01:45:42.754762 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Dec 13 01:45:42.754810 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Dec 13 01:45:42.755890 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Dec 13 01:45:42.755945 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Dec 13 01:45:42.755993 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
Dec 13 01:45:42.756042 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode)
Dec 13 01:45:42.756090 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
Dec 13 01:45:42.756138 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
Dec 13 01:45:42.756186 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode)
Dec 13 01:45:42.756241 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700
Dec 13 01:45:42.756296 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007]
Dec 13 01:45:42.756346 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit]
Dec 13 01:45:42.756396 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Dec 13 01:45:42.756475 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Dec 13 01:45:42.756526 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Dec 13 01:45:42.756577 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Dec 13 01:45:42.756627 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Dec 13 01:45:42.756681 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Dec 13 01:45:42.756732 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Dec 13 01:45:42.756782 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Dec 13 01:45:42.759907 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Dec 13 01:45:42.759967 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Dec 13 01:45:42.760022 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Dec 13 01:45:42.760073 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Dec 13 01:45:42.760123 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Dec 13 01:45:42.760175 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Dec 13 01:45:42.760226 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Dec 13 01:45:42.760275 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Dec 13 01:45:42.760324 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Dec 13 01:45:42.760375 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Dec 13 01:45:42.760424 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Dec 13 01:45:42.760474 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Dec 13 01:45:42.760527 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Dec 13 01:45:42.760577 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Dec 13 01:45:42.760627 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Dec 13 01:45:42.760678 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Dec 13 01:45:42.760727 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Dec 13 01:45:42.760779 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Dec 13 01:45:42.760859 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Dec 13 01:45:42.760912 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Dec 13 01:45:42.760962 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Dec 13 01:45:42.761018 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000
Dec 13 01:45:42.761071 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
Dec 13 01:45:42.761139 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
Dec 13 01:45:42.761193 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
Dec 13 01:45:42.761243 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f]
Dec 13 01:45:42.761409 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Dec 13 01:45:42.761624 kernel: pci 0000:0b:00.0: supports D1 D2
Dec 13 01:45:42.761678 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:45:42.763895 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Dec 13 01:45:42.763955 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Dec 13 01:45:42.764006 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Dec 13 01:45:42.764061 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Dec 13 01:45:42.764112 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Dec 13 01:45:42.764161 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Dec 13 01:45:42.764211 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Dec 13 01:45:42.764261 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Dec 13 01:45:42.764313 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Dec 13 01:45:42.764363 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Dec 13 01:45:42.764413 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Dec 13 01:45:42.764465 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Dec 13 01:45:42.764516 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Dec 13 01:45:42.764565 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Dec 13 01:45:42.764615 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Dec 13 01:45:42.764666 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Dec 13 01:45:42.764716 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Dec 13 01:45:42.764765 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Dec 13 01:45:42.765836 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Dec 13 01:45:42.765899 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Dec 13 01:45:42.765950 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Dec 13 01:45:42.766001 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Dec 13 01:45:42.766050 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Dec 13 01:45:42.766098 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Dec 13 01:45:42.766150 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Dec 13 01:45:42.766199 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Dec 13 01:45:42.766247 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Dec 13 01:45:42.766301 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Dec 13 01:45:42.766350 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Dec 13 01:45:42.766399 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Dec 13 01:45:42.766454 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Dec 13 01:45:42.766505 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Dec 13 01:45:42.766568 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Dec 13 01:45:42.766616 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Dec 13 01:45:42.766664 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Dec 13 01:45:42.766717 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Dec 13 01:45:42.766766 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Dec 13 01:45:42.768688 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Dec 13 01:45:42.768748 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Dec 13 01:45:42.768800 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Dec 13 01:45:42.768863 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Dec 13 01:45:42.768913 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Dec 13 01:45:42.768984 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Dec 13 01:45:42.769048 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Dec 13 01:45:42.769096 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Dec 13 01:45:42.769146 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Dec 13 01:45:42.769193 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Dec 13 01:45:42.769241 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Dec 13 01:45:42.769290 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Dec 13 01:45:42.769340 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Dec 13 01:45:42.769390 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Dec 13 01:45:42.769440 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Dec 13 01:45:42.769489 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Dec 13 01:45:42.769537 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Dec 13 01:45:42.769587 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Dec 13 01:45:42.769635 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Dec 13 01:45:42.769683 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Dec 13 01:45:42.769731 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Dec 13 01:45:42.769783 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Dec 13 01:45:42.769845 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Dec 13 01:45:42.769894 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Dec 13 01:45:42.769943 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Dec 13 01:45:42.769992 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Dec 13 01:45:42.770059 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Dec 13 01:45:42.770122 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Dec 13 01:45:42.770175 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Dec 13 01:45:42.770234 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Dec 13 01:45:42.770283 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Dec 13 01:45:42.770332 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Dec 13 01:45:42.770381 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Dec 13 01:45:42.770430 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Dec 13 01:45:42.770480 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Dec 13 01:45:42.770529 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Dec 13 01:45:42.770579 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Dec 13 01:45:42.770632 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Dec 13 01:45:42.770686 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Dec 13 01:45:42.770734 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Dec 13 01:45:42.770784 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Dec 13 01:45:42.770878 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Dec 13 01:45:42.770930 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Dec 13 01:45:42.770938 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Dec 13 01:45:42.770945 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Dec 13 01:45:42.770953 kernel: ACPI: PCI: Interrupt link LNKB disabled
Dec 13 01:45:42.770959 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:45:42.770965 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Dec 13 01:45:42.770971 kernel: iommu: Default domain type: Translated
Dec 13 01:45:42.770977 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:45:42.770983 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:45:42.770989 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:45:42.770995 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Dec 13 01:45:42.771001 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Dec 13 01:45:42.771051 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Dec 13 01:45:42.771118 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Dec 13 01:45:42.771182 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:45:42.771191 kernel: vgaarb: loaded
Dec 13 01:45:42.771198 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Dec 13 01:45:42.771204 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Dec 13 01:45:42.771210 kernel: clocksource: Switched to clocksource tsc-early
Dec 13 01:45:42.771216 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:45:42.771222 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:45:42.771230 kernel: pnp: PnP ACPI init
Dec 13 01:45:42.771280 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Dec 13 01:45:42.771343 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Dec 13 01:45:42.771402 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Dec 13 01:45:42.771457 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Dec 13 01:45:42.771504 kernel: pnp 00:06: [dma 2]
Dec 13 01:45:42.771552 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Dec 13 01:45:42.771599 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Dec 13 01:45:42.771643 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Dec 13 01:45:42.771651 kernel: pnp: PnP ACPI: found 8 devices
Dec 13 01:45:42.771657 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:45:42.771663 kernel: NET: Registered PF_INET protocol family
Dec 13 01:45:42.771669 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:45:42.771675 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 01:45:42.771683 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:45:42.771689 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:45:42.771694 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 01:45:42.771700 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 01:45:42.771706 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:45:42.771712 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:45:42.771718 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:45:42.771723 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:45:42.771773 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Dec 13 01:45:42.771842 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 01:45:42.771896 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 01:45:42.771945 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 01:45:42.772009 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 01:45:42.772060 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Dec 13 01:45:42.772110 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Dec 13 01:45:42.772164 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
Dec 13 01:45:42.772213 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
Dec 13 01:45:42.772264 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
Dec 13 01:45:42.772312 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
Dec 13 01:45:42.772362 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
Dec 13 01:45:42.772414 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
Dec 13 01:45:42.772463 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
Dec 13 01:45:42.772512 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
Dec 13 01:45:42.772562 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
Dec 13 01:45:42.772611 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
Dec 13 01:45:42.772660 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
Dec 13 01:45:42.772711 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
Dec 13 01:45:42.772759 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000
Dec 13 01:45:42.772807 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000
Dec 13 01:45:42.772901 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000
Dec 13 01:45:42.772950 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000
Dec 13 01:45:42.772998 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref]
Dec 13 01:45:42.773051 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref]
Dec 13 01:45:42.773100 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.773149 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.773197 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.773245 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.773294 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.773342 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.773391 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.773443 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.773492 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.773540 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.773622 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.773671 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.773736 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.773786 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.773848 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.773904 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.773954 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.774003 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.774054 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.774103 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.774153 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.774203 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.774253 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.774305 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.774355 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.774404 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.774459 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.774509 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.774559 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.774609 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.774658 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.774711 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.774760 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.774810 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.775241 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.775295 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.775347 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.775398 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.775448 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.775501 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.775552 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.775602 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.775651 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.775701 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.775750 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.775799 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.775856 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.775907 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.775959 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.776041 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.776093 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.776144 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.776194 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.776244 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.776294 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.776344 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.776393 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.776457 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.776511 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.776562 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.776613 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.776662 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.776711 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.776761 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.776810 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.776897 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.776947 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.777000 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.777051 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.777100 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.777150 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.777200 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.777249 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.777300 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.777350 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.777400 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.777450 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.777503 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.777554 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.777605 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.777655 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.777705 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.777755 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Dec 13 01:45:42.777804 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Dec 13 01:45:42.777877 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 01:45:42.777930 kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
Dec 13 01:45:42.777984 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Dec 13 01:45:42.778033 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Dec 13 01:45:42.778083 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Dec 13 01:45:42.778138 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref]
Dec 13 01:45:42.778190 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Dec 13 01:45:42.778241 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Dec 13 01:45:42.778292 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Dec 13 01:45:42.778343 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
Dec 13 01:45:42.778397 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Dec 13 01:45:42.778448 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Dec 13 01:45:42.778499 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Dec 13 01:45:42.778549 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Dec 13 01:45:42.778601 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Dec 13 01:45:42.778651 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Dec 13 01:45:42.778702 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Dec 13 01:45:42.778751 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Dec 13 01:45:42.778801 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Dec 13 01:45:42.780920 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Dec 13 01:45:42.780992 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Dec 13 01:45:42.781047 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Dec 13 01:45:42.781098 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Dec 13 01:45:42.781149 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Dec 13 01:45:42.781205 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Dec 13 01:45:42.781256 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Dec 13 01:45:42.781308 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Dec 13 01:45:42.781360 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Dec 13 01:45:42.781410 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Dec 13 01:45:42.781465 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Dec 13 01:45:42.781518 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Dec 13 01:45:42.781568 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Dec 13 01:45:42.781618 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Dec 13 01:45:42.781673 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref]
Dec 13 01:45:42.781724 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Dec 13 01:45:42.781777 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Dec 13 01:45:42.781837 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Dec 13 01:45:42.781888 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
Dec 13 01:45:42.781941 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Dec 13 01:45:42.781991 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Dec 13 01:45:42.782041 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Dec 13 01:45:42.782091 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Dec 13 01:45:42.782142 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Dec 13 01:45:42.782192 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Dec 13 01:45:42.782254 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Dec 13 01:45:42.782320 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Dec 13 01:45:42.782372 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Dec 13 01:45:42.782422 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Dec 13 01:45:42.782471 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Dec 13 01:45:42.782523 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Dec 13 01:45:42.782572 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Dec 13 01:45:42.782622 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Dec 13 01:45:42.782673 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Dec 13 01:45:42.782722 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Dec 13 01:45:42.782775 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Dec 13 01:45:42.783685 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Dec 13 01:45:42.783750 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Dec 13 01:45:42.783804 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Dec 13 01:45:42.783901 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Dec 13 01:45:42.783954 kernel: pci 0000:00:16.7:
bridge window [mem 0xfb800000-0xfb8fffff] Dec 13 01:45:42.784004 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 13 01:45:42.784057 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Dec 13 01:45:42.784109 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Dec 13 01:45:42.784163 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Dec 13 01:45:42.784214 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 13 01:45:42.784266 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Dec 13 01:45:42.784316 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Dec 13 01:45:42.784366 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Dec 13 01:45:42.784421 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Dec 13 01:45:42.784475 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Dec 13 01:45:42.784525 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Dec 13 01:45:42.784575 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Dec 13 01:45:42.784626 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Dec 13 01:45:42.784682 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Dec 13 01:45:42.784732 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Dec 13 01:45:42.784782 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 13 01:45:42.784841 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Dec 13 01:45:42.784892 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Dec 13 01:45:42.784943 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 13 01:45:42.784994 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Dec 13 01:45:42.785044 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Dec 13 01:45:42.785094 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Dec 13 
01:45:42.785149 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Dec 13 01:45:42.785199 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Dec 13 01:45:42.785250 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Dec 13 01:45:42.785301 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Dec 13 01:45:42.785352 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Dec 13 01:45:42.785402 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Dec 13 01:45:42.785455 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Dec 13 01:45:42.785505 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Dec 13 01:45:42.785556 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Dec 13 01:45:42.785606 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Dec 13 01:45:42.785661 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Dec 13 01:45:42.785711 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Dec 13 01:45:42.785761 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Dec 13 01:45:42.786300 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Dec 13 01:45:42.787687 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Dec 13 01:45:42.787748 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Dec 13 01:45:42.787802 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Dec 13 01:45:42.787871 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Dec 13 01:45:42.787923 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Dec 13 01:45:42.787978 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Dec 13 01:45:42.788031 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Dec 13 01:45:42.788082 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Dec 13 01:45:42.788132 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Dec 13 01:45:42.788184 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Dec 13 01:45:42.788234 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Dec 13 01:45:42.788285 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Dec 13 01:45:42.788338 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Dec 13 01:45:42.788389 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Dec 13 01:45:42.788440 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Dec 13 01:45:42.788495 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Dec 13 01:45:42.788545 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Dec 13 01:45:42.788595 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Dec 13 01:45:42.788646 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Dec 13 01:45:42.788692 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Dec 13 01:45:42.788736 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Dec 13 01:45:42.788781 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Dec 13 01:45:42.788833 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Dec 13 01:45:42.788887 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Dec 13 01:45:42.788934 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Dec 13 01:45:42.788980 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 13 01:45:42.789026 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Dec 13 01:45:42.789072 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Dec 13 01:45:42.789118 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Dec 13 01:45:42.789163 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Dec 13 01:45:42.789213 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Dec 13 01:45:42.789264 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Dec 13 01:45:42.789311 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Dec 13 01:45:42.789357 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Dec 13 01:45:42.789407 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Dec 13 01:45:42.789458 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Dec 13 01:45:42.789505 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Dec 13 01:45:42.789556 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Dec 13 01:45:42.789603 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Dec 13 01:45:42.789648 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Dec 13 01:45:42.789697 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Dec 13 01:45:42.789744 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Dec 13 01:45:42.789794 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Dec 13 01:45:42.791462 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Dec 13 01:45:42.791525 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Dec 13 01:45:42.791573 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Dec 13 01:45:42.791624 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Dec 13 01:45:42.791671 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Dec 13 01:45:42.791724 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Dec 13 01:45:42.791779 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Dec 13 01:45:42.791847 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Dec 13 01:45:42.791897 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Dec 13 01:45:42.791942 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Dec 13 01:45:42.791993 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Dec 13 01:45:42.792039 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Dec 13 01:45:42.792086 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Dec 13 01:45:42.792140 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Dec 13 01:45:42.792188 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Dec 13 01:45:42.792237 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Dec 13 01:45:42.792289 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Dec 13 01:45:42.792339 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Dec 13 01:45:42.792409 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Dec 13 01:45:42.792465 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Dec 13 01:45:42.792519 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Dec 13 01:45:42.792566 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Dec 13 01:45:42.792617 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Dec 13 01:45:42.792663 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Dec 13 01:45:42.792715 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Dec 13 01:45:42.792762 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 13 01:45:42.792821 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Dec 13 01:45:42.792871 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Dec 13 01:45:42.792917 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 13 01:45:42.792968 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Dec 13 01:45:42.793015 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Dec 13 01:45:42.793062 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Dec 13 01:45:42.793115 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Dec 13 01:45:42.793162 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Dec 13 01:45:42.793207 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Dec 13 01:45:42.793257 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Dec 13 01:45:42.793304 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 13 01:45:42.793356 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Dec 13 01:45:42.793403 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 13 01:45:42.793459 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Dec 13 01:45:42.793507 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Dec 13 01:45:42.793558 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Dec 13 01:45:42.793605 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Dec 13 01:45:42.793655 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Dec 13 01:45:42.793701 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Dec 13 01:45:42.793755 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Dec 13 01:45:42.793802 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Dec 13 01:45:42.793981 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Dec 13 01:45:42.794032 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Dec 13 01:45:42.794080 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Dec 13 01:45:42.794127 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Dec 13 01:45:42.794181 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Dec 13 01:45:42.794228 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Dec 13 01:45:42.794278 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Dec 13 01:45:42.794325 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Dec 13 01:45:42.794375 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Dec 13 01:45:42.794422 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Dec 13 01:45:42.794474 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Dec 13 01:45:42.794521 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Dec 13 01:45:42.794575 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Dec 13 01:45:42.794622 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Dec 13 01:45:42.794673 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Dec 13 01:45:42.794720 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Dec 13 01:45:42.794780 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 01:45:42.794790 kernel: PCI: CLS 32 bytes, default 64 Dec 13 01:45:42.794797 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 01:45:42.794804 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Dec 13 01:45:42.794810 kernel: clocksource: Switched to clocksource tsc Dec 13 01:45:42.794823 kernel: Initialise system trusted keyrings Dec 13 01:45:42.794830 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 01:45:42.794837 kernel: Key type asymmetric registered Dec 13 01:45:42.794843 kernel: Asymmetric key parser 'x509' registered Dec 13 01:45:42.794851 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:45:42.794858 kernel: io scheduler mq-deadline registered Dec 13 01:45:42.794864 kernel: io scheduler kyber registered Dec 13 01:45:42.794870 kernel: io scheduler bfq registered Dec 13 01:45:42.795091 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Dec 13 01:45:42.795151 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.795205 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Dec 13 01:45:42.795257 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.795314 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Dec 13 01:45:42.795366 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.795426 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Dec 13 01:45:42.795482 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.795535 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Dec 13 01:45:42.795587 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.795643 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Dec 13 01:45:42.795695 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.795747 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Dec 13 01:45:42.795799 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.796097 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Dec 13 01:45:42.796156 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.796209 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Dec 13 01:45:42.796260 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.796311 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Dec 13 01:45:42.796361 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.796418 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Dec 13 01:45:42.796480 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.796536 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Dec 13 01:45:42.796587 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.796639 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Dec 13 01:45:42.796690 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.796741 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Dec 13 01:45:42.796792 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.797081 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Dec 13 01:45:42.797139 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.797195 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Dec 13 01:45:42.797247 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.797300 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Dec 13 01:45:42.797357 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.797410 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Dec 13 01:45:42.797462 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.797514 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Dec 13 01:45:42.797565 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.797623 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Dec 13 01:45:42.797682 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.797738 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Dec 13 01:45:42.797789 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.797877 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Dec 13 01:45:42.797930 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.797982 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Dec 13 01:45:42.798037 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.798104 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Dec 13 01:45:42.798155 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.798207 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Dec 13 01:45:42.798258 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.798309 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Dec 13 01:45:42.798361 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.798412 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Dec 13 01:45:42.798497 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.798549 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Dec 13 01:45:42.798598 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.798649 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Dec 13 01:45:42.798701 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.798752 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Dec 13 01:45:42.798801 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.799908 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Dec 13 01:45:42.799968 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.800027 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Dec 13 01:45:42.800079 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:45:42.800089 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Dec 13 01:45:42.800095 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:45:42.800102 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:45:42.800108 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Dec 13 01:45:42.800114 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:45:42.800123 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:45:42.800175 kernel: rtc_cmos 00:01: registered as rtc0 Dec 13 01:45:42.800222 kernel: rtc_cmos 00:01: setting system clock to 2024-12-13T01:45:42 UTC (1734054342) Dec 13 01:45:42.800267 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Dec 13 01:45:42.800276 kernel: intel_pstate: CPU model not supported Dec 13 01:45:42.800282 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:45:42.800306 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:45:42.800312 kernel: Segment Routing with IPv6 Dec 13 01:45:42.800321 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:45:42.800328 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:45:42.800334 kernel: Key type dns_resolver registered Dec 13 01:45:42.800340 kernel: IPI shorthand broadcast: enabled Dec 13 01:45:42.800347 kernel: sched_clock: Marking stable (906003767, 226214227)->(1190140307, -57922313) Dec 13 01:45:42.800353 kernel: registered taskstats version 1 Dec 13 01:45:42.800360 kernel: Loading compiled-in X.509 certificates Dec 13 01:45:42.800366 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:45:42.800372 kernel: Key type .fscrypt registered Dec 13 01:45:42.800378 kernel: Key type fscrypt-provisioning registered Dec 13 01:45:42.800386 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:45:42.800392 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:45:42.800398 kernel: ima: No architecture policies found
Dec 13 01:45:42.800404 kernel: clk: Disabling unused clocks
Dec 13 01:45:42.800411 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:45:42.800417 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:45:42.800424 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:45:42.800430 kernel: Run /init as init process
Dec 13 01:45:42.800437 kernel: with arguments:
Dec 13 01:45:42.800444 kernel: /init
Dec 13 01:45:42.800450 kernel: with environment:
Dec 13 01:45:42.800456 kernel: HOME=/
Dec 13 01:45:42.800462 kernel: TERM=linux
Dec 13 01:45:42.800469 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:45:42.800477 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:45:42.800486 systemd[1]: Detected virtualization vmware.
Dec 13 01:45:42.800494 systemd[1]: Detected architecture x86-64.
Dec 13 01:45:42.800501 systemd[1]: Running in initrd.
Dec 13 01:45:42.800507 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:45:42.800514 systemd[1]: Hostname set to .
Dec 13 01:45:42.800520 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:45:42.800527 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:45:42.800533 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:45:42.800540 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:45:42.800548 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:45:42.800555 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:45:42.800561 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:45:42.800568 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:45:42.800576 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:45:42.800583 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:45:42.800590 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:45:42.800597 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:45:42.800604 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:45:42.800610 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:45:42.800617 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:45:42.800623 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:45:42.800630 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:45:42.800637 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:45:42.800644 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:45:42.800650 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:45:42.800658 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:45:42.800664 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:45:42.800671 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:45:42.800678 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:45:42.800684 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:45:42.800691 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:45:42.800697 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:45:42.800704 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:45:42.800712 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:45:42.800718 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:45:42.800725 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:45:42.800743 systemd-journald[216]: Collecting audit messages is disabled.
Dec 13 01:45:42.800760 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:45:42.800767 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:45:42.800774 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:45:42.800781 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:45:42.800788 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:45:42.800796 kernel: Bridge firewalling registered
Dec 13 01:45:42.800802 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:45:42.800809 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:45:42.801412 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:45:42.801422 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:45:42.801429 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:45:42.801436 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:45:42.801443 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:45:42.801452 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:45:42.801460 systemd-journald[216]: Journal started
Dec 13 01:45:42.801475 systemd-journald[216]: Runtime Journal (/run/log/journal/4438d13f07ea4aff844c23e0f2f7c25f) is 4.8M, max 38.6M, 33.8M free.
Dec 13 01:45:42.744645 systemd-modules-load[217]: Inserted module 'overlay'
Dec 13 01:45:42.803214 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:45:42.765208 systemd-modules-load[217]: Inserted module 'br_netfilter'
Dec 13 01:45:42.808953 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:45:42.809415 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:45:42.812892 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:45:42.814074 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:45:42.814907 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:45:42.822041 dracut-cmdline[247]: dracut-dracut-053
Dec 13 01:45:42.823746 dracut-cmdline[247]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:45:42.841379 systemd-resolved[249]: Positive Trust Anchors:
Dec 13 01:45:42.841388 systemd-resolved[249]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:45:42.841411 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:45:42.843734 systemd-resolved[249]: Defaulting to hostname 'linux'.
Dec 13 01:45:42.844298 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:45:42.844460 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:45:42.871830 kernel: SCSI subsystem initialized
Dec 13 01:45:42.878833 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:45:42.885832 kernel: iscsi: registered transport (tcp)
Dec 13 01:45:42.898830 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:45:42.898860 kernel: QLogic iSCSI HBA Driver
Dec 13 01:45:42.918626 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:45:42.922954 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:45:42.939189 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:45:42.939235 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:45:42.939245 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:45:42.969828 kernel: raid6: avx2x4 gen() 52937 MB/s
Dec 13 01:45:42.986830 kernel: raid6: avx2x2 gen() 53595 MB/s
Dec 13 01:45:43.004047 kernel: raid6: avx2x1 gen() 44582 MB/s
Dec 13 01:45:43.004074 kernel: raid6: using algorithm avx2x2 gen() 53595 MB/s
Dec 13 01:45:43.022010 kernel: raid6: .... xor() 30877 MB/s, rmw enabled
Dec 13 01:45:43.022031 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:45:43.035837 kernel: xor: automatically using best checksumming function avx
Dec 13 01:45:43.133834 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:45:43.139177 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:45:43.143917 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:45:43.151145 systemd-udevd[432]: Using default interface naming scheme 'v255'.
Dec 13 01:45:43.153572 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:45:43.158904 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:45:43.165707 dracut-pre-trigger[437]: rd.md=0: removing MD RAID activation
Dec 13 01:45:43.180995 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:45:43.184902 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:45:43.255308 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:45:43.261934 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:45:43.270357 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:45:43.271171 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:45:43.271686 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:45:43.271954 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:45:43.277976 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:45:43.285517 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:45:43.310829 kernel: libata version 3.00 loaded.
Dec 13 01:45:43.313829 kernel: ata_piix 0000:00:07.1: version 2.13
Dec 13 01:45:43.321158 kernel: scsi host0: ata_piix
Dec 13 01:45:43.321495 kernel: scsi host1: ata_piix
Dec 13 01:45:43.321699 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Dec 13 01:45:43.321710 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Dec 13 01:45:43.340832 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI
Dec 13 01:45:43.340870 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Dec 13 01:45:43.343069 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Dec 13 01:45:43.353114 kernel: vmw_pvscsi: using 64bit dma
Dec 13 01:45:43.353134 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Dec 13 01:45:43.353216 kernel: vmw_pvscsi: max_id: 16
Dec 13 01:45:43.353229 kernel: vmw_pvscsi: setting ring_pages to 8
Dec 13 01:45:43.360044 kernel: vmw_pvscsi: enabling reqCallThreshold
Dec 13 01:45:43.360077 kernel: vmw_pvscsi: driver-based request coalescing enabled
Dec 13 01:45:43.360085 kernel: vmw_pvscsi: using MSI-X
Dec 13 01:45:43.361269 kernel: scsi host2: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Dec 13 01:45:43.362849 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:45:43.364985 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #2
Dec 13 01:45:43.365985 kernel: scsi 2:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Dec 13 01:45:43.366633 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:45:43.366707 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:45:43.367080 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:45:43.367190 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:45:43.367255 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:45:43.367410 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:45:43.377006 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:45:43.387876 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:45:43.388522 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:45:43.401400 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:45:43.485861 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Dec 13 01:45:43.491832 kernel: scsi 1:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Dec 13 01:45:43.499291 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:45:43.499320 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:45:43.502830 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Dec 13 01:45:43.520501 kernel: sd 2:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Dec 13 01:45:43.578517 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Dec 13 01:45:43.579031 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:45:43.579042 kernel: sd 2:0:0:0: [sda] Write Protect is off
Dec 13 01:45:43.579111 kernel: sd 2:0:0:0: [sda] Mode Sense: 31 00 00 00
Dec 13 01:45:43.579413 kernel: sd 2:0:0:0: [sda] Cache data unavailable
Dec 13 01:45:43.579485 kernel: sd 2:0:0:0: [sda] Assuming drive cache: write through
Dec 13 01:45:43.579548 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Dec 13 01:45:43.579615 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:45:43.579624 kernel: sd 2:0:0:0: [sda] Attached SCSI disk
Dec 13 01:45:43.881855 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (476)
Dec 13 01:45:43.887894 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Dec 13 01:45:43.891890 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Dec 13 01:45:43.897037 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Dec 13 01:45:43.943837 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (480)
Dec 13 01:45:43.949166 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Dec 13 01:45:43.949327 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Dec 13 01:45:43.952907 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:45:44.021847 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:45:44.054958 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:45:45.247862 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:45:45.248021 disk-uuid[589]: The operation has completed successfully.
Dec 13 01:45:45.620157 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:45:45.620211 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:45:45.624936 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:45:45.627203 sh[606]: Success
Dec 13 01:45:45.635833 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 01:45:45.680858 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:45:45.682896 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:45:45.683255 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:45:45.726297 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:45:45.726340 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:45:45.726351 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:45:45.726362 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:45:45.727391 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:45:45.829838 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:45:45.855318 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:45:45.867045 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Dec 13 01:45:45.868510 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:45:45.967623 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:45:45.967674 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:45:45.967690 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:45:46.001877 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:45:46.007186 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:45:46.008826 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:45:46.011683 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:45:46.018721 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:45:46.050351 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Dec 13 01:45:46.059335 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:45:46.096358 ignition[666]: Ignition 2.19.0
Dec 13 01:45:46.096365 ignition[666]: Stage: fetch-offline
Dec 13 01:45:46.096384 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:45:46.096389 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 01:45:46.096465 ignition[666]: parsed url from cmdline: ""
Dec 13 01:45:46.096467 ignition[666]: no config URL provided
Dec 13 01:45:46.096470 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:45:46.096475 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:45:46.097128 ignition[666]: config successfully fetched
Dec 13 01:45:46.097423 ignition[666]: parsing config with SHA512: fd1f3ba291a2a7f6f1e6e4a1f1ec18995c41de4cc652e5c96314a9dec3b6107935c283eb0893e9e9e51b6911c617d32873095553cc48b0357bddde7de3a6fb0a
Dec 13 01:45:46.100119 unknown[666]: fetched base config from "system"
Dec 13 01:45:46.100125 unknown[666]: fetched user config from "vmware"
Dec 13 01:45:46.100364 ignition[666]: fetch-offline: fetch-offline passed
Dec 13 01:45:46.100401 ignition[666]: Ignition finished successfully
Dec 13 01:45:46.101266 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:45:46.115901 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:45:46.118936 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:45:46.131912 systemd-networkd[801]: lo: Link UP
Dec 13 01:45:46.131918 systemd-networkd[801]: lo: Gained carrier
Dec 13 01:45:46.132588 systemd-networkd[801]: Enumeration completed
Dec 13 01:45:46.132849 systemd-networkd[801]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Dec 13 01:45:46.133047 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:45:46.133222 systemd[1]: Reached target network.target - Network.
Dec 13 01:45:46.134907 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Dec 13 01:45:46.135022 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Dec 13 01:45:46.133318 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:45:46.136025 systemd-networkd[801]: ens192: Link UP
Dec 13 01:45:46.136028 systemd-networkd[801]: ens192: Gained carrier
Dec 13 01:45:46.140931 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:45:46.148309 ignition[803]: Ignition 2.19.0
Dec 13 01:45:46.148315 ignition[803]: Stage: kargs
Dec 13 01:45:46.148435 ignition[803]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:45:46.148442 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 01:45:46.149018 ignition[803]: kargs: kargs passed
Dec 13 01:45:46.149048 ignition[803]: Ignition finished successfully
Dec 13 01:45:46.150246 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:45:46.153964 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:45:46.160736 ignition[810]: Ignition 2.19.0
Dec 13 01:45:46.160746 ignition[810]: Stage: disks
Dec 13 01:45:46.161116 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:45:46.161126 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 01:45:46.161685 ignition[810]: disks: disks passed
Dec 13 01:45:46.161709 ignition[810]: Ignition finished successfully
Dec 13 01:45:46.162197 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:45:46.162597 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:45:46.162733 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:45:46.162939 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:45:46.163125 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:45:46.163293 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:45:46.169936 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:45:46.221517 systemd-fsck[818]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 01:45:46.222524 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:45:46.227907 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:45:46.284869 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:45:46.285051 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:45:46.285453 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:45:46.289902 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:45:46.290878 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:45:46.291735 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:45:46.291765 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:45:46.291781 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:45:46.297846 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (826)
Dec 13 01:45:46.299849 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:45:46.301058 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:45:46.301071 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:45:46.301079 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:45:46.301646 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:45:46.305836 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:45:46.306746 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:45:46.331939 initrd-setup-root[850]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:45:46.334511 initrd-setup-root[857]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:45:46.336584 initrd-setup-root[864]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:45:46.338991 initrd-setup-root[871]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:45:46.389338 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:45:46.393890 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:45:46.394992 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:45:46.399840 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:45:46.414840 ignition[938]: INFO : Ignition 2.19.0
Dec 13 01:45:46.414840 ignition[938]: INFO : Stage: mount
Dec 13 01:45:46.414840 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:45:46.414840 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 01:45:46.415417 ignition[938]: INFO : mount: mount passed
Dec 13 01:45:46.416157 ignition[938]: INFO : Ignition finished successfully
Dec 13 01:45:46.417028 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:45:46.417537 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:45:46.423971 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:45:46.709213 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:45:46.713967 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:45:46.723856 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (950)
Dec 13 01:45:46.727263 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:45:46.727299 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:45:46.727307 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:45:46.731837 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:45:46.733152 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:45:46.745094 ignition[967]: INFO : Ignition 2.19.0
Dec 13 01:45:46.745094 ignition[967]: INFO : Stage: files
Dec 13 01:45:46.745621 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:45:46.745621 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 01:45:46.746032 ignition[967]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:45:46.747329 ignition[967]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:45:46.747329 ignition[967]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:45:46.749405 ignition[967]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:45:46.749600 ignition[967]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:45:46.749948 unknown[967]: wrote ssh authorized keys file for user: core
Dec 13 01:45:46.750210 ignition[967]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:45:46.751939 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:45:46.751939 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:45:46.787099 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:45:46.899193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 01:45:47.277469 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 01:45:47.522506 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:45:47.522506 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Dec 13 01:45:47.522506 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Dec 13 01:45:47.522506 ignition[967]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 01:45:47.522506 ignition[967]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:45:47.522506 ignition[967]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:45:47.522506 ignition[967]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 01:45:47.522506 ignition[967]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 13 01:45:47.522506 ignition[967]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:45:47.522506 ignition[967]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:45:47.522506 ignition[967]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 13 01:45:47.522506 ignition[967]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:45:47.622390 ignition[967]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:45:47.626078 ignition[967]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:45:47.626332 ignition[967]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:45:47.626332 ignition[967]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:45:47.626332 ignition[967]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:45:47.626941 ignition[967]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:45:47.626941 ignition[967]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:45:47.626941 ignition[967]: INFO : files: files passed
Dec 13 01:45:47.626941 ignition[967]: INFO : Ignition finished successfully
Dec 13 01:45:47.627562 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:45:47.630956 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:45:47.632623 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:45:47.635231 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:45:47.635454 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:45:47.639479 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:45:47.639479 initrd-setup-root-after-ignition[997]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:45:47.640081 initrd-setup-root-after-ignition[1001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:45:47.641194 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:45:47.641583 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:45:47.645936 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:45:47.659903 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:45:47.659964 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:45:47.660372 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:45:47.660499 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:45:47.660710 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:45:47.661179 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:45:47.671608 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:45:47.675931 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:45:47.681752 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:45:47.682207 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:45:47.682422 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:45:47.682583 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:45:47.682664 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:45:47.683557 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:45:47.683735 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:45:47.683903 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:45:47.684074 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:45:47.684239 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:45:47.684400 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:45:47.684557 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:45:47.684741 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:45:47.684910 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:45:47.685131 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:45:47.685452 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:45:47.685520 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:45:47.685800 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:45:47.686062 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:45:47.686253 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:45:47.686300 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:45:47.686466 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:45:47.686528 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:45:47.686787 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:45:47.686874 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:45:47.687078 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:45:47.687215 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:45:47.690843 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:45:47.691042 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:45:47.691254 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:45:47.691436 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:45:47.691510 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:45:47.691737 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:45:47.691805 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:45:47.692031 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:45:47.692099 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:45:47.692333 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:45:47.692393 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:45:47.696940 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:45:47.697060 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:45:47.697134 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:45:47.700032 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:45:47.700164 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:45:47.700264 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:45:47.700558 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:45:47.700642 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:45:47.704624 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:45:47.705865 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:45:47.706830 ignition[1022]: INFO : Ignition 2.19.0
Dec 13 01:45:47.706830 ignition[1022]: INFO : Stage: umount
Dec 13 01:45:47.706830 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:45:47.706830 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 01:45:47.708038 ignition[1022]: INFO : umount: umount passed
Dec 13 01:45:47.708038 ignition[1022]: INFO : Ignition finished successfully
Dec 13 01:45:47.708250 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:45:47.708299 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:45:47.709156 systemd[1]: Stopped target network.target - Network.
Dec 13 01:45:47.709393 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:45:47.709538 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:45:47.709892 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:45:47.709916 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:45:47.710426 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:45:47.710452 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:45:47.710565 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:45:47.710588 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:45:47.710786 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:45:47.711583 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:45:47.717126 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:45:47.717195 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:45:47.718071 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:45:47.718109 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:45:47.718902 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:45:47.718962 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:45:47.719395 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:45:47.719424 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:45:47.722897 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:45:47.723004 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:45:47.723030 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:45:47.723172 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Dec 13 01:45:47.723194 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Dec 13 01:45:47.723323 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:45:47.723344 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:45:47.723462 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:45:47.723483 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:45:47.723644 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:45:47.733769 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:45:47.734000 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:45:47.734433 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:45:47.734501 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:45:47.734879 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:45:47.734911 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:45:47.735303 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:45:47.735320 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:45:47.735478 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:45:47.735501 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:45:47.735807 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:45:47.735875 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:45:47.736154 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:45:47.736176 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:45:47.740058 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:45:47.740172 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:45:47.740201 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:45:47.740339 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:45:47.740362 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:45:47.741108 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:45:47.744033 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:45:47.744094 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:45:47.980156 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:45:47.980219 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:45:47.980610 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:45:47.980705 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:45:47.980736 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:45:47.985907 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:45:47.993065 systemd[1]: Switching root.
Dec 13 01:45:48.028507 systemd-journald[216]: Journal stopped
Dec 13 01:45:49.732471 systemd-journald[216]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:45:49.732495 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:45:49.732503 kernel: SELinux: policy capability open_perms=1
Dec 13 01:45:49.732509 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:45:49.732514 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:45:49.732519 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:45:49.732526 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:45:49.732532 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:45:49.732538 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:45:49.732543 kernel: audit: type=1403 audit(1734054348.417:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:45:49.732550 systemd[1]: Successfully loaded SELinux policy in 32.974ms.
Dec 13 01:45:49.732556 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.748ms.
Dec 13 01:45:49.732563 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:45:49.732571 systemd[1]: Detected virtualization vmware.
Dec 13 01:45:49.732578 systemd[1]: Detected architecture x86-64.
Dec 13 01:45:49.732585 systemd[1]: Detected first boot.
Dec 13 01:45:49.732591 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:45:49.732600 zram_generator::config[1066]: No configuration found.
Dec 13 01:45:49.732607 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:45:49.732614 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Dec 13 01:45:49.732621 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Dec 13 01:45:49.732628 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:45:49.732634 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:45:49.732640 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:45:49.732649 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:45:49.732656 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:45:49.732663 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:45:49.732669 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:45:49.732676 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:45:49.732683 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:45:49.732690 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:45:49.732697 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:45:49.732704 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:45:49.732711 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:45:49.732763 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:45:49.732773 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:45:49.732781 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:45:49.732788 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:45:49.732794 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:45:49.732803 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:45:49.732810 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:45:49.732827 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:45:49.732836 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:45:49.732843 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:45:49.732850 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:45:49.732857 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:45:49.732863 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:45:49.732872 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:45:49.732879 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:45:49.732886 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:45:49.732893 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:45:49.732900 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:45:49.732908 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:45:49.732915 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:45:49.732922 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:45:49.732929 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:45:49.732937 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:45:49.732944 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:45:49.732951 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:45:49.732958 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:45:49.732966 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:45:49.732974 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:45:49.732981 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:45:49.732988 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:45:49.732995 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Dec 13 01:45:49.733002 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:45:49.733009 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:45:49.733016 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:45:49.733024 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:45:49.733031 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:45:49.733038 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:45:49.733045 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:45:49.733052 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:45:49.733059 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:45:49.733066 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:45:49.733073 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:45:49.733080 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:45:49.733088 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:45:49.733095 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:45:49.733102 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:45:49.733109 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:45:49.733116 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:45:49.733123 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:45:49.733130 systemd[1]: Stopped verity-setup.service.
Dec 13 01:45:49.733137 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:45:49.733145 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:45:49.733152 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:45:49.733159 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:45:49.733166 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:45:49.733174 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:45:49.733181 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:45:49.733188 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:45:49.733194 kernel: loop: module loaded
Dec 13 01:45:49.733201 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:45:49.733209 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:45:49.733216 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:45:49.733223 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:45:49.733230 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:45:49.733237 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:45:49.733244 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:45:49.733251 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:45:49.733258 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:45:49.733266 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:45:49.733273 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:45:49.733280 kernel: fuse: init (API version 7.39)
Dec 13 01:45:49.733286 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:45:49.733293 kernel: ACPI: bus type drm_connector registered
Dec 13 01:45:49.733299 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:45:49.733306 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:45:49.733313 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:45:49.733320 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:45:49.733329 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:45:49.733336 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:45:49.733344 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:45:49.733350 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:45:49.733357 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:45:49.733364 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:45:49.733386 systemd-journald[1156]: Collecting audit messages is disabled.
Dec 13 01:45:49.733406 systemd-journald[1156]: Journal started
Dec 13 01:45:49.733424 systemd-journald[1156]: Runtime Journal (/run/log/journal/6eac7533705e4ae6bc6af172e79be68a) is 4.8M, max 38.6M, 33.8M free.
Dec 13 01:45:49.500373 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:45:49.516164 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 13 01:45:49.516497 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:45:49.733995 jq[1133]: true
Dec 13 01:45:49.734400 jq[1165]: true
Dec 13 01:45:49.735830 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:45:49.737873 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:45:49.778523 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:45:49.778565 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:45:49.794269 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:45:49.794319 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:45:49.810139 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:45:49.833311 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:45:49.833362 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:45:49.832933 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:45:49.833188 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:45:49.833565 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:45:49.838219 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:45:49.838940 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:45:49.853029 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:45:49.858962 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:45:49.861916 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:45:49.864742 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:45:49.871981 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:45:49.873886 systemd-journald[1156]: Time spent on flushing to /var/log/journal/6eac7533705e4ae6bc6af172e79be68a is 18.793ms for 1836 entries.
Dec 13 01:45:49.873886 systemd-journald[1156]: System Journal (/var/log/journal/6eac7533705e4ae6bc6af172e79be68a) is 8.0M, max 584.8M, 576.8M free.
Dec 13 01:45:50.130912 systemd-journald[1156]: Received client request to flush runtime journal.
Dec 13 01:45:50.130962 kernel: loop0: detected capacity change from 0 to 2976
Dec 13 01:45:50.130981 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:45:49.890355 ignition[1176]: Ignition 2.19.0
Dec 13 01:45:49.881011 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:45:49.890593 ignition[1176]: deleting config from guestinfo properties
Dec 13 01:45:49.959506 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Dec 13 01:45:49.958490 ignition[1176]: Successfully deleted config
Dec 13 01:45:49.960423 udevadm[1217]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 01:45:49.975850 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:45:50.132064 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:45:50.152959 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:45:50.153330 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:45:50.167176 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:45:50.172979 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:45:50.188977 kernel: loop1: detected capacity change from 0 to 211296
Dec 13 01:45:50.194559 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Dec 13 01:45:50.194571 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Dec 13 01:45:50.199008 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:45:50.257830 kernel: loop2: detected capacity change from 0 to 142488
Dec 13 01:45:50.621836 kernel: loop3: detected capacity change from 0 to 140768
Dec 13 01:45:51.025022 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:45:51.031115 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:45:51.038841 kernel: loop4: detected capacity change from 0 to 2976
Dec 13 01:45:51.044437 systemd-udevd[1236]: Using default interface naming scheme 'v255'.
Dec 13 01:45:51.136834 kernel: loop5: detected capacity change from 0 to 211296
Dec 13 01:45:51.225258 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:45:51.236023 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:45:51.255770 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 01:45:51.274922 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:45:51.316696 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:45:51.320908 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Dec 13 01:45:51.321060 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 01:45:51.331891 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:45:51.353854 kernel: loop6: detected capacity change from 0 to 142488
Dec 13 01:45:51.356831 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Dec 13 01:45:51.367770 kernel: Guest personality initialized and is active
Dec 13 01:45:51.372828 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Dec 13 01:45:51.372871 kernel: Initialized host personality
Dec 13 01:45:51.385830 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1240)
Dec 13 01:45:51.391056 systemd-networkd[1244]: lo: Link UP
Dec 13 01:45:51.392576 systemd-networkd[1244]: lo: Gained carrier
Dec 13 01:45:51.393296 systemd-networkd[1244]: Enumeration completed
Dec 13 01:45:51.393512 systemd-networkd[1244]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Dec 13 01:45:51.394206 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:45:51.395941 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Dec 13 01:45:51.396062 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Dec 13 01:45:51.397827 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1240)
Dec 13 01:45:51.397885 systemd-networkd[1244]: ens192: Link UP
Dec 13 01:45:51.398026 systemd-networkd[1244]: ens192: Gained carrier
Dec 13 01:45:51.401877 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1254)
Dec 13 01:45:51.402888 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:45:51.445895 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 01:45:51.447662 (udev-worker)[1253]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Dec 13 01:45:51.452834 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:45:51.466013 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:45:51.467548 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Dec 13 01:45:51.471696 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:45:51.491109 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:45:51.494966 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:45:51.506236 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:45:51.515833 lvm[1281]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:45:51.538712 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:45:51.538979 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:45:51.543833 kernel: loop7: detected capacity change from 0 to 140768
Dec 13 01:45:51.543956 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:45:51.547167 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:45:51.572001 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:45:51.637917 (sd-merge)[1237]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Dec 13 01:45:51.638497 (sd-merge)[1237]: Merged extensions into '/usr'.
Dec 13 01:45:51.642048 systemd[1]: Reloading requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:45:51.642057 systemd[1]: Reloading...
Dec 13 01:45:51.693857 zram_generator::config[1316]: No configuration found.
Dec 13 01:45:51.767977 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Dec 13 01:45:51.784078 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:45:51.821096 systemd[1]: Reloading finished in 178 ms.
Dec 13 01:45:51.852424 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:45:51.852879 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:45:51.860733 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:45:51.864863 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:45:51.866357 systemd[1]: Reloading requested from client PID 1373 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:45:51.866411 systemd[1]: Reloading...
Dec 13 01:45:51.876075 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:45:51.876296 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:45:51.876789 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:45:51.876972 systemd-tmpfiles[1374]: ACLs are not supported, ignoring.
Dec 13 01:45:51.877013 systemd-tmpfiles[1374]: ACLs are not supported, ignoring.
Dec 13 01:45:51.890879 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:45:51.890886 systemd-tmpfiles[1374]: Skipping /boot
Dec 13 01:45:51.899387 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:45:51.899396 systemd-tmpfiles[1374]: Skipping /boot
Dec 13 01:45:51.909886 zram_generator::config[1399]: No configuration found.
Dec 13 01:45:51.998562 ldconfig[1180]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:45:51.999335 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Dec 13 01:45:52.014987 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:45:52.052030 systemd[1]: Reloading finished in 185 ms.
Dec 13 01:45:52.067740 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:45:52.068193 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:45:52.077917 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:45:52.084529 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:45:52.086124 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:45:52.090935 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:45:52.091915 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:45:52.094659 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:45:52.098982 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:45:52.100771 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:45:52.102973 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:45:52.103665 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:45:52.103742 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:45:52.104284 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:45:52.104386 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:45:52.106575 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:45:52.112119 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:45:52.112322 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:45:52.112402 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:45:52.112835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:45:52.112988 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:45:52.113353 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:45:52.113443 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:45:52.120495 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:45:52.127073 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:45:52.129516 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:45:52.133216 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 13 01:45:52.133443 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:45:52.133581 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:45:52.134303 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:45:52.134716 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:45:52.134808 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:45:52.137612 systemd[1]: Finished ensure-sysext.service. Dec 13 01:45:52.143645 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:45:52.150045 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:45:52.151315 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:45:52.152008 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:45:52.152117 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:45:52.154058 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:45:52.154207 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:45:52.154638 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:45:52.158704 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:45:52.158836 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:45:52.160000 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:45:52.167892 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Dec 13 01:45:52.172728 systemd-resolved[1466]: Positive Trust Anchors: Dec 13 01:45:52.172937 systemd-resolved[1466]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:45:52.172990 systemd-resolved[1466]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:45:52.188849 systemd-resolved[1466]: Defaulting to hostname 'linux'. Dec 13 01:45:52.190515 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:45:52.190780 systemd[1]: Reached target network.target - Network. Dec 13 01:45:52.190941 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:45:52.201513 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:45:52.201713 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:45:52.206767 augenrules[1503]: No rules Dec 13 01:45:52.207690 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:45:52.297476 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:45:52.297961 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:45:52.298070 systemd[1]: Reached target sysinit.target - System Initialization. 
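The positive trust anchor logged above is the root DS record (KSK-2017) that systemd-resolved ships built in; a site can extend or override the anchor set via drop-in files, roughly as follows (the file name is an assumption; the DS record itself is copied from the log):

```ini
# /etc/dnssec-trust-anchors.d/root.positive (sketch)
# Same root DS record systemd-resolved reported as its positive trust anchor.
. IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
```

The negative anchors in the log (home.arpa, the RFC 1918 reverse zones, local, and so on) are the corresponding built-in list of domains excluded from DNSSEC validation; site-local additions would go in matching `*.negative` files, one domain per line.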
Dec 13 01:45:52.298369 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:45:52.298617 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:45:52.298968 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:45:52.299177 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:45:52.299323 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:45:52.299504 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:45:52.299531 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:45:52.299644 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:45:52.301744 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:45:52.303468 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:45:52.312496 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:45:52.313268 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:45:52.313532 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:45:52.313705 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:45:52.313965 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:45:52.313985 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:45:52.315186 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:45:52.317936 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:45:52.321808 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Dec 13 01:45:52.323953 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:45:52.324060 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:45:52.325553 jq[1514]: false Dec 13 01:45:52.325697 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:45:52.327885 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:45:52.329003 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:45:52.331883 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:45:52.339883 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:45:52.340222 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:45:52.340688 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:45:52.344451 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:45:52.346897 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:45:52.356918 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Dec 13 01:45:52.360514 jq[1524]: true Dec 13 01:45:52.360880 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:45:52.361017 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:45:52.363490 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:45:52.364204 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 13 01:45:52.367272 dbus-daemon[1513]: [system] SELinux support is enabled Dec 13 01:45:52.369090 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:45:52.373084 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:45:52.373114 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:45:52.375516 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:45:52.375535 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:45:52.378344 extend-filesystems[1515]: Found loop4 Dec 13 01:45:52.385746 extend-filesystems[1515]: Found loop5 Dec 13 01:45:52.385746 extend-filesystems[1515]: Found loop6 Dec 13 01:45:52.385746 extend-filesystems[1515]: Found loop7 Dec 13 01:45:52.385746 extend-filesystems[1515]: Found sda Dec 13 01:45:52.385746 extend-filesystems[1515]: Found sda1 Dec 13 01:45:52.385746 extend-filesystems[1515]: Found sda2 Dec 13 01:45:52.385746 extend-filesystems[1515]: Found sda3 Dec 13 01:45:52.385746 extend-filesystems[1515]: Found usr Dec 13 01:45:52.385746 extend-filesystems[1515]: Found sda4 Dec 13 01:45:52.385746 extend-filesystems[1515]: Found sda6 Dec 13 01:45:52.385746 extend-filesystems[1515]: Found sda7 Dec 13 01:45:52.385746 extend-filesystems[1515]: Found sda9 Dec 13 01:45:52.385746 extend-filesystems[1515]: Checking size of /dev/sda9 Dec 13 01:45:52.399033 jq[1531]: true Dec 13 01:45:52.398972 systemd-logind[1521]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:45:52.398984 systemd-logind[1521]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:45:52.401535 
update_engine[1522]: I20241213 01:45:52.398993 1522 main.cc:92] Flatcar Update Engine starting Dec 13 01:45:52.400951 systemd-logind[1521]: New seat seat0. Dec 13 01:45:52.406515 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Dec 13 01:45:52.406748 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:45:52.407373 (ntainerd)[1540]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:45:52.413948 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Dec 13 01:45:52.414148 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:45:52.418029 update_engine[1522]: I20241213 01:45:52.417042 1522 update_check_scheduler.cc:74] Next update check in 2m16s Dec 13 01:45:52.417325 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:45:52.420033 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:45:52.420146 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:45:52.423647 extend-filesystems[1515]: Old size kept for /dev/sda9 Dec 13 01:45:52.423804 extend-filesystems[1515]: Found sr0 Dec 13 01:45:52.424548 tar[1529]: linux-amd64/helm Dec 13 01:45:52.424932 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:45:52.425063 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:45:52.454140 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Dec 13 01:45:52.456600 unknown[1548]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Dec 13 01:45:52.480828 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1245) Dec 13 01:45:52.483384 unknown[1548]: Core dump limit set to -1 Dec 13 01:47:02.199270 systemd-resolved[1466]: Clock change detected. Flushing caches. 
Dec 13 01:47:02.199329 systemd-timesyncd[1492]: Contacted time server 45.83.234.123:123 (0.flatcar.pool.ntp.org). Dec 13 01:47:02.199365 systemd-timesyncd[1492]: Initial clock synchronization to Fri 2024-12-13 01:47:02.199237 UTC. Dec 13 01:47:02.223941 kernel: NET: Registered PF_VSOCK protocol family Dec 13 01:47:02.301712 locksmithd[1554]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:47:02.439222 bash[1576]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:47:02.438173 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:47:02.439041 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:47:02.566007 sshd_keygen[1530]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:47:02.604567 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:47:02.610106 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:47:02.616364 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:47:02.616627 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:47:02.624512 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:47:02.626565 containerd[1540]: time="2024-12-13T01:47:02.626513913Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:47:02.632327 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:47:02.637194 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:47:02.638630 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:47:02.638965 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:47:02.666602 containerd[1540]: time="2024-12-13T01:47:02.666565469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:47:02.667582 containerd[1540]: time="2024-12-13T01:47:02.667554533Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:47:02.667582 containerd[1540]: time="2024-12-13T01:47:02.667577398Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:47:02.667661 containerd[1540]: time="2024-12-13T01:47:02.667593954Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:47:02.667728 containerd[1540]: time="2024-12-13T01:47:02.667712884Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:47:02.667751 containerd[1540]: time="2024-12-13T01:47:02.667731213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:47:02.667799 containerd[1540]: time="2024-12-13T01:47:02.667784124Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:47:02.667815 containerd[1540]: time="2024-12-13T01:47:02.667799812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:47:02.667964 containerd[1540]: time="2024-12-13T01:47:02.667950347Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:47:02.667964 containerd[1540]: time="2024-12-13T01:47:02.667961880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:47:02.667998 containerd[1540]: time="2024-12-13T01:47:02.667970784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:47:02.667998 containerd[1540]: time="2024-12-13T01:47:02.667976782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:47:02.668272 containerd[1540]: time="2024-12-13T01:47:02.668036258Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:47:02.668272 containerd[1540]: time="2024-12-13T01:47:02.668196201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:47:02.668557 containerd[1540]: time="2024-12-13T01:47:02.668540186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:47:02.668557 containerd[1540]: time="2024-12-13T01:47:02.668553613Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:47:02.668783 containerd[1540]: time="2024-12-13T01:47:02.668609127Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 01:47:02.668783 containerd[1540]: time="2024-12-13T01:47:02.668645851Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:47:02.671027 containerd[1540]: time="2024-12-13T01:47:02.670964325Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:47:02.671027 containerd[1540]: time="2024-12-13T01:47:02.671011825Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:47:02.671027 containerd[1540]: time="2024-12-13T01:47:02.671027449Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:47:02.671136 containerd[1540]: time="2024-12-13T01:47:02.671042275Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:47:02.671136 containerd[1540]: time="2024-12-13T01:47:02.671057763Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:47:02.671496 containerd[1540]: time="2024-12-13T01:47:02.671175367Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:47:02.671496 containerd[1540]: time="2024-12-13T01:47:02.671361720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:47:02.671496 containerd[1540]: time="2024-12-13T01:47:02.671432169Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:47:02.671496 containerd[1540]: time="2024-12-13T01:47:02.671446768Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:47:02.671496 containerd[1540]: time="2024-12-13T01:47:02.671459157Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Dec 13 01:47:02.671496 containerd[1540]: time="2024-12-13T01:47:02.671472871Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:47:02.671496 containerd[1540]: time="2024-12-13T01:47:02.671482647Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:47:02.671496 containerd[1540]: time="2024-12-13T01:47:02.671490110Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:47:02.671496 containerd[1540]: time="2024-12-13T01:47:02.671498430Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:47:02.671669 containerd[1540]: time="2024-12-13T01:47:02.671506838Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:47:02.671669 containerd[1540]: time="2024-12-13T01:47:02.671514155Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:47:02.671669 containerd[1540]: time="2024-12-13T01:47:02.671521702Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:47:02.671669 containerd[1540]: time="2024-12-13T01:47:02.671528948Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:47:02.671669 containerd[1540]: time="2024-12-13T01:47:02.671542188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:47:02.671669 containerd[1540]: time="2024-12-13T01:47:02.671555226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Dec 13 01:47:02.671669 containerd[1540]: time="2024-12-13T01:47:02.671575972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:47:02.671669 containerd[1540]: time="2024-12-13T01:47:02.671589693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:47:02.671669 containerd[1540]: time="2024-12-13T01:47:02.671601689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:47:02.671669 containerd[1540]: time="2024-12-13T01:47:02.671614435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:47:02.671669 containerd[1540]: time="2024-12-13T01:47:02.671627150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:47:02.671669 containerd[1540]: time="2024-12-13T01:47:02.671639881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:47:02.671669 containerd[1540]: time="2024-12-13T01:47:02.671660612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:47:02.671963 containerd[1540]: time="2024-12-13T01:47:02.671677002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:47:02.671963 containerd[1540]: time="2024-12-13T01:47:02.671688143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:47:02.671963 containerd[1540]: time="2024-12-13T01:47:02.671697884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:47:02.671963 containerd[1540]: time="2024-12-13T01:47:02.671706421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Dec 13 01:47:02.671963 containerd[1540]: time="2024-12-13T01:47:02.671716374Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:47:02.671963 containerd[1540]: time="2024-12-13T01:47:02.671729648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:47:02.671963 containerd[1540]: time="2024-12-13T01:47:02.671736589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:47:02.671963 containerd[1540]: time="2024-12-13T01:47:02.671745944Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:47:02.671963 containerd[1540]: time="2024-12-13T01:47:02.671784405Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:47:02.671963 containerd[1540]: time="2024-12-13T01:47:02.671800090Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:47:02.671963 containerd[1540]: time="2024-12-13T01:47:02.671807827Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:47:02.671963 containerd[1540]: time="2024-12-13T01:47:02.671818393Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:47:02.671963 containerd[1540]: time="2024-12-13T01:47:02.671828083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:47:02.672170 containerd[1540]: time="2024-12-13T01:47:02.671842611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Dec 13 01:47:02.672170 containerd[1540]: time="2024-12-13T01:47:02.671852150Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:47:02.672170 containerd[1540]: time="2024-12-13T01:47:02.671862023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:47:02.673198 containerd[1540]: time="2024-12-13T01:47:02.672535108Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:47:02.673198 containerd[1540]: time="2024-12-13T01:47:02.672593284Z" level=info msg="Connect containerd service" Dec 13 01:47:02.673198 containerd[1540]: time="2024-12-13T01:47:02.672626467Z" level=info msg="using legacy CRI server" Dec 13 01:47:02.673198 containerd[1540]: time="2024-12-13T01:47:02.672635105Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:47:02.673198 containerd[1540]: time="2024-12-13T01:47:02.672708603Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:47:02.674150 containerd[1540]: time="2024-12-13T01:47:02.673711220Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:47:02.674150 containerd[1540]: time="2024-12-13T01:47:02.673822656Z" level=info msg="Start subscribing containerd event" Dec 13 
01:47:02.674150 containerd[1540]: time="2024-12-13T01:47:02.673855278Z" level=info msg="Start recovering state" Dec 13 01:47:02.674150 containerd[1540]: time="2024-12-13T01:47:02.673904326Z" level=info msg="Start event monitor" Dec 13 01:47:02.674150 containerd[1540]: time="2024-12-13T01:47:02.673916316Z" level=info msg="Start snapshots syncer" Dec 13 01:47:02.674150 containerd[1540]: time="2024-12-13T01:47:02.673948782Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:47:02.674150 containerd[1540]: time="2024-12-13T01:47:02.673955293Z" level=info msg="Start streaming server" Dec 13 01:47:02.674991 containerd[1540]: time="2024-12-13T01:47:02.674401786Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:47:02.674991 containerd[1540]: time="2024-12-13T01:47:02.674462384Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:47:02.682098 containerd[1540]: time="2024-12-13T01:47:02.681881443Z" level=info msg="containerd successfully booted in 0.056071s" Dec 13 01:47:02.681965 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:47:02.754264 tar[1529]: linux-amd64/LICENSE Dec 13 01:47:02.754358 tar[1529]: linux-amd64/README.md Dec 13 01:47:02.760871 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:47:02.886065 systemd-networkd[1244]: ens192: Gained IPv6LL Dec 13 01:47:02.887862 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:47:02.888833 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:47:02.894119 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Dec 13 01:47:02.895896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:47:02.898314 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:47:02.919482 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Dec 13 01:47:02.935351 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:47:02.935479 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Dec 13 01:47:02.936214 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:47:03.993042 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:47:03.998459 systemd[1]: Started sshd@0-139.178.70.110:22-36.138.19.180:49322.service - OpenSSH per-connection server daemon (36.138.19.180:49322). Dec 13 01:47:04.098420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:47:04.098809 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:47:04.099387 systemd[1]: Startup finished in 989ms (kernel) + 5.788s (initrd) + 6.013s (userspace) = 12.791s. Dec 13 01:47:04.105336 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:47:04.130106 login[1607]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 01:47:04.131800 login[1608]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 01:47:04.138826 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:47:04.145162 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:47:04.148979 systemd-logind[1521]: New session 2 of user core. Dec 13 01:47:04.153864 systemd-logind[1521]: New session 1 of user core. Dec 13 01:47:04.158223 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:47:04.163105 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Dec 13 01:47:04.165372 (systemd)[1704]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:47:04.235941 systemd[1704]: Queued start job for default target default.target. Dec 13 01:47:04.246875 systemd[1704]: Created slice app.slice - User Application Slice. Dec 13 01:47:04.246903 systemd[1704]: Reached target paths.target - Paths. Dec 13 01:47:04.246937 systemd[1704]: Reached target timers.target - Timers. Dec 13 01:47:04.247899 systemd[1704]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:47:04.255882 systemd[1704]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:47:04.255934 systemd[1704]: Reached target sockets.target - Sockets. Dec 13 01:47:04.255947 systemd[1704]: Reached target basic.target - Basic System. Dec 13 01:47:04.255970 systemd[1704]: Reached target default.target - Main User Target. Dec 13 01:47:04.255989 systemd[1704]: Startup finished in 86ms. Dec 13 01:47:04.256291 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:47:04.257616 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:47:04.259626 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:47:04.788811 sshd[1690]: Invalid user user from 36.138.19.180 port 49322 Dec 13 01:47:04.982947 sshd[1690]: Connection closed by invalid user user 36.138.19.180 port 49322 [preauth] Dec 13 01:47:04.983700 systemd[1]: sshd@0-139.178.70.110:22-36.138.19.180:49322.service: Deactivated successfully. Dec 13 01:47:05.175118 systemd[1]: Started sshd@1-139.178.70.110:22-36.138.19.180:49324.service - OpenSSH per-connection server daemon (36.138.19.180:49324). 
Dec 13 01:47:05.192970 kubelet[1697]: E1213 01:47:05.192911 1697 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:47:05.194348 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:47:05.194442 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:47:05.879802 sshd[1742]: Invalid user user from 36.138.19.180 port 49324 Dec 13 01:47:06.053813 sshd[1742]: Connection closed by invalid user user 36.138.19.180 port 49324 [preauth] Dec 13 01:47:06.055261 systemd[1]: sshd@1-139.178.70.110:22-36.138.19.180:49324.service: Deactivated successfully. Dec 13 01:47:06.258700 systemd[1]: Started sshd@2-139.178.70.110:22-36.138.19.180:49328.service - OpenSSH per-connection server daemon (36.138.19.180:49328). Dec 13 01:47:07.069432 sshd[1749]: Invalid user user from 36.138.19.180 port 49328 Dec 13 01:47:07.268904 sshd[1749]: Connection closed by invalid user user 36.138.19.180 port 49328 [preauth] Dec 13 01:47:07.270176 systemd[1]: sshd@2-139.178.70.110:22-36.138.19.180:49328.service: Deactivated successfully. Dec 13 01:47:07.481126 systemd[1]: Started sshd@3-139.178.70.110:22-36.138.19.180:49338.service - OpenSSH per-connection server daemon (36.138.19.180:49338). Dec 13 01:47:08.289787 sshd[1754]: Invalid user user from 36.138.19.180 port 49338 Dec 13 01:47:08.489753 sshd[1754]: Connection closed by invalid user user 36.138.19.180 port 49338 [preauth] Dec 13 01:47:08.491030 systemd[1]: sshd@3-139.178.70.110:22-36.138.19.180:49338.service: Deactivated successfully. Dec 13 01:47:08.702887 systemd[1]: Started sshd@4-139.178.70.110:22-36.138.19.180:49342.service - OpenSSH per-connection server daemon (36.138.19.180:49342). 
Dec 13 01:47:09.520464 sshd[1759]: Invalid user user from 36.138.19.180 port 49342 Dec 13 01:47:09.722018 sshd[1759]: Connection closed by invalid user user 36.138.19.180 port 49342 [preauth] Dec 13 01:47:09.723127 systemd[1]: sshd@4-139.178.70.110:22-36.138.19.180:49342.service: Deactivated successfully. Dec 13 01:47:09.911603 systemd[1]: Started sshd@5-139.178.70.110:22-36.138.19.180:49346.service - OpenSSH per-connection server daemon (36.138.19.180:49346). Dec 13 01:47:10.680726 sshd[1764]: Invalid user user from 36.138.19.180 port 49346 Dec 13 01:47:10.870298 sshd[1764]: Connection closed by invalid user user 36.138.19.180 port 49346 [preauth] Dec 13 01:47:10.871500 systemd[1]: sshd@5-139.178.70.110:22-36.138.19.180:49346.service: Deactivated successfully. Dec 13 01:47:11.075294 systemd[1]: Started sshd@6-139.178.70.110:22-36.138.19.180:49360.service - OpenSSH per-connection server daemon (36.138.19.180:49360). Dec 13 01:47:11.861703 sshd[1769]: Invalid user user from 36.138.19.180 port 49360 Dec 13 01:47:12.055114 sshd[1769]: Connection closed by invalid user user 36.138.19.180 port 49360 [preauth] Dec 13 01:47:12.056271 systemd[1]: sshd@6-139.178.70.110:22-36.138.19.180:49360.service: Deactivated successfully. Dec 13 01:47:12.224144 systemd[1]: Started sshd@7-139.178.70.110:22-36.138.19.180:49368.service - OpenSSH per-connection server daemon (36.138.19.180:49368). Dec 13 01:47:12.898576 sshd[1774]: Invalid user user from 36.138.19.180 port 49368 Dec 13 01:47:13.064254 sshd[1774]: Connection closed by invalid user user 36.138.19.180 port 49368 [preauth] Dec 13 01:47:13.065114 systemd[1]: sshd@7-139.178.70.110:22-36.138.19.180:49368.service: Deactivated successfully. Dec 13 01:47:13.334025 systemd[1]: Started sshd@8-139.178.70.110:22-36.138.19.180:49382.service - OpenSSH per-connection server daemon (36.138.19.180:49382). 
Dec 13 01:47:14.142497 sshd[1779]: Invalid user ubuntu from 36.138.19.180 port 49382 Dec 13 01:47:14.342114 sshd[1779]: Connection closed by invalid user ubuntu 36.138.19.180 port 49382 [preauth] Dec 13 01:47:14.343487 systemd[1]: sshd@8-139.178.70.110:22-36.138.19.180:49382.service: Deactivated successfully. Dec 13 01:47:14.545551 systemd[1]: Started sshd@9-139.178.70.110:22-36.138.19.180:38758.service - OpenSSH per-connection server daemon (36.138.19.180:38758). Dec 13 01:47:15.338876 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:47:15.339261 sshd[1784]: Invalid user ubuntu from 36.138.19.180 port 38758 Dec 13 01:47:15.346139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:47:15.402092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:47:15.404547 (kubelet)[1794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:47:15.504449 kubelet[1794]: E1213 01:47:15.504352 1794 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:47:15.507736 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:47:15.507829 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:47:15.533870 sshd[1784]: Connection closed by invalid user ubuntu 36.138.19.180 port 38758 [preauth] Dec 13 01:47:15.534529 systemd[1]: sshd@9-139.178.70.110:22-36.138.19.180:38758.service: Deactivated successfully. Dec 13 01:47:15.743197 systemd[1]: Started sshd@10-139.178.70.110:22-36.138.19.180:38770.service - OpenSSH per-connection server daemon (36.138.19.180:38770). 
Dec 13 01:47:16.556669 sshd[1804]: Invalid user ubuntu from 36.138.19.180 port 38770 Dec 13 01:47:16.756739 sshd[1804]: Connection closed by invalid user ubuntu 36.138.19.180 port 38770 [preauth] Dec 13 01:47:16.758060 systemd[1]: sshd@10-139.178.70.110:22-36.138.19.180:38770.service: Deactivated successfully. Dec 13 01:47:16.936027 systemd[1]: Started sshd@11-139.178.70.110:22-36.138.19.180:38778.service - OpenSSH per-connection server daemon (36.138.19.180:38778). Dec 13 01:47:17.633736 sshd[1809]: Invalid user ubuntu from 36.138.19.180 port 38778 Dec 13 01:47:17.805774 sshd[1809]: Connection closed by invalid user ubuntu 36.138.19.180 port 38778 [preauth] Dec 13 01:47:17.807150 systemd[1]: sshd@11-139.178.70.110:22-36.138.19.180:38778.service: Deactivated successfully. Dec 13 01:47:18.011936 systemd[1]: Started sshd@12-139.178.70.110:22-36.138.19.180:38780.service - OpenSSH per-connection server daemon (36.138.19.180:38780). Dec 13 01:47:18.817402 sshd[1814]: Invalid user ubuntu from 36.138.19.180 port 38780 Dec 13 01:47:19.019982 sshd[1814]: Connection closed by invalid user ubuntu 36.138.19.180 port 38780 [preauth] Dec 13 01:47:19.020891 systemd[1]: sshd@12-139.178.70.110:22-36.138.19.180:38780.service: Deactivated successfully. Dec 13 01:47:19.229861 systemd[1]: Started sshd@13-139.178.70.110:22-36.138.19.180:38782.service - OpenSSH per-connection server daemon (36.138.19.180:38782). Dec 13 01:47:20.049603 sshd[1819]: Invalid user ubuntu from 36.138.19.180 port 38782 Dec 13 01:47:20.251964 sshd[1819]: Connection closed by invalid user ubuntu 36.138.19.180 port 38782 [preauth] Dec 13 01:47:20.251672 systemd[1]: sshd@13-139.178.70.110:22-36.138.19.180:38782.service: Deactivated successfully. Dec 13 01:47:20.427068 systemd[1]: Started sshd@14-139.178.70.110:22-36.138.19.180:38798.service - OpenSSH per-connection server daemon (36.138.19.180:38798). 
Dec 13 01:47:21.113663 sshd[1824]: Invalid user ubuntu from 36.138.19.180 port 38798 Dec 13 01:47:21.282741 sshd[1824]: Connection closed by invalid user ubuntu 36.138.19.180 port 38798 [preauth] Dec 13 01:47:21.283767 systemd[1]: sshd@14-139.178.70.110:22-36.138.19.180:38798.service: Deactivated successfully. Dec 13 01:47:21.482553 systemd[1]: Started sshd@15-139.178.70.110:22-36.138.19.180:38802.service - OpenSSH per-connection server daemon (36.138.19.180:38802). Dec 13 01:47:22.267718 sshd[1829]: Invalid user ubuntu from 36.138.19.180 port 38802 Dec 13 01:47:22.460001 sshd[1829]: Connection closed by invalid user ubuntu 36.138.19.180 port 38802 [preauth] Dec 13 01:47:22.461074 systemd[1]: sshd@15-139.178.70.110:22-36.138.19.180:38802.service: Deactivated successfully. Dec 13 01:47:22.652399 systemd[1]: Started sshd@16-139.178.70.110:22-36.138.19.180:38818.service - OpenSSH per-connection server daemon (36.138.19.180:38818). Dec 13 01:47:23.365770 sshd[1834]: Invalid user ubuntu from 36.138.19.180 port 38818 Dec 13 01:47:23.541386 sshd[1834]: Connection closed by invalid user ubuntu 36.138.19.180 port 38818 [preauth] Dec 13 01:47:23.542412 systemd[1]: sshd@16-139.178.70.110:22-36.138.19.180:38818.service: Deactivated successfully. Dec 13 01:47:23.717640 systemd[1]: Started sshd@17-139.178.70.110:22-36.138.19.180:43430.service - OpenSSH per-connection server daemon (36.138.19.180:43430). Dec 13 01:47:24.413985 sshd[1839]: Invalid user ubuntu from 36.138.19.180 port 43430 Dec 13 01:47:24.585316 sshd[1839]: Connection closed by invalid user ubuntu 36.138.19.180 port 43430 [preauth] Dec 13 01:47:24.586598 systemd[1]: sshd@17-139.178.70.110:22-36.138.19.180:43430.service: Deactivated successfully. Dec 13 01:47:24.764780 systemd[1]: Started sshd@18-139.178.70.110:22-36.138.19.180:43442.service - OpenSSH per-connection server daemon (36.138.19.180:43442). 
Dec 13 01:47:25.462978 sshd[1844]: Invalid user ubuntu from 36.138.19.180 port 43442 Dec 13 01:47:25.634689 sshd[1844]: Connection closed by invalid user ubuntu 36.138.19.180 port 43442 [preauth] Dec 13 01:47:25.635736 systemd[1]: sshd@18-139.178.70.110:22-36.138.19.180:43442.service: Deactivated successfully. Dec 13 01:47:25.637237 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:47:25.641205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:47:25.745502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:47:25.748340 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:47:25.803748 kubelet[1856]: E1213 01:47:25.803713 1856 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:47:25.805074 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:47:25.805150 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:47:25.846837 systemd[1]: Started sshd@19-139.178.70.110:22-36.138.19.180:43458.service - OpenSSH per-connection server daemon (36.138.19.180:43458). Dec 13 01:47:26.673945 sshd[1865]: Invalid user ubuntu from 36.138.19.180 port 43458 Dec 13 01:47:26.877359 sshd[1865]: Connection closed by invalid user ubuntu 36.138.19.180 port 43458 [preauth] Dec 13 01:47:26.878768 systemd[1]: sshd@19-139.178.70.110:22-36.138.19.180:43458.service: Deactivated successfully. Dec 13 01:47:27.084587 systemd[1]: Started sshd@20-139.178.70.110:22-36.138.19.180:43468.service - OpenSSH per-connection server daemon (36.138.19.180:43468). 
Dec 13 01:47:27.899366 sshd[1870]: Invalid user ubuntu from 36.138.19.180 port 43468 Dec 13 01:47:28.100201 sshd[1870]: Connection closed by invalid user ubuntu 36.138.19.180 port 43468 [preauth] Dec 13 01:47:28.100834 systemd[1]: sshd@20-139.178.70.110:22-36.138.19.180:43468.service: Deactivated successfully. Dec 13 01:47:28.285569 systemd[1]: Started sshd@21-139.178.70.110:22-36.138.19.180:43484.service - OpenSSH per-connection server daemon (36.138.19.180:43484). Dec 13 01:47:28.987251 sshd[1875]: Invalid user ubuntu from 36.138.19.180 port 43484 Dec 13 01:47:29.160028 sshd[1875]: Connection closed by invalid user ubuntu 36.138.19.180 port 43484 [preauth] Dec 13 01:47:29.161181 systemd[1]: sshd@21-139.178.70.110:22-36.138.19.180:43484.service: Deactivated successfully. Dec 13 01:47:29.362078 systemd[1]: Started sshd@22-139.178.70.110:22-36.138.19.180:43486.service - OpenSSH per-connection server daemon (36.138.19.180:43486). Dec 13 01:47:30.165555 sshd[1880]: Invalid user ubuntu from 36.138.19.180 port 43486 Dec 13 01:47:30.363470 sshd[1880]: Connection closed by invalid user ubuntu 36.138.19.180 port 43486 [preauth] Dec 13 01:47:30.364497 systemd[1]: sshd@22-139.178.70.110:22-36.138.19.180:43486.service: Deactivated successfully. Dec 13 01:47:30.572522 systemd[1]: Started sshd@23-139.178.70.110:22-36.138.19.180:43496.service - OpenSSH per-connection server daemon (36.138.19.180:43496). Dec 13 01:47:31.385916 sshd[1885]: Invalid user ubuntu from 36.138.19.180 port 43496 Dec 13 01:47:31.587147 sshd[1885]: Connection closed by invalid user ubuntu 36.138.19.180 port 43496 [preauth] Dec 13 01:47:31.588562 systemd[1]: sshd@23-139.178.70.110:22-36.138.19.180:43496.service: Deactivated successfully. Dec 13 01:47:31.772273 systemd[1]: Started sshd@24-139.178.70.110:22-36.138.19.180:43498.service - OpenSSH per-connection server daemon (36.138.19.180:43498). 
Dec 13 01:47:32.476556 sshd[1890]: Invalid user ubuntu from 36.138.19.180 port 43498 Dec 13 01:47:32.649963 sshd[1890]: Connection closed by invalid user ubuntu 36.138.19.180 port 43498 [preauth] Dec 13 01:47:32.650614 systemd[1]: sshd@24-139.178.70.110:22-36.138.19.180:43498.service: Deactivated successfully. Dec 13 01:47:32.861151 systemd[1]: Started sshd@25-139.178.70.110:22-36.138.19.180:43506.service - OpenSSH per-connection server daemon (36.138.19.180:43506). Dec 13 01:47:33.684320 sshd[1895]: Invalid user ubuntu from 36.138.19.180 port 43506 Dec 13 01:47:33.888776 sshd[1895]: Connection closed by invalid user ubuntu 36.138.19.180 port 43506 [preauth] Dec 13 01:47:33.889846 systemd[1]: sshd@25-139.178.70.110:22-36.138.19.180:43506.service: Deactivated successfully. Dec 13 01:47:34.098963 systemd[1]: Started sshd@26-139.178.70.110:22-36.138.19.180:57716.service - OpenSSH per-connection server daemon (36.138.19.180:57716). Dec 13 01:47:34.922216 sshd[1900]: Invalid user ubuntu from 36.138.19.180 port 57716 Dec 13 01:47:35.125497 sshd[1900]: Connection closed by invalid user ubuntu 36.138.19.180 port 57716 [preauth] Dec 13 01:47:35.126747 systemd[1]: sshd@26-139.178.70.110:22-36.138.19.180:57716.service: Deactivated successfully. Dec 13 01:47:35.332750 systemd[1]: Started sshd@27-139.178.70.110:22-36.138.19.180:57722.service - OpenSSH per-connection server daemon (36.138.19.180:57722). Dec 13 01:47:35.936163 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:47:35.941374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:47:36.087698 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:47:36.090580 (kubelet)[1915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:47:36.150097 sshd[1905]: Invalid user ubuntu from 36.138.19.180 port 57722 Dec 13 01:47:36.166869 kubelet[1915]: E1213 01:47:36.166824 1915 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:47:36.167943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:47:36.168025 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:47:36.351360 sshd[1905]: Connection closed by invalid user ubuntu 36.138.19.180 port 57722 [preauth] Dec 13 01:47:36.352238 systemd[1]: sshd@27-139.178.70.110:22-36.138.19.180:57722.service: Deactivated successfully. Dec 13 01:47:36.559305 systemd[1]: Started sshd@28-139.178.70.110:22-36.138.19.180:57730.service - OpenSSH per-connection server daemon (36.138.19.180:57730). Dec 13 01:47:37.365295 sshd[1926]: Invalid user ubuntu from 36.138.19.180 port 57730 Dec 13 01:47:37.563885 sshd[1926]: Connection closed by invalid user ubuntu 36.138.19.180 port 57730 [preauth] Dec 13 01:47:37.564660 systemd[1]: sshd@28-139.178.70.110:22-36.138.19.180:57730.service: Deactivated successfully. Dec 13 01:47:37.768432 systemd[1]: Started sshd@29-139.178.70.110:22-36.138.19.180:57744.service - OpenSSH per-connection server daemon (36.138.19.180:57744). Dec 13 01:47:38.570569 sshd[1931]: Invalid user ubuntu from 36.138.19.180 port 57744 Dec 13 01:47:38.768425 sshd[1931]: Connection closed by invalid user ubuntu 36.138.19.180 port 57744 [preauth] Dec 13 01:47:38.769665 systemd[1]: sshd@29-139.178.70.110:22-36.138.19.180:57744.service: Deactivated successfully. 
Dec 13 01:47:38.984524 systemd[1]: Started sshd@30-139.178.70.110:22-36.138.19.180:57746.service - OpenSSH per-connection server daemon (36.138.19.180:57746). Dec 13 01:47:39.813555 sshd[1936]: Invalid user ubuntu from 36.138.19.180 port 57746 Dec 13 01:47:40.023307 sshd[1936]: Connection closed by invalid user ubuntu 36.138.19.180 port 57746 [preauth] Dec 13 01:47:40.024113 systemd[1]: sshd@30-139.178.70.110:22-36.138.19.180:57746.service: Deactivated successfully. Dec 13 01:47:40.223977 systemd[1]: Started sshd@31-139.178.70.110:22-36.138.19.180:57750.service - OpenSSH per-connection server daemon (36.138.19.180:57750). Dec 13 01:47:41.012168 sshd[1941]: Invalid user ubuntu from 36.138.19.180 port 57750 Dec 13 01:47:41.206794 sshd[1941]: Connection closed by invalid user ubuntu 36.138.19.180 port 57750 [preauth] Dec 13 01:47:41.208070 systemd[1]: sshd@31-139.178.70.110:22-36.138.19.180:57750.service: Deactivated successfully. Dec 13 01:47:41.421533 systemd[1]: Started sshd@32-139.178.70.110:22-36.138.19.180:57754.service - OpenSSH per-connection server daemon (36.138.19.180:57754). Dec 13 01:47:42.251372 sshd[1946]: Invalid user ubuntu from 36.138.19.180 port 57754 Dec 13 01:47:42.448350 systemd[1]: Started sshd@33-139.178.70.110:22-139.178.89.65:39664.service - OpenSSH per-connection server daemon (139.178.89.65:39664). Dec 13 01:47:42.457694 sshd[1946]: Connection closed by invalid user ubuntu 36.138.19.180 port 57754 [preauth] Dec 13 01:47:42.458169 systemd[1]: sshd@32-139.178.70.110:22-36.138.19.180:57754.service: Deactivated successfully. Dec 13 01:47:42.478099 sshd[1949]: Accepted publickey for core from 139.178.89.65 port 39664 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:47:42.478806 sshd[1949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:47:42.480985 systemd-logind[1521]: New session 3 of user core. Dec 13 01:47:42.490997 systemd[1]: Started session-3.scope - Session 3 of User core. 
Dec 13 01:47:42.542129 systemd[1]: Started sshd@34-139.178.70.110:22-139.178.89.65:39668.service - OpenSSH per-connection server daemon (139.178.89.65:39668). Dec 13 01:47:42.568592 sshd[1956]: Accepted publickey for core from 139.178.89.65 port 39668 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:47:42.569545 sshd[1956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:47:42.573251 systemd-logind[1521]: New session 4 of user core. Dec 13 01:47:42.578048 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:47:42.629969 systemd[1]: Started sshd@35-139.178.70.110:22-36.138.19.180:57758.service - OpenSSH per-connection server daemon (36.138.19.180:57758). Dec 13 01:47:42.631440 sshd[1956]: pam_unix(sshd:session): session closed for user core Dec 13 01:47:42.634597 systemd[1]: sshd@34-139.178.70.110:22-139.178.89.65:39668.service: Deactivated successfully. Dec 13 01:47:42.636338 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:47:42.638027 systemd-logind[1521]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:47:42.643097 systemd[1]: Started sshd@36-139.178.70.110:22-139.178.89.65:39672.service - OpenSSH per-connection server daemon (139.178.89.65:39672). Dec 13 01:47:42.644005 systemd-logind[1521]: Removed session 4. Dec 13 01:47:42.669176 sshd[1966]: Accepted publickey for core from 139.178.89.65 port 39672 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:47:42.670074 sshd[1966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:47:42.674382 systemd-logind[1521]: New session 5 of user core. Dec 13 01:47:42.681078 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:47:42.729484 sshd[1966]: pam_unix(sshd:session): session closed for user core Dec 13 01:47:42.737687 systemd[1]: sshd@36-139.178.70.110:22-139.178.89.65:39672.service: Deactivated successfully. 
Dec 13 01:47:42.738767 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:47:42.739832 systemd-logind[1521]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:47:42.744121 systemd[1]: Started sshd@37-139.178.70.110:22-139.178.89.65:39674.service - OpenSSH per-connection server daemon (139.178.89.65:39674). Dec 13 01:47:42.745501 systemd-logind[1521]: Removed session 5. Dec 13 01:47:42.770725 sshd[1973]: Accepted publickey for core from 139.178.89.65 port 39674 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:47:42.771567 sshd[1973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:47:42.775091 systemd-logind[1521]: New session 6 of user core. Dec 13 01:47:42.784061 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:47:42.832304 sshd[1973]: pam_unix(sshd:session): session closed for user core Dec 13 01:47:42.840396 systemd[1]: sshd@37-139.178.70.110:22-139.178.89.65:39674.service: Deactivated successfully. Dec 13 01:47:42.841203 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:47:42.841564 systemd-logind[1521]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:47:42.842703 systemd[1]: Started sshd@38-139.178.70.110:22-139.178.89.65:39686.service - OpenSSH per-connection server daemon (139.178.89.65:39686). Dec 13 01:47:42.844280 systemd-logind[1521]: Removed session 6. Dec 13 01:47:42.871475 sshd[1980]: Accepted publickey for core from 139.178.89.65 port 39686 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:47:42.872240 sshd[1980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:47:42.875670 systemd-logind[1521]: New session 7 of user core. Dec 13 01:47:42.885153 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 13 01:47:42.943962 sudo[1983]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:47:42.944181 sudo[1983]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:47:42.953762 sudo[1983]: pam_unix(sudo:session): session closed for user root Dec 13 01:47:42.954964 sshd[1980]: pam_unix(sshd:session): session closed for user core Dec 13 01:47:42.963519 systemd[1]: sshd@38-139.178.70.110:22-139.178.89.65:39686.service: Deactivated successfully. Dec 13 01:47:42.964439 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:47:42.964847 systemd-logind[1521]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:47:42.965915 systemd[1]: Started sshd@39-139.178.70.110:22-139.178.89.65:39700.service - OpenSSH per-connection server daemon (139.178.89.65:39700). Dec 13 01:47:42.967084 systemd-logind[1521]: Removed session 7. Dec 13 01:47:43.001204 sshd[1988]: Accepted publickey for core from 139.178.89.65 port 39700 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:47:43.001992 sshd[1988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:47:43.004489 systemd-logind[1521]: New session 8 of user core. Dec 13 01:47:43.011008 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 13 01:47:43.058886 sudo[1992]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:47:43.059250 sudo[1992]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:47:43.061167 sudo[1992]: pam_unix(sudo:session): session closed for user root Dec 13 01:47:43.063875 sudo[1991]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:47:43.064037 sudo[1991]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:47:43.077177 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:47:43.078099 auditctl[1995]: No rules Dec 13 01:47:43.078258 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:47:43.078481 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:47:43.079964 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:47:43.094580 augenrules[2013]: No rules Dec 13 01:47:43.095685 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:47:43.096358 sudo[1991]: pam_unix(sudo:session): session closed for user root Dec 13 01:47:43.097161 sshd[1988]: pam_unix(sshd:session): session closed for user core Dec 13 01:47:43.101246 systemd[1]: sshd@39-139.178.70.110:22-139.178.89.65:39700.service: Deactivated successfully. Dec 13 01:47:43.102092 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:47:43.102876 systemd-logind[1521]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:47:43.103650 systemd[1]: Started sshd@40-139.178.70.110:22-139.178.89.65:39712.service - OpenSSH per-connection server daemon (139.178.89.65:39712). Dec 13 01:47:43.105114 systemd-logind[1521]: Removed session 8. 
Dec 13 01:47:43.132177 sshd[2021]: Accepted publickey for core from 139.178.89.65 port 39712 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:47:43.132907 sshd[2021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:47:43.135066 systemd-logind[1521]: New session 9 of user core. Dec 13 01:47:43.143031 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:47:43.189701 sudo[2024]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:47:43.190044 sudo[2024]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:47:43.322169 sshd[1961]: Invalid user ubuntu from 36.138.19.180 port 57758 Dec 13 01:47:43.483065 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:47:43.483134 (dockerd)[2041]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:47:43.492850 sshd[1961]: Connection closed by invalid user ubuntu 36.138.19.180 port 57758 [preauth] Dec 13 01:47:43.493352 systemd[1]: sshd@35-139.178.70.110:22-36.138.19.180:57758.service: Deactivated successfully. Dec 13 01:47:43.664780 systemd[1]: Started sshd@41-139.178.70.110:22-36.138.19.180:36440.service - OpenSSH per-connection server daemon (36.138.19.180:36440). Dec 13 01:47:43.746314 dockerd[2041]: time="2024-12-13T01:47:43.746064138Z" level=info msg="Starting up" Dec 13 01:47:43.852778 dockerd[2041]: time="2024-12-13T01:47:43.852750838Z" level=info msg="Loading containers: start." Dec 13 01:47:43.924940 kernel: Initializing XFRM netlink socket Dec 13 01:47:43.974560 systemd-networkd[1244]: docker0: Link UP Dec 13 01:47:43.988965 dockerd[2041]: time="2024-12-13T01:47:43.988933194Z" level=info msg="Loading containers: done." 
Dec 13 01:47:44.000148 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2455407494-merged.mount: Deactivated successfully. Dec 13 01:47:44.001414 dockerd[2041]: time="2024-12-13T01:47:44.001116520Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:47:44.001414 dockerd[2041]: time="2024-12-13T01:47:44.001200413Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:47:44.001414 dockerd[2041]: time="2024-12-13T01:47:44.001272273Z" level=info msg="Daemon has completed initialization" Dec 13 01:47:44.017476 dockerd[2041]: time="2024-12-13T01:47:44.017441184Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:47:44.017578 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:47:44.346604 sshd[2049]: Invalid user ubuntu from 36.138.19.180 port 36440 Dec 13 01:47:44.513204 sshd[2049]: Connection closed by invalid user ubuntu 36.138.19.180 port 36440 [preauth] Dec 13 01:47:44.514850 systemd[1]: sshd@41-139.178.70.110:22-36.138.19.180:36440.service: Deactivated successfully. Dec 13 01:47:44.734981 systemd[1]: Started sshd@42-139.178.70.110:22-36.138.19.180:36444.service - OpenSSH per-connection server daemon (36.138.19.180:36444). Dec 13 01:47:44.782443 containerd[1540]: time="2024-12-13T01:47:44.782413931Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:47:45.349542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2803364855.mount: Deactivated successfully. 
Dec 13 01:47:45.560894 sshd[2191]: Invalid user ubuntu from 36.138.19.180 port 36444
Dec 13 01:47:45.764870 sshd[2191]: Connection closed by invalid user ubuntu 36.138.19.180 port 36444 [preauth]
Dec 13 01:47:45.765524 systemd[1]: sshd@42-139.178.70.110:22-36.138.19.180:36444.service: Deactivated successfully.
Dec 13 01:47:45.940572 systemd[1]: Started sshd@43-139.178.70.110:22-36.138.19.180:36448.service - OpenSSH per-connection server daemon (36.138.19.180:36448).
Dec 13 01:47:46.186100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 01:47:46.191092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:47:46.249391 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:47:46.259182 (kubelet)[2262]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:47:46.288778 kubelet[2262]: E1213 01:47:46.288588 2262 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:47:46.289747 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:47:46.289823 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:47:46.641526 sshd[2249]: Invalid user ubuntu from 36.138.19.180 port 36448
Dec 13 01:47:46.643151 containerd[1540]: time="2024-12-13T01:47:46.643120423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:46.652880 containerd[1540]: time="2024-12-13T01:47:46.652841642Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254"
Dec 13 01:47:46.665215 containerd[1540]: time="2024-12-13T01:47:46.664959333Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:46.680492 containerd[1540]: time="2024-12-13T01:47:46.680463507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:46.681104 containerd[1540]: time="2024-12-13T01:47:46.681087669Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 1.89864854s"
Dec 13 01:47:46.681159 containerd[1540]: time="2024-12-13T01:47:46.681151209Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 01:47:46.693332 containerd[1540]: time="2024-12-13T01:47:46.693309935Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 01:47:46.812559 sshd[2249]: Connection closed by invalid user ubuntu 36.138.19.180 port 36448 [preauth]
Dec 13 01:47:46.814140 systemd[1]: sshd@43-139.178.70.110:22-36.138.19.180:36448.service: Deactivated successfully.
Dec 13 01:47:47.027611 systemd[1]: Started sshd@44-139.178.70.110:22-36.138.19.180:36458.service - OpenSSH per-connection server daemon (36.138.19.180:36458).
Dec 13 01:47:47.312383 update_engine[1522]: I20241213 01:47:47.312258 1522 update_attempter.cc:509] Updating boot flags...
Dec 13 01:47:47.381018 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2290)
Dec 13 01:47:47.856664 sshd[2279]: Invalid user ubuntu from 36.138.19.180 port 36458
Dec 13 01:47:48.061375 sshd[2279]: Connection closed by invalid user ubuntu 36.138.19.180 port 36458 [preauth]
Dec 13 01:47:48.062709 systemd[1]: sshd@44-139.178.70.110:22-36.138.19.180:36458.service: Deactivated successfully.
Dec 13 01:47:48.241753 systemd[1]: Started sshd@45-139.178.70.110:22-36.138.19.180:36462.service - OpenSSH per-connection server daemon (36.138.19.180:36462).
Dec 13 01:47:48.564000 containerd[1540]: time="2024-12-13T01:47:48.563310183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:48.569834 containerd[1540]: time="2024-12-13T01:47:48.569801098Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732"
Dec 13 01:47:48.575723 containerd[1540]: time="2024-12-13T01:47:48.575673870Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:48.581510 containerd[1540]: time="2024-12-13T01:47:48.581471801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:48.582270 containerd[1540]: time="2024-12-13T01:47:48.582191862Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.888765403s"
Dec 13 01:47:48.582270 containerd[1540]: time="2024-12-13T01:47:48.582214437Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 01:47:48.599469 containerd[1540]: time="2024-12-13T01:47:48.599410368Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 01:47:48.948442 sshd[2303]: Invalid user ubuntu from 36.138.19.180 port 36462
Dec 13 01:47:49.120594 sshd[2303]: Connection closed by invalid user ubuntu 36.138.19.180 port 36462 [preauth]
Dec 13 01:47:49.122230 systemd[1]: sshd@45-139.178.70.110:22-36.138.19.180:36462.service: Deactivated successfully.
Dec 13 01:47:49.303534 systemd[1]: Started sshd@46-139.178.70.110:22-36.138.19.180:36474.service - OpenSSH per-connection server daemon (36.138.19.180:36474).
Dec 13 01:47:49.589619 containerd[1540]: time="2024-12-13T01:47:49.589496589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:49.591206 containerd[1540]: time="2024-12-13T01:47:49.591169648Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822"
Dec 13 01:47:49.594533 containerd[1540]: time="2024-12-13T01:47:49.594495003Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:49.601567 containerd[1540]: time="2024-12-13T01:47:49.601533457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:49.602199 containerd[1540]: time="2024-12-13T01:47:49.601907197Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.002473945s"
Dec 13 01:47:49.602199 containerd[1540]: time="2024-12-13T01:47:49.601932957Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 01:47:49.616215 containerd[1540]: time="2024-12-13T01:47:49.616105484Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 01:47:50.014758 sshd[2316]: Invalid user ubuntu from 36.138.19.180 port 36474
Dec 13 01:47:50.189053 sshd[2316]: Connection closed by invalid user ubuntu 36.138.19.180 port 36474 [preauth]
Dec 13 01:47:50.190259 systemd[1]: sshd@46-139.178.70.110:22-36.138.19.180:36474.service: Deactivated successfully.
Dec 13 01:47:50.369445 systemd[1]: Started sshd@47-139.178.70.110:22-36.138.19.180:36486.service - OpenSSH per-connection server daemon (36.138.19.180:36486).
Dec 13 01:47:51.072426 sshd[2326]: Invalid user ubuntu from 36.138.19.180 port 36486
Dec 13 01:47:51.245633 sshd[2326]: Connection closed by invalid user ubuntu 36.138.19.180 port 36486 [preauth]
Dec 13 01:47:51.247195 systemd[1]: sshd@47-139.178.70.110:22-36.138.19.180:36486.service: Deactivated successfully.
Dec 13 01:47:51.459456 systemd[1]: Started sshd@48-139.178.70.110:22-36.138.19.180:36502.service - OpenSSH per-connection server daemon (36.138.19.180:36502).
Dec 13 01:47:51.856311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2246480895.mount: Deactivated successfully.
Dec 13 01:47:52.286976 sshd[2331]: Invalid user ubuntu from 36.138.19.180 port 36502
Dec 13 01:47:52.491210 sshd[2331]: Connection closed by invalid user ubuntu 36.138.19.180 port 36502 [preauth]
Dec 13 01:47:52.492891 systemd[1]: sshd@48-139.178.70.110:22-36.138.19.180:36502.service: Deactivated successfully.
Dec 13 01:47:52.588858 containerd[1540]: time="2024-12-13T01:47:52.588354187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:52.593671 containerd[1540]: time="2024-12-13T01:47:52.593641391Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958"
Dec 13 01:47:52.600007 containerd[1540]: time="2024-12-13T01:47:52.599974070Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:52.607133 containerd[1540]: time="2024-12-13T01:47:52.607091415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:52.607843 containerd[1540]: time="2024-12-13T01:47:52.607483586Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.991353132s"
Dec 13 01:47:52.607843 containerd[1540]: time="2024-12-13T01:47:52.607515297Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 01:47:52.623047 containerd[1540]: time="2024-12-13T01:47:52.622990693Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 01:47:52.701172 systemd[1]: Started sshd@49-139.178.70.110:22-36.138.19.180:36516.service - OpenSSH per-connection server daemon (36.138.19.180:36516).
Dec 13 01:47:53.395065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3141109554.mount: Deactivated successfully.
Dec 13 01:47:53.542140 sshd[2348]: Invalid user ubuntu from 36.138.19.180 port 36516
Dec 13 01:47:53.744856 sshd[2348]: Connection closed by invalid user ubuntu 36.138.19.180 port 36516 [preauth]
Dec 13 01:47:53.745732 systemd[1]: sshd@49-139.178.70.110:22-36.138.19.180:36516.service: Deactivated successfully.
Dec 13 01:47:53.956356 systemd[1]: Started sshd@50-139.178.70.110:22-36.138.19.180:43446.service - OpenSSH per-connection server daemon (36.138.19.180:43446).
Dec 13 01:47:54.454706 containerd[1540]: time="2024-12-13T01:47:54.453911676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:54.460985 containerd[1540]: time="2024-12-13T01:47:54.460939290Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Dec 13 01:47:54.479740 containerd[1540]: time="2024-12-13T01:47:54.479693696Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:54.498641 containerd[1540]: time="2024-12-13T01:47:54.498599394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:54.499304 containerd[1540]: time="2024-12-13T01:47:54.499195303Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.876178173s"
Dec 13 01:47:54.499304 containerd[1540]: time="2024-12-13T01:47:54.499217920Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 01:47:54.512889 containerd[1540]: time="2024-12-13T01:47:54.512860004Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 01:47:54.844621 sshd[2393]: Invalid user ubuntu from 36.138.19.180 port 43446
Dec 13 01:47:55.049969 sshd[2393]: Connection closed by invalid user ubuntu 36.138.19.180 port 43446 [preauth]
Dec 13 01:47:55.051001 systemd[1]: sshd@50-139.178.70.110:22-36.138.19.180:43446.service: Deactivated successfully.
Dec 13 01:47:55.208654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3841508236.mount: Deactivated successfully.
Dec 13 01:47:55.210890 containerd[1540]: time="2024-12-13T01:47:55.210444633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:55.211129 containerd[1540]: time="2024-12-13T01:47:55.211109961Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Dec 13 01:47:55.211531 containerd[1540]: time="2024-12-13T01:47:55.211520761Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:55.212578 containerd[1540]: time="2024-12-13T01:47:55.212558636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:55.213109 containerd[1540]: time="2024-12-13T01:47:55.213096513Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 700.216962ms"
Dec 13 01:47:55.213193 containerd[1540]: time="2024-12-13T01:47:55.213183950Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 01:47:55.225726 containerd[1540]: time="2024-12-13T01:47:55.225701072Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 01:47:55.245105 systemd[1]: Started sshd@51-139.178.70.110:22-36.138.19.180:43456.service - OpenSSH per-connection server daemon (36.138.19.180:43456).
Dec 13 01:47:56.026456 sshd[2415]: Invalid user ubuntu from 36.138.19.180 port 43456
Dec 13 01:47:56.083538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1885316066.mount: Deactivated successfully.
Dec 13 01:47:56.218745 sshd[2415]: Connection closed by invalid user ubuntu 36.138.19.180 port 43456 [preauth]
Dec 13 01:47:56.219548 systemd[1]: sshd@51-139.178.70.110:22-36.138.19.180:43456.service: Deactivated successfully.
Dec 13 01:47:56.425644 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 13 01:47:56.427008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:47:56.428187 systemd[1]: Started sshd@52-139.178.70.110:22-36.138.19.180:43472.service - OpenSSH per-connection server daemon (36.138.19.180:43472).
Dec 13 01:47:57.107049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:47:57.109850 (kubelet)[2469]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:47:57.287186 kubelet[2469]: E1213 01:47:57.287147 2469 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:47:57.288383 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:47:57.288462 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:47:57.337373 sshd[2461]: Invalid user ubuntu from 36.138.19.180 port 43472
Dec 13 01:47:57.551811 sshd[2461]: Connection closed by invalid user ubuntu 36.138.19.180 port 43472 [preauth]
Dec 13 01:47:57.554078 systemd[1]: sshd@52-139.178.70.110:22-36.138.19.180:43472.service: Deactivated successfully.
Dec 13 01:47:57.768021 systemd[1]: Started sshd@53-139.178.70.110:22-36.138.19.180:43480.service - OpenSSH per-connection server daemon (36.138.19.180:43480).
Dec 13 01:47:58.351999 containerd[1540]: time="2024-12-13T01:47:58.351351213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:58.351999 containerd[1540]: time="2024-12-13T01:47:58.351681354Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Dec 13 01:47:58.351999 containerd[1540]: time="2024-12-13T01:47:58.351974151Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:58.353783 containerd[1540]: time="2024-12-13T01:47:58.353770126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:58.354456 containerd[1540]: time="2024-12-13T01:47:58.354441196Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.128717865s"
Dec 13 01:47:58.354510 containerd[1540]: time="2024-12-13T01:47:58.354501831Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 01:47:58.598576 sshd[2485]: Invalid user ubuntu from 36.138.19.180 port 43480
Dec 13 01:47:58.803388 sshd[2485]: Connection closed by invalid user ubuntu 36.138.19.180 port 43480 [preauth]
Dec 13 01:47:58.804713 systemd[1]: sshd@53-139.178.70.110:22-36.138.19.180:43480.service: Deactivated successfully.
Dec 13 01:47:59.019037 systemd[1]: Started sshd@54-139.178.70.110:22-36.138.19.180:43486.service - OpenSSH per-connection server daemon (36.138.19.180:43486).
Dec 13 01:47:59.815528 sshd[2547]: Invalid user ubuntu from 36.138.19.180 port 43486
Dec 13 01:48:00.013004 sshd[2547]: Connection closed by invalid user ubuntu 36.138.19.180 port 43486 [preauth]
Dec 13 01:48:00.015011 systemd[1]: sshd@54-139.178.70.110:22-36.138.19.180:43486.service: Deactivated successfully.
Dec 13 01:48:00.230108 systemd[1]: Started sshd@55-139.178.70.110:22-36.138.19.180:43500.service - OpenSSH per-connection server daemon (36.138.19.180:43500).
Dec 13 01:48:00.510479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:48:00.520118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:48:00.533906 systemd[1]: Reloading requested from client PID 2560 ('systemctl') (unit session-9.scope)...
Dec 13 01:48:00.533953 systemd[1]: Reloading...
Dec 13 01:48:00.589631 zram_generator::config[2604]: No configuration found.
Dec 13 01:48:00.648228 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Dec 13 01:48:00.663383 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:48:00.704941 systemd[1]: Reloading finished in 170 ms.
Dec 13 01:48:00.756490 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 01:48:00.756547 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 01:48:00.756718 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:48:00.761117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:48:01.020265 sshd[2552]: Invalid user ubuntu from 36.138.19.180 port 43500
Dec 13 01:48:01.218953 sshd[2552]: Connection closed by invalid user ubuntu 36.138.19.180 port 43500 [preauth]
Dec 13 01:48:01.220344 systemd[1]: sshd@55-139.178.70.110:22-36.138.19.180:43500.service: Deactivated successfully.
Dec 13 01:48:01.283498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:48:01.287785 (kubelet)[2670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:48:01.378050 kubelet[2670]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:48:01.378432 kubelet[2670]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:48:01.378432 kubelet[2670]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:48:01.386851 kubelet[2670]: I1213 01:48:01.386083 2670 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:48:01.432520 systemd[1]: Started sshd@56-139.178.70.110:22-36.138.19.180:43506.service - OpenSSH per-connection server daemon (36.138.19.180:43506).
Dec 13 01:48:01.836948 kubelet[2670]: I1213 01:48:01.836520 2670 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:48:01.836948 kubelet[2670]: I1213 01:48:01.836538 2670 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:48:01.836948 kubelet[2670]: I1213 01:48:01.836662 2670 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:48:01.854542 kubelet[2670]: I1213 01:48:01.854422 2670 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:48:01.854542 kubelet[2670]: E1213 01:48:01.854528 2670 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.110:6443: connect: connection refused
Dec 13 01:48:01.863238 kubelet[2670]: I1213 01:48:01.863225 2670 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:48:01.866233 kubelet[2670]: I1213 01:48:01.866219 2670 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:48:01.867261 kubelet[2670]: I1213 01:48:01.867247 2670 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:48:01.868273 kubelet[2670]: I1213 01:48:01.868079 2670 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:48:01.868273 kubelet[2670]: I1213 01:48:01.868093 2670 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:48:01.868273 kubelet[2670]: I1213 01:48:01.868163 2670 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:48:01.869538 kubelet[2670]: I1213 01:48:01.869528 2670 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:48:01.869967 kubelet[2670]: I1213 01:48:01.869960 2670 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:48:01.870071 kubelet[2670]: W1213 01:48:01.869917 2670 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Dec 13 01:48:01.870131 kubelet[2670]: E1213 01:48:01.870119 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Dec 13 01:48:01.871563 kubelet[2670]: I1213 01:48:01.871524 2670 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:48:01.871563 kubelet[2670]: I1213 01:48:01.871538 2670 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:48:01.875973 kubelet[2670]: W1213 01:48:01.875917 2670 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://139.178.70.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Dec 13 01:48:01.875973 kubelet[2670]: E1213 01:48:01.875946 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Dec 13 01:48:01.876139 kubelet[2670]: I1213 01:48:01.876074 2670 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:48:01.880535 kubelet[2670]: I1213 01:48:01.880444 2670 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:48:01.882227 kubelet[2670]: W1213 01:48:01.882124 2670 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:48:01.882595 kubelet[2670]: I1213 01:48:01.882579 2670 server.go:1256] "Started kubelet"
Dec 13 01:48:01.887173 kubelet[2670]: I1213 01:48:01.887074 2670 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:48:01.887253 kubelet[2670]: I1213 01:48:01.887242 2670 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:48:01.887332 kubelet[2670]: I1213 01:48:01.887322 2670 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:48:01.888059 kubelet[2670]: I1213 01:48:01.887838 2670 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 01:48:01.889028 kubelet[2670]: I1213 01:48:01.889018 2670 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:48:01.890907 kubelet[2670]: E1213 01:48:01.890571 2670 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.110:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.110:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810995d25c8c9de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:48:01.882565086 +0000 UTC m=+0.592411976,LastTimestamp:2024-12-13 01:48:01.882565086 +0000 UTC m=+0.592411976,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 13 01:48:01.894101 kubelet[2670]: E1213 01:48:01.894032 2670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:48:01.894101 kubelet[2670]: I1213 01:48:01.894058 2670 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:48:01.896705 kubelet[2670]: I1213 01:48:01.896690 2670 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 01:48:01.896741 kubelet[2670]: I1213 01:48:01.896729 2670 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 01:48:01.897050 kubelet[2670]: W1213 01:48:01.896912 2670 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Dec 13 01:48:01.897050 kubelet[2670]: E1213 01:48:01.896961 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Dec 13 01:48:01.897050 kubelet[2670]: E1213 01:48:01.897011 2670 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="200ms"
Dec 13 01:48:01.897453 kubelet[2670]: I1213 01:48:01.897369 2670 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:48:01.897453 kubelet[2670]: I1213 01:48:01.897407 2670 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:48:01.897989 kubelet[2670]: E1213 01:48:01.897631 2670 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:48:01.899227 kubelet[2670]: I1213 01:48:01.899210 2670 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:48:01.904879 kubelet[2670]: I1213 01:48:01.904869 2670 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:48:01.905527 kubelet[2670]: I1213 01:48:01.905520 2670 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:48:01.905571 kubelet[2670]: I1213 01:48:01.905566 2670 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:48:01.905612 kubelet[2670]: I1213 01:48:01.905607 2670 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:48:01.905667 kubelet[2670]: E1213 01:48:01.905661 2670 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:48:01.909366 kubelet[2670]: W1213 01:48:01.909341 2670 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Dec 13 01:48:01.909413 kubelet[2670]: E1213 01:48:01.909409 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Dec 13 01:48:01.924757 kubelet[2670]: I1213 01:48:01.924747 2670 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:48:01.924856 kubelet[2670]: I1213 01:48:01.924850 2670 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:48:01.924910 kubelet[2670]: I1213 01:48:01.924905 2670 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:48:01.925864 kubelet[2670]: I1213 01:48:01.925855 2670 policy_none.go:49] "None policy: Start"
Dec 13 01:48:01.926289 kubelet[2670]: I1213 01:48:01.926252 2670 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:48:01.926289 kubelet[2670]: I1213 01:48:01.926264 2670 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:48:01.930138 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 01:48:01.941858 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 01:48:01.944617 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 01:48:01.956132 kubelet[2670]: I1213 01:48:01.955554 2670 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:48:01.956132 kubelet[2670]: I1213 01:48:01.955741 2670 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:48:01.957061 kubelet[2670]: E1213 01:48:01.957050 2670 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Dec 13 01:48:01.998331 kubelet[2670]: I1213 01:48:01.998315 2670 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:48:01.998711 kubelet[2670]: E1213 01:48:01.998689 2670 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost"
Dec 13 01:48:02.006900 kubelet[2670]: I1213 01:48:02.006764 2670 topology_manager.go:215]
"Topology Admit Handler" podUID="a2d37267d4bfef3cf096e81fa001e611" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:48:02.008716 kubelet[2670]: I1213 01:48:02.008667 2670 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:48:02.012951 kubelet[2670]: I1213 01:48:02.012552 2670 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:48:02.018112 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Dec 13 01:48:02.037978 systemd[1]: Created slice kubepods-burstable-poda2d37267d4bfef3cf096e81fa001e611.slice - libcontainer container kubepods-burstable-poda2d37267d4bfef3cf096e81fa001e611.slice. Dec 13 01:48:02.041995 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. 
Dec 13 01:48:02.097918 kubelet[2670]: I1213 01:48:02.097687 2670 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:48:02.097918 kubelet[2670]: I1213 01:48:02.097716 2670 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:48:02.097918 kubelet[2670]: I1213 01:48:02.097734 2670 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:48:02.097918 kubelet[2670]: I1213 01:48:02.097748 2670 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a2d37267d4bfef3cf096e81fa001e611-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a2d37267d4bfef3cf096e81fa001e611\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:48:02.097918 kubelet[2670]: I1213 01:48:02.097762 2670 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 13 01:48:02.098098 kubelet[2670]: I1213 01:48:02.097776 2670 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:48:02.098098 kubelet[2670]: I1213 01:48:02.097792 2670 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:48:02.098098 kubelet[2670]: I1213 01:48:02.097806 2670 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a2d37267d4bfef3cf096e81fa001e611-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a2d37267d4bfef3cf096e81fa001e611\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:48:02.098098 kubelet[2670]: I1213 01:48:02.097821 2670 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a2d37267d4bfef3cf096e81fa001e611-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a2d37267d4bfef3cf096e81fa001e611\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:48:02.098438 kubelet[2670]: E1213 01:48:02.098412 2670 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="400ms" Dec 13 01:48:02.189870 kubelet[2670]: 
E1213 01:48:02.189851 2670 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.110:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.110:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810995d25c8c9de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:48:01.882565086 +0000 UTC m=+0.592411976,LastTimestamp:2024-12-13 01:48:01.882565086 +0000 UTC m=+0.592411976,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:48:02.200195 kubelet[2670]: I1213 01:48:02.200146 2670 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:48:02.200359 kubelet[2670]: E1213 01:48:02.200341 2670 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Dec 13 01:48:02.279082 sshd[2678]: Invalid user ubuntu from 36.138.19.180 port 43506 Dec 13 01:48:02.336456 containerd[1540]: time="2024-12-13T01:48:02.336419124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 01:48:02.341733 containerd[1540]: time="2024-12-13T01:48:02.341676799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a2d37267d4bfef3cf096e81fa001e611,Namespace:kube-system,Attempt:0,}" Dec 13 01:48:02.343775 containerd[1540]: time="2024-12-13T01:48:02.343744501Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 01:48:02.482696 sshd[2678]: Connection closed by invalid user ubuntu 36.138.19.180 port 43506 [preauth] Dec 13 01:48:02.484315 systemd[1]: sshd@56-139.178.70.110:22-36.138.19.180:43506.service: Deactivated successfully. Dec 13 01:48:02.499160 kubelet[2670]: E1213 01:48:02.499143 2670 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="800ms" Dec 13 01:48:02.602001 kubelet[2670]: I1213 01:48:02.601976 2670 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:48:02.602253 kubelet[2670]: E1213 01:48:02.602236 2670 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Dec 13 01:48:02.693080 systemd[1]: Started sshd@57-139.178.70.110:22-36.138.19.180:43512.service - OpenSSH per-connection server daemon (36.138.19.180:43512). 
Dec 13 01:48:02.777464 kubelet[2670]: W1213 01:48:02.777363 2670 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Dec 13 01:48:02.777464 kubelet[2670]: E1213 01:48:02.777408 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Dec 13 01:48:02.819306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount372118974.mount: Deactivated successfully. Dec 13 01:48:02.821279 containerd[1540]: time="2024-12-13T01:48:02.821194649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:48:02.822084 containerd[1540]: time="2024-12-13T01:48:02.821979447Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:48:02.822742 containerd[1540]: time="2024-12-13T01:48:02.822714517Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:48:02.822783 containerd[1540]: time="2024-12-13T01:48:02.822740653Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:48:02.824043 containerd[1540]: time="2024-12-13T01:48:02.823361023Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:48:02.824043 containerd[1540]: time="2024-12-13T01:48:02.823401785Z" level=info msg="ImageCreate event 
name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:48:02.825696 containerd[1540]: time="2024-12-13T01:48:02.825671716Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:48:02.827430 containerd[1540]: time="2024-12-13T01:48:02.827408051Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 483.625182ms" Dec 13 01:48:02.827679 containerd[1540]: time="2024-12-13T01:48:02.827642031Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 491.162527ms" Dec 13 01:48:02.831343 containerd[1540]: time="2024-12-13T01:48:02.831211477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:48:02.832314 containerd[1540]: time="2024-12-13T01:48:02.832254797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 490.530614ms" Dec 13 01:48:02.929044 containerd[1540]: time="2024-12-13T01:48:02.928999002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:02.929336 containerd[1540]: time="2024-12-13T01:48:02.929034343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:02.929336 containerd[1540]: time="2024-12-13T01:48:02.929044895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:02.929336 containerd[1540]: time="2024-12-13T01:48:02.929088902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:02.934915 containerd[1540]: time="2024-12-13T01:48:02.934758141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:02.934915 containerd[1540]: time="2024-12-13T01:48:02.934801459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:02.934915 containerd[1540]: time="2024-12-13T01:48:02.934813030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:02.935665 containerd[1540]: time="2024-12-13T01:48:02.935214525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:02.935665 containerd[1540]: time="2024-12-13T01:48:02.935237223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:02.935665 containerd[1540]: time="2024-12-13T01:48:02.935243985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:02.935665 containerd[1540]: time="2024-12-13T01:48:02.935283154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:02.935776 containerd[1540]: time="2024-12-13T01:48:02.935750555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:02.951031 systemd[1]: Started cri-containerd-cdb8cd67fa1d338059dc7614b1bfa87f25c4e2e3fb224aeaecec6bbdaed246a9.scope - libcontainer container cdb8cd67fa1d338059dc7614b1bfa87f25c4e2e3fb224aeaecec6bbdaed246a9. Dec 13 01:48:02.953965 systemd[1]: Started cri-containerd-79356bd24daeaca3cce9d5a4efdc256f517eb39f4589664fee194f6891d9cda2.scope - libcontainer container 79356bd24daeaca3cce9d5a4efdc256f517eb39f4589664fee194f6891d9cda2. Dec 13 01:48:02.955238 systemd[1]: Started cri-containerd-aa83eae43d3aff6e6840fbb9e0377360293070b6b0a1316f8cfc52ed5a7d765e.scope - libcontainer container aa83eae43d3aff6e6840fbb9e0377360293070b6b0a1316f8cfc52ed5a7d765e. 
Dec 13 01:48:02.992803 containerd[1540]: time="2024-12-13T01:48:02.992658988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"79356bd24daeaca3cce9d5a4efdc256f517eb39f4589664fee194f6891d9cda2\"" Dec 13 01:48:02.999221 containerd[1540]: time="2024-12-13T01:48:02.999154791Z" level=info msg="CreateContainer within sandbox \"79356bd24daeaca3cce9d5a4efdc256f517eb39f4589664fee194f6891d9cda2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:48:02.999529 kubelet[2670]: W1213 01:48:02.999406 2670 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Dec 13 01:48:02.999529 kubelet[2670]: E1213 01:48:02.999439 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Dec 13 01:48:03.005996 containerd[1540]: time="2024-12-13T01:48:03.005976590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a2d37267d4bfef3cf096e81fa001e611,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdb8cd67fa1d338059dc7614b1bfa87f25c4e2e3fb224aeaecec6bbdaed246a9\"" Dec 13 01:48:03.006537 containerd[1540]: time="2024-12-13T01:48:03.006524600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa83eae43d3aff6e6840fbb9e0377360293070b6b0a1316f8cfc52ed5a7d765e\"" Dec 13 01:48:03.010653 containerd[1540]: time="2024-12-13T01:48:03.010594331Z" level=info 
msg="CreateContainer within sandbox \"cdb8cd67fa1d338059dc7614b1bfa87f25c4e2e3fb224aeaecec6bbdaed246a9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:48:03.012667 containerd[1540]: time="2024-12-13T01:48:03.012630785Z" level=info msg="CreateContainer within sandbox \"aa83eae43d3aff6e6840fbb9e0377360293070b6b0a1316f8cfc52ed5a7d765e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:48:03.014665 containerd[1540]: time="2024-12-13T01:48:03.014637166Z" level=info msg="CreateContainer within sandbox \"79356bd24daeaca3cce9d5a4efdc256f517eb39f4589664fee194f6891d9cda2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"eecaf6a8c64e7982e9b13845c572a411186f727678e0dc41f4c1a66a04afb350\"" Dec 13 01:48:03.015210 containerd[1540]: time="2024-12-13T01:48:03.015198991Z" level=info msg="StartContainer for \"eecaf6a8c64e7982e9b13845c572a411186f727678e0dc41f4c1a66a04afb350\"" Dec 13 01:48:03.020396 containerd[1540]: time="2024-12-13T01:48:03.020377803Z" level=info msg="CreateContainer within sandbox \"aa83eae43d3aff6e6840fbb9e0377360293070b6b0a1316f8cfc52ed5a7d765e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"083084918415bfaeadae0f7fc846ed0a503d885894e9870969b37b59e82b4be7\"" Dec 13 01:48:03.020590 containerd[1540]: time="2024-12-13T01:48:03.020577839Z" level=info msg="StartContainer for \"083084918415bfaeadae0f7fc846ed0a503d885894e9870969b37b59e82b4be7\"" Dec 13 01:48:03.021760 containerd[1540]: time="2024-12-13T01:48:03.021743466Z" level=info msg="CreateContainer within sandbox \"cdb8cd67fa1d338059dc7614b1bfa87f25c4e2e3fb224aeaecec6bbdaed246a9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a0a6b3283a694cd8cf8ea5c8c10e801808f9e035c9082bae283023757eed2a38\"" Dec 13 01:48:03.022035 containerd[1540]: time="2024-12-13T01:48:03.022019891Z" level=info msg="StartContainer for 
\"a0a6b3283a694cd8cf8ea5c8c10e801808f9e035c9082bae283023757eed2a38\"" Dec 13 01:48:03.040079 systemd[1]: Started cri-containerd-eecaf6a8c64e7982e9b13845c572a411186f727678e0dc41f4c1a66a04afb350.scope - libcontainer container eecaf6a8c64e7982e9b13845c572a411186f727678e0dc41f4c1a66a04afb350. Dec 13 01:48:03.044198 systemd[1]: Started cri-containerd-a0a6b3283a694cd8cf8ea5c8c10e801808f9e035c9082bae283023757eed2a38.scope - libcontainer container a0a6b3283a694cd8cf8ea5c8c10e801808f9e035c9082bae283023757eed2a38. Dec 13 01:48:03.046860 systemd[1]: Started cri-containerd-083084918415bfaeadae0f7fc846ed0a503d885894e9870969b37b59e82b4be7.scope - libcontainer container 083084918415bfaeadae0f7fc846ed0a503d885894e9870969b37b59e82b4be7. Dec 13 01:48:03.082998 containerd[1540]: time="2024-12-13T01:48:03.082896697Z" level=info msg="StartContainer for \"eecaf6a8c64e7982e9b13845c572a411186f727678e0dc41f4c1a66a04afb350\" returns successfully" Dec 13 01:48:03.087180 containerd[1540]: time="2024-12-13T01:48:03.086967112Z" level=info msg="StartContainer for \"a0a6b3283a694cd8cf8ea5c8c10e801808f9e035c9082bae283023757eed2a38\" returns successfully" Dec 13 01:48:03.104194 containerd[1540]: time="2024-12-13T01:48:03.104163855Z" level=info msg="StartContainer for \"083084918415bfaeadae0f7fc846ed0a503d885894e9870969b37b59e82b4be7\" returns successfully" Dec 13 01:48:03.122865 kubelet[2670]: W1213 01:48:03.122760 2670 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://139.178.70.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Dec 13 01:48:03.122865 kubelet[2670]: E1213 01:48:03.122801 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Dec 13 01:48:03.229596 
kubelet[2670]: W1213 01:48:03.229558 2670 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Dec 13 01:48:03.229596 kubelet[2670]: E1213 01:48:03.229597 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Dec 13 01:48:03.300264 kubelet[2670]: E1213 01:48:03.300204 2670 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="1.6s" Dec 13 01:48:03.403705 kubelet[2670]: I1213 01:48:03.403686 2670 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:48:03.403879 kubelet[2670]: E1213 01:48:03.403870 2670 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Dec 13 01:48:03.495801 sshd[2705]: Invalid user ubuntu from 36.138.19.180 port 43512 Dec 13 01:48:03.696836 sshd[2705]: Connection closed by invalid user ubuntu 36.138.19.180 port 43512 [preauth] Dec 13 01:48:03.697168 systemd[1]: sshd@57-139.178.70.110:22-36.138.19.180:43512.service: Deactivated successfully. Dec 13 01:48:03.901693 systemd[1]: Started sshd@58-139.178.70.110:22-36.138.19.180:40842.service - OpenSSH per-connection server daemon (36.138.19.180:40842). 
Dec 13 01:48:04.687333 kubelet[2670]: E1213 01:48:04.687309 2670 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 01:48:04.724314 sshd[2950]: Invalid user ubuntu from 36.138.19.180 port 40842 Dec 13 01:48:04.877667 kubelet[2670]: I1213 01:48:04.877631 2670 apiserver.go:52] "Watching apiserver" Dec 13 01:48:04.897451 kubelet[2670]: I1213 01:48:04.897413 2670 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:48:04.902697 kubelet[2670]: E1213 01:48:04.902677 2670 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 01:48:04.925436 sshd[2950]: Connection closed by invalid user ubuntu 36.138.19.180 port 40842 [preauth] Dec 13 01:48:04.926772 systemd[1]: sshd@58-139.178.70.110:22-36.138.19.180:40842.service: Deactivated successfully. Dec 13 01:48:05.005322 kubelet[2670]: I1213 01:48:05.005255 2670 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:48:05.012206 kubelet[2670]: I1213 01:48:05.012187 2670 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:48:05.140095 systemd[1]: Started sshd@59-139.178.70.110:22-36.138.19.180:40852.service - OpenSSH per-connection server daemon (36.138.19.180:40852). Dec 13 01:48:05.961808 sshd[2955]: Invalid user ubuntu from 36.138.19.180 port 40852 Dec 13 01:48:06.164349 sshd[2955]: Connection closed by invalid user ubuntu 36.138.19.180 port 40852 [preauth] Dec 13 01:48:06.165690 systemd[1]: sshd@59-139.178.70.110:22-36.138.19.180:40852.service: Deactivated successfully. Dec 13 01:48:06.334816 systemd[1]: Started sshd@60-139.178.70.110:22-36.138.19.180:40860.service - OpenSSH per-connection server daemon (36.138.19.180:40860). 
Dec 13 01:48:07.019480 sshd[2960]: Invalid user ubuntu from 36.138.19.180 port 40860 Dec 13 01:48:07.185998 sshd[2960]: Connection closed by invalid user ubuntu 36.138.19.180 port 40860 [preauth] Dec 13 01:48:07.186687 systemd[1]: sshd@60-139.178.70.110:22-36.138.19.180:40860.service: Deactivated successfully. Dec 13 01:48:07.368076 systemd[1]: Started sshd@61-139.178.70.110:22-36.138.19.180:40874.service - OpenSSH per-connection server daemon (36.138.19.180:40874). Dec 13 01:48:07.414240 systemd[1]: Reloading requested from client PID 2968 ('systemctl') (unit session-9.scope)... Dec 13 01:48:07.414344 systemd[1]: Reloading... Dec 13 01:48:07.472966 zram_generator::config[3008]: No configuration found. Dec 13 01:48:07.541023 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Dec 13 01:48:07.558550 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:48:07.609538 systemd[1]: Reloading finished in 194 ms. Dec 13 01:48:07.633642 kubelet[2670]: I1213 01:48:07.633570 2670 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:48:07.633579 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:48:07.648176 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:48:07.648312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:48:07.654113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:48:08.074581 sshd[2965]: Invalid user ubuntu from 36.138.19.180 port 40874 Dec 13 01:48:08.076634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:48:08.088412 (kubelet)[3075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:48:08.141851 kubelet[3075]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:48:08.141851 kubelet[3075]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:48:08.141851 kubelet[3075]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:48:08.142146 kubelet[3075]: I1213 01:48:08.141878 3075 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:48:08.144432 kubelet[3075]: I1213 01:48:08.144418 3075 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:48:08.144432 kubelet[3075]: I1213 01:48:08.144431 3075 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:48:08.144575 kubelet[3075]: I1213 01:48:08.144564 3075 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:48:08.145461 kubelet[3075]: I1213 01:48:08.145448 3075 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:48:08.161496 kubelet[3075]: I1213 01:48:08.161475 3075 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:48:08.166421 kubelet[3075]: I1213 01:48:08.166408 3075 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:48:08.166580 kubelet[3075]: I1213 01:48:08.166568 3075 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:48:08.166700 kubelet[3075]: I1213 01:48:08.166686 3075 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:48:08.166758 kubelet[3075]: I1213 01:48:08.166707 3075 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:48:08.166758 kubelet[3075]: I1213 01:48:08.166714 3075 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:48:08.166758 kubelet[3075]: I1213 
01:48:08.166737 3075 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:48:08.166827 kubelet[3075]: I1213 01:48:08.166794 3075 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:48:08.166827 kubelet[3075]: I1213 01:48:08.166807 3075 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:48:08.166827 kubelet[3075]: I1213 01:48:08.166823 3075 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:48:08.167474 kubelet[3075]: I1213 01:48:08.166834 3075 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:48:08.169081 kubelet[3075]: I1213 01:48:08.169070 3075 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:48:08.169170 kubelet[3075]: I1213 01:48:08.169160 3075 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:48:08.169393 kubelet[3075]: I1213 01:48:08.169382 3075 server.go:1256] "Started kubelet" Dec 13 01:48:08.171812 kubelet[3075]: I1213 01:48:08.171148 3075 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:48:08.173581 kubelet[3075]: I1213 01:48:08.173573 3075 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:48:08.174940 kubelet[3075]: I1213 01:48:08.174860 3075 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:48:08.175429 kubelet[3075]: I1213 01:48:08.175417 3075 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:48:08.175520 kubelet[3075]: I1213 01:48:08.175510 3075 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:48:08.181297 kubelet[3075]: I1213 01:48:08.181269 3075 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:48:08.182690 kubelet[3075]: I1213 01:48:08.182302 3075 factory.go:221] Registration of the systemd 
container factory successfully Dec 13 01:48:08.182690 kubelet[3075]: I1213 01:48:08.182359 3075 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:48:08.182981 kubelet[3075]: I1213 01:48:08.182844 3075 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:48:08.182981 kubelet[3075]: I1213 01:48:08.182914 3075 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:48:08.185154 kubelet[3075]: I1213 01:48:08.184370 3075 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:48:08.186767 kubelet[3075]: I1213 01:48:08.186752 3075 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:48:08.187425 kubelet[3075]: I1213 01:48:08.187415 3075 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:48:08.187850 kubelet[3075]: I1213 01:48:08.187483 3075 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:48:08.187850 kubelet[3075]: I1213 01:48:08.187499 3075 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:48:08.187850 kubelet[3075]: E1213 01:48:08.187610 3075 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:48:08.199532 kubelet[3075]: E1213 01:48:08.199517 3075 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:48:08.221021 kubelet[3075]: I1213 01:48:08.221006 3075 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:48:08.221118 kubelet[3075]: I1213 01:48:08.221112 3075 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:48:08.221197 kubelet[3075]: I1213 01:48:08.221191 3075 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:48:08.221347 kubelet[3075]: I1213 01:48:08.221342 3075 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:48:08.221393 kubelet[3075]: I1213 01:48:08.221388 3075 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:48:08.221438 kubelet[3075]: I1213 01:48:08.221433 3075 policy_none.go:49] "None policy: Start" Dec 13 01:48:08.222365 kubelet[3075]: I1213 01:48:08.222357 3075 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:48:08.222417 kubelet[3075]: I1213 01:48:08.222412 3075 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:48:08.222558 kubelet[3075]: I1213 01:48:08.222545 3075 state_mem.go:75] "Updated machine memory state" Dec 13 01:48:08.226407 kubelet[3075]: I1213 01:48:08.226396 3075 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:48:08.226648 kubelet[3075]: I1213 01:48:08.226639 3075 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:48:08.247148 sshd[2965]: Connection closed by invalid user ubuntu 36.138.19.180 port 40874 [preauth] Dec 13 01:48:08.248251 systemd[1]: sshd@61-139.178.70.110:22-36.138.19.180:40874.service: Deactivated successfully. 
Dec 13 01:48:08.287405 kubelet[3075]: I1213 01:48:08.287388 3075 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:48:08.288895 kubelet[3075]: I1213 01:48:08.288168 3075 topology_manager.go:215] "Topology Admit Handler" podUID="a2d37267d4bfef3cf096e81fa001e611" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:48:08.288895 kubelet[3075]: I1213 01:48:08.288254 3075 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:48:08.288895 kubelet[3075]: I1213 01:48:08.288284 3075 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:48:08.297201 kubelet[3075]: E1213 01:48:08.297072 3075 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 13 01:48:08.298517 kubelet[3075]: I1213 01:48:08.298502 3075 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 01:48:08.298625 kubelet[3075]: I1213 01:48:08.298618 3075 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:48:08.383397 kubelet[3075]: I1213 01:48:08.383333 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:48:08.383522 kubelet[3075]: I1213 01:48:08.383513 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a2d37267d4bfef3cf096e81fa001e611-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"a2d37267d4bfef3cf096e81fa001e611\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:48:08.383585 kubelet[3075]: I1213 01:48:08.383578 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:48:08.383656 kubelet[3075]: I1213 01:48:08.383648 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:48:08.383718 kubelet[3075]: I1213 01:48:08.383712 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:48:08.383781 kubelet[3075]: I1213 01:48:08.383772 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:48:08.383844 kubelet[3075]: I1213 01:48:08.383837 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:48:08.383896 kubelet[3075]: I1213 01:48:08.383891 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a2d37267d4bfef3cf096e81fa001e611-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a2d37267d4bfef3cf096e81fa001e611\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:48:08.383967 kubelet[3075]: I1213 01:48:08.383960 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a2d37267d4bfef3cf096e81fa001e611-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a2d37267d4bfef3cf096e81fa001e611\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:48:08.449384 systemd[1]: Started sshd@62-139.178.70.110:22-36.138.19.180:40876.service - OpenSSH per-connection server daemon (36.138.19.180:40876). 
Dec 13 01:48:09.167211 kubelet[3075]: I1213 01:48:09.167180 3075 apiserver.go:52] "Watching apiserver" Dec 13 01:48:09.183857 kubelet[3075]: I1213 01:48:09.183825 3075 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:48:09.223709 kubelet[3075]: E1213 01:48:09.223621 3075 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:48:09.253324 kubelet[3075]: I1213 01:48:09.253301 3075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.253275755 podStartE2EDuration="1.253275755s" podCreationTimestamp="2024-12-13 01:48:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:48:09.253153135 +0000 UTC m=+1.151042370" watchObservedRunningTime="2024-12-13 01:48:09.253275755 +0000 UTC m=+1.151164985" Dec 13 01:48:09.271587 sshd[3116]: Invalid user ubuntu from 36.138.19.180 port 40876 Dec 13 01:48:09.272001 kubelet[3075]: I1213 01:48:09.271666 3075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.271641977 podStartE2EDuration="1.271641977s" podCreationTimestamp="2024-12-13 01:48:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:48:09.270229884 +0000 UTC m=+1.168119118" watchObservedRunningTime="2024-12-13 01:48:09.271641977 +0000 UTC m=+1.169531202" Dec 13 01:48:09.298884 kubelet[3075]: I1213 01:48:09.298780 3075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.298753577 podStartE2EDuration="4.298753577s" podCreationTimestamp="2024-12-13 01:48:05 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:48:09.283853033 +0000 UTC m=+1.181742267" watchObservedRunningTime="2024-12-13 01:48:09.298753577 +0000 UTC m=+1.196642802" Dec 13 01:48:09.474351 sshd[3116]: Connection closed by invalid user ubuntu 36.138.19.180 port 40876 [preauth] Dec 13 01:48:09.475371 systemd[1]: sshd@62-139.178.70.110:22-36.138.19.180:40876.service: Deactivated successfully. Dec 13 01:48:09.692135 systemd[1]: Started sshd@63-139.178.70.110:22-36.138.19.180:40882.service - OpenSSH per-connection server daemon (36.138.19.180:40882). Dec 13 01:48:10.535417 sshd[3122]: Invalid user ubuntu from 36.138.19.180 port 40882 Dec 13 01:48:10.741610 sshd[3122]: Connection closed by invalid user ubuntu 36.138.19.180 port 40882 [preauth] Dec 13 01:48:10.741501 systemd[1]: sshd@63-139.178.70.110:22-36.138.19.180:40882.service: Deactivated successfully. Dec 13 01:48:10.946259 systemd[1]: Started sshd@64-139.178.70.110:22-36.138.19.180:40890.service - OpenSSH per-connection server daemon (36.138.19.180:40890). Dec 13 01:48:11.819951 sshd[3149]: Invalid user ubuntu from 36.138.19.180 port 40890 Dec 13 01:48:12.019670 sshd[3149]: Connection closed by invalid user ubuntu 36.138.19.180 port 40890 [preauth] Dec 13 01:48:12.021079 systemd[1]: sshd@64-139.178.70.110:22-36.138.19.180:40890.service: Deactivated successfully. Dec 13 01:48:12.231973 systemd[1]: Started sshd@65-139.178.70.110:22-36.138.19.180:40902.service - OpenSSH per-connection server daemon (36.138.19.180:40902). Dec 13 01:48:12.984273 sudo[2024]: pam_unix(sudo:session): session closed for user root Dec 13 01:48:12.991727 sshd[2021]: pam_unix(sshd:session): session closed for user core Dec 13 01:48:12.993237 systemd[1]: sshd@40-139.178.70.110:22-139.178.89.65:39712.service: Deactivated successfully. Dec 13 01:48:12.994794 systemd[1]: session-9.scope: Deactivated successfully. 
Dec 13 01:48:12.994934 systemd[1]: session-9.scope: Consumed 3.232s CPU time, 189.4M memory peak, 0B memory swap peak. Dec 13 01:48:12.995656 systemd-logind[1521]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:48:12.996289 systemd-logind[1521]: Removed session 9. Dec 13 01:48:13.054315 sshd[3154]: Invalid user ubuntu from 36.138.19.180 port 40902 Dec 13 01:48:13.256822 sshd[3154]: Connection closed by invalid user ubuntu 36.138.19.180 port 40902 [preauth] Dec 13 01:48:13.258470 systemd[1]: sshd@65-139.178.70.110:22-36.138.19.180:40902.service: Deactivated successfully. Dec 13 01:48:13.462670 systemd[1]: Started sshd@66-139.178.70.110:22-36.138.19.180:40910.service - OpenSSH per-connection server daemon (36.138.19.180:40910). Dec 13 01:48:14.275531 sshd[3176]: Invalid user ubuntu from 36.138.19.180 port 40910 Dec 13 01:48:14.475229 sshd[3176]: Connection closed by invalid user ubuntu 36.138.19.180 port 40910 [preauth] Dec 13 01:48:14.476051 systemd[1]: sshd@66-139.178.70.110:22-36.138.19.180:40910.service: Deactivated successfully. Dec 13 01:48:14.659212 systemd[1]: Started sshd@67-139.178.70.110:22-36.138.19.180:54086.service - OpenSSH per-connection server daemon (36.138.19.180:54086). Dec 13 01:48:15.362443 sshd[3181]: Invalid user ubuntu from 36.138.19.180 port 54086 Dec 13 01:48:15.534705 sshd[3181]: Connection closed by invalid user ubuntu 36.138.19.180 port 54086 [preauth] Dec 13 01:48:15.536297 systemd[1]: sshd@67-139.178.70.110:22-36.138.19.180:54086.service: Deactivated successfully. Dec 13 01:48:15.753144 systemd[1]: Started sshd@68-139.178.70.110:22-36.138.19.180:54092.service - OpenSSH per-connection server daemon (36.138.19.180:54092). 
Dec 13 01:48:16.575298 sshd[3186]: Invalid user ubuntu from 36.138.19.180 port 54092 Dec 13 01:48:16.778237 sshd[3186]: Connection closed by invalid user ubuntu 36.138.19.180 port 54092 [preauth] Dec 13 01:48:16.779200 systemd[1]: sshd@68-139.178.70.110:22-36.138.19.180:54092.service: Deactivated successfully. Dec 13 01:48:16.957536 systemd[1]: Started sshd@69-139.178.70.110:22-36.138.19.180:54098.service - OpenSSH per-connection server daemon (36.138.19.180:54098). Dec 13 01:48:17.647457 sshd[3191]: Invalid user ubuntu from 36.138.19.180 port 54098 Dec 13 01:48:17.816829 sshd[3191]: Connection closed by invalid user ubuntu 36.138.19.180 port 54098 [preauth] Dec 13 01:48:17.817790 systemd[1]: sshd@69-139.178.70.110:22-36.138.19.180:54098.service: Deactivated successfully. Dec 13 01:48:18.027205 systemd[1]: Started sshd@70-139.178.70.110:22-36.138.19.180:54106.service - OpenSSH per-connection server daemon (36.138.19.180:54106). Dec 13 01:48:18.843534 sshd[3196]: Invalid user ubuntu from 36.138.19.180 port 54106 Dec 13 01:48:19.043995 sshd[3196]: Connection closed by invalid user ubuntu 36.138.19.180 port 54106 [preauth] Dec 13 01:48:19.044624 systemd[1]: sshd@70-139.178.70.110:22-36.138.19.180:54106.service: Deactivated successfully. Dec 13 01:48:19.248639 systemd[1]: Started sshd@71-139.178.70.110:22-36.138.19.180:54110.service - OpenSSH per-connection server daemon (36.138.19.180:54110). Dec 13 01:48:19.714504 kubelet[3075]: I1213 01:48:19.714481 3075 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:48:19.714811 containerd[1540]: time="2024-12-13T01:48:19.714729890Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:48:19.715324 kubelet[3075]: I1213 01:48:19.715014 3075 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:48:20.067302 sshd[3201]: Invalid user ubuntu from 36.138.19.180 port 54110 Dec 13 01:48:20.251489 kubelet[3075]: I1213 01:48:20.251457 3075 topology_manager.go:215] "Topology Admit Handler" podUID="f63454b1-f6af-474d-9fba-68a33ba56e16" podNamespace="kube-system" podName="kube-proxy-mzqdl" Dec 13 01:48:20.259187 systemd[1]: Created slice kubepods-besteffort-podf63454b1_f6af_474d_9fba_68a33ba56e16.slice - libcontainer container kubepods-besteffort-podf63454b1_f6af_474d_9fba_68a33ba56e16.slice. Dec 13 01:48:20.267384 sshd[3201]: Connection closed by invalid user ubuntu 36.138.19.180 port 54110 [preauth] Dec 13 01:48:20.268239 systemd[1]: sshd@71-139.178.70.110:22-36.138.19.180:54110.service: Deactivated successfully. Dec 13 01:48:20.362981 kubelet[3075]: I1213 01:48:20.362774 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f63454b1-f6af-474d-9fba-68a33ba56e16-lib-modules\") pod \"kube-proxy-mzqdl\" (UID: \"f63454b1-f6af-474d-9fba-68a33ba56e16\") " pod="kube-system/kube-proxy-mzqdl" Dec 13 01:48:20.362981 kubelet[3075]: I1213 01:48:20.362809 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knbjb\" (UniqueName: \"kubernetes.io/projected/f63454b1-f6af-474d-9fba-68a33ba56e16-kube-api-access-knbjb\") pod \"kube-proxy-mzqdl\" (UID: \"f63454b1-f6af-474d-9fba-68a33ba56e16\") " pod="kube-system/kube-proxy-mzqdl" Dec 13 01:48:20.362981 kubelet[3075]: I1213 01:48:20.362828 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f63454b1-f6af-474d-9fba-68a33ba56e16-kube-proxy\") pod \"kube-proxy-mzqdl\" (UID: 
\"f63454b1-f6af-474d-9fba-68a33ba56e16\") " pod="kube-system/kube-proxy-mzqdl" Dec 13 01:48:20.362981 kubelet[3075]: I1213 01:48:20.362854 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f63454b1-f6af-474d-9fba-68a33ba56e16-xtables-lock\") pod \"kube-proxy-mzqdl\" (UID: \"f63454b1-f6af-474d-9fba-68a33ba56e16\") " pod="kube-system/kube-proxy-mzqdl" Dec 13 01:48:20.476072 kubelet[3075]: E1213 01:48:20.476039 3075 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:48:20.476072 kubelet[3075]: E1213 01:48:20.476078 3075 projected.go:200] Error preparing data for projected volume kube-api-access-knbjb for pod kube-system/kube-proxy-mzqdl: configmap "kube-root-ca.crt" not found Dec 13 01:48:20.476194 kubelet[3075]: E1213 01:48:20.476141 3075 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f63454b1-f6af-474d-9fba-68a33ba56e16-kube-api-access-knbjb podName:f63454b1-f6af-474d-9fba-68a33ba56e16 nodeName:}" failed. No retries permitted until 2024-12-13 01:48:20.976119078 +0000 UTC m=+12.874008305 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-knbjb" (UniqueName: "kubernetes.io/projected/f63454b1-f6af-474d-9fba-68a33ba56e16-kube-api-access-knbjb") pod "kube-proxy-mzqdl" (UID: "f63454b1-f6af-474d-9fba-68a33ba56e16") : configmap "kube-root-ca.crt" not found Dec 13 01:48:20.486105 systemd[1]: Started sshd@72-139.178.70.110:22-36.138.19.180:54112.service - OpenSSH per-connection server daemon (36.138.19.180:54112). 
Dec 13 01:48:20.762097 kubelet[3075]: I1213 01:48:20.761951 3075 topology_manager.go:215] "Topology Admit Handler" podUID="b66e164e-4da5-4339-bc36-b6a0f16fe8df" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-rn2fr" Dec 13 01:48:20.779056 systemd[1]: Created slice kubepods-besteffort-podb66e164e_4da5_4339_bc36_b6a0f16fe8df.slice - libcontainer container kubepods-besteffort-podb66e164e_4da5_4339_bc36_b6a0f16fe8df.slice. Dec 13 01:48:20.865201 kubelet[3075]: I1213 01:48:20.865173 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82g27\" (UniqueName: \"kubernetes.io/projected/b66e164e-4da5-4339-bc36-b6a0f16fe8df-kube-api-access-82g27\") pod \"tigera-operator-c7ccbd65-rn2fr\" (UID: \"b66e164e-4da5-4339-bc36-b6a0f16fe8df\") " pod="tigera-operator/tigera-operator-c7ccbd65-rn2fr" Dec 13 01:48:20.865377 kubelet[3075]: I1213 01:48:20.865355 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b66e164e-4da5-4339-bc36-b6a0f16fe8df-var-lib-calico\") pod \"tigera-operator-c7ccbd65-rn2fr\" (UID: \"b66e164e-4da5-4339-bc36-b6a0f16fe8df\") " pod="tigera-operator/tigera-operator-c7ccbd65-rn2fr" Dec 13 01:48:21.082594 containerd[1540]: time="2024-12-13T01:48:21.082524497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-rn2fr,Uid:b66e164e-4da5-4339-bc36-b6a0f16fe8df,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:48:21.167098 containerd[1540]: time="2024-12-13T01:48:21.167066799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mzqdl,Uid:f63454b1-f6af-474d-9fba-68a33ba56e16,Namespace:kube-system,Attempt:0,}" Dec 13 01:48:21.177171 containerd[1540]: time="2024-12-13T01:48:21.177031025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:21.177171 containerd[1540]: time="2024-12-13T01:48:21.177087766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:21.177171 containerd[1540]: time="2024-12-13T01:48:21.177095900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:21.177270 containerd[1540]: time="2024-12-13T01:48:21.177146940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:21.193011 systemd[1]: Started cri-containerd-ec7d4202913faf51f28493880fe15297303df983e13f16f05b093caea582c68f.scope - libcontainer container ec7d4202913faf51f28493880fe15297303df983e13f16f05b093caea582c68f. Dec 13 01:48:21.219485 containerd[1540]: time="2024-12-13T01:48:21.219315760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-rn2fr,Uid:b66e164e-4da5-4339-bc36-b6a0f16fe8df,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ec7d4202913faf51f28493880fe15297303df983e13f16f05b093caea582c68f\"" Dec 13 01:48:21.220735 containerd[1540]: time="2024-12-13T01:48:21.220659182Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:48:21.332740 containerd[1540]: time="2024-12-13T01:48:21.332126380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:21.332740 containerd[1540]: time="2024-12-13T01:48:21.332535676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:21.332740 containerd[1540]: time="2024-12-13T01:48:21.332597685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:21.333566 containerd[1540]: time="2024-12-13T01:48:21.332671859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:21.342324 sshd[3207]: Invalid user ubuntu from 36.138.19.180 port 54112 Dec 13 01:48:21.347039 systemd[1]: Started cri-containerd-04835a71f630d59cc3dcde8764839c613108884eeb07a8b1d09c92e8ba208965.scope - libcontainer container 04835a71f630d59cc3dcde8764839c613108884eeb07a8b1d09c92e8ba208965. Dec 13 01:48:21.363305 containerd[1540]: time="2024-12-13T01:48:21.363277809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mzqdl,Uid:f63454b1-f6af-474d-9fba-68a33ba56e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"04835a71f630d59cc3dcde8764839c613108884eeb07a8b1d09c92e8ba208965\"" Dec 13 01:48:21.366234 containerd[1540]: time="2024-12-13T01:48:21.366151506Z" level=info msg="CreateContainer within sandbox \"04835a71f630d59cc3dcde8764839c613108884eeb07a8b1d09c92e8ba208965\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:48:21.490357 containerd[1540]: time="2024-12-13T01:48:21.490282010Z" level=info msg="CreateContainer within sandbox \"04835a71f630d59cc3dcde8764839c613108884eeb07a8b1d09c92e8ba208965\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"acb1c03ea45344b0564f177e01634cccde58a699acbf094dc6de5f10186d491c\"" Dec 13 01:48:21.490731 containerd[1540]: time="2024-12-13T01:48:21.490685518Z" level=info msg="StartContainer for \"acb1c03ea45344b0564f177e01634cccde58a699acbf094dc6de5f10186d491c\"" Dec 13 01:48:21.513032 systemd[1]: Started cri-containerd-acb1c03ea45344b0564f177e01634cccde58a699acbf094dc6de5f10186d491c.scope - libcontainer container acb1c03ea45344b0564f177e01634cccde58a699acbf094dc6de5f10186d491c. 
Dec 13 01:48:21.541019 containerd[1540]: time="2024-12-13T01:48:21.540977827Z" level=info msg="StartContainer for \"acb1c03ea45344b0564f177e01634cccde58a699acbf094dc6de5f10186d491c\" returns successfully" Dec 13 01:48:21.547945 sshd[3207]: Connection closed by invalid user ubuntu 36.138.19.180 port 54112 [preauth] Dec 13 01:48:21.549612 systemd[1]: sshd@72-139.178.70.110:22-36.138.19.180:54112.service: Deactivated successfully. Dec 13 01:48:21.721303 systemd[1]: Started sshd@73-139.178.70.110:22-36.138.19.180:54128.service - OpenSSH per-connection server daemon (36.138.19.180:54128). Dec 13 01:48:22.290096 kubelet[3075]: I1213 01:48:22.290059 3075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mzqdl" podStartSLOduration=2.290032557 podStartE2EDuration="2.290032557s" podCreationTimestamp="2024-12-13 01:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:48:22.289894428 +0000 UTC m=+14.187783662" watchObservedRunningTime="2024-12-13 01:48:22.290032557 +0000 UTC m=+14.187921786" Dec 13 01:48:22.424398 sshd[3326]: Invalid user ubuntu from 36.138.19.180 port 54128 Dec 13 01:48:22.597356 sshd[3326]: Connection closed by invalid user ubuntu 36.138.19.180 port 54128 [preauth] Dec 13 01:48:22.598211 systemd[1]: sshd@73-139.178.70.110:22-36.138.19.180:54128.service: Deactivated successfully. Dec 13 01:48:22.809980 systemd[1]: Started sshd@74-139.178.70.110:22-36.138.19.180:54132.service - OpenSSH per-connection server daemon (36.138.19.180:54132). Dec 13 01:48:23.766375 sshd[3335]: Invalid user ubuntu from 36.138.19.180 port 54132 Dec 13 01:48:23.969854 sshd[3335]: Connection closed by invalid user ubuntu 36.138.19.180 port 54132 [preauth] Dec 13 01:48:23.971006 systemd[1]: sshd@74-139.178.70.110:22-36.138.19.180:54132.service: Deactivated successfully. 
Dec 13 01:48:24.173022 systemd[1]: Started sshd@75-139.178.70.110:22-36.138.19.180:42632.service - OpenSSH per-connection server daemon (36.138.19.180:42632). Dec 13 01:48:24.984406 sshd[3462]: Invalid user ubuntu from 36.138.19.180 port 42632 Dec 13 01:48:25.183988 sshd[3462]: Connection closed by invalid user ubuntu 36.138.19.180 port 42632 [preauth] Dec 13 01:48:25.185788 systemd[1]: sshd@75-139.178.70.110:22-36.138.19.180:42632.service: Deactivated successfully. Dec 13 01:48:25.402879 systemd[1]: Started sshd@76-139.178.70.110:22-36.138.19.180:42644.service - OpenSSH per-connection server daemon (36.138.19.180:42644). Dec 13 01:48:26.251942 sshd[3467]: Invalid user ubuntu from 36.138.19.180 port 42644 Dec 13 01:48:26.350891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1626956836.mount: Deactivated successfully. Dec 13 01:48:26.457898 sshd[3467]: Connection closed by invalid user ubuntu 36.138.19.180 port 42644 [preauth] Dec 13 01:48:26.458733 systemd[1]: sshd@76-139.178.70.110:22-36.138.19.180:42644.service: Deactivated successfully. Dec 13 01:48:26.659020 systemd[1]: Started sshd@77-139.178.70.110:22-36.138.19.180:42648.service - OpenSSH per-connection server daemon (36.138.19.180:42648). 
Dec 13 01:48:26.912607 containerd[1540]: time="2024-12-13T01:48:26.912446555Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:26.913123 containerd[1540]: time="2024-12-13T01:48:26.913051756Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764357" Dec 13 01:48:26.913667 containerd[1540]: time="2024-12-13T01:48:26.913449002Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:26.914656 containerd[1540]: time="2024-12-13T01:48:26.914635447Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:26.915223 containerd[1540]: time="2024-12-13T01:48:26.915206698Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 5.694528593s" Dec 13 01:48:26.915282 containerd[1540]: time="2024-12-13T01:48:26.915273334Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 01:48:26.916618 containerd[1540]: time="2024-12-13T01:48:26.916601047Z" level=info msg="CreateContainer within sandbox \"ec7d4202913faf51f28493880fe15297303df983e13f16f05b093caea582c68f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:48:26.926959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2226356267.mount: Deactivated successfully. 
Dec 13 01:48:26.930181 containerd[1540]: time="2024-12-13T01:48:26.930148989Z" level=info msg="CreateContainer within sandbox \"ec7d4202913faf51f28493880fe15297303df983e13f16f05b093caea582c68f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bf0f343b12c4ca39cf67dc75ae57af58e352fe30ed0286f9f6170c375005c217\"" Dec 13 01:48:26.930961 containerd[1540]: time="2024-12-13T01:48:26.930574269Z" level=info msg="StartContainer for \"bf0f343b12c4ca39cf67dc75ae57af58e352fe30ed0286f9f6170c375005c217\"" Dec 13 01:48:26.963113 systemd[1]: Started cri-containerd-bf0f343b12c4ca39cf67dc75ae57af58e352fe30ed0286f9f6170c375005c217.scope - libcontainer container bf0f343b12c4ca39cf67dc75ae57af58e352fe30ed0286f9f6170c375005c217. Dec 13 01:48:26.985617 containerd[1540]: time="2024-12-13T01:48:26.985590149Z" level=info msg="StartContainer for \"bf0f343b12c4ca39cf67dc75ae57af58e352fe30ed0286f9f6170c375005c217\" returns successfully" Dec 13 01:48:27.247096 kubelet[3075]: I1213 01:48:27.247075 3075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-rn2fr" podStartSLOduration=1.551388856 podStartE2EDuration="7.247012478s" podCreationTimestamp="2024-12-13 01:48:20 +0000 UTC" firstStartedPulling="2024-12-13 01:48:21.220027751 +0000 UTC m=+13.117916976" lastFinishedPulling="2024-12-13 01:48:26.91565137 +0000 UTC m=+18.813540598" observedRunningTime="2024-12-13 01:48:27.244498003 +0000 UTC m=+19.142387237" watchObservedRunningTime="2024-12-13 01:48:27.247012478 +0000 UTC m=+19.144901713" Dec 13 01:48:27.496855 sshd[3480]: Invalid user ubuntu from 36.138.19.180 port 42648 Dec 13 01:48:27.694026 sshd[3480]: Connection closed by invalid user ubuntu 36.138.19.180 port 42648 [preauth] Dec 13 01:48:27.695265 systemd[1]: sshd@77-139.178.70.110:22-36.138.19.180:42648.service: Deactivated successfully. 
Dec 13 01:48:27.903626 systemd[1]: Started sshd@78-139.178.70.110:22-36.138.19.180:42656.service - OpenSSH per-connection server daemon (36.138.19.180:42656). Dec 13 01:48:28.717937 sshd[3521]: Invalid user ubuntu from 36.138.19.180 port 42656 Dec 13 01:48:28.918127 sshd[3521]: Connection closed by invalid user ubuntu 36.138.19.180 port 42656 [preauth] Dec 13 01:48:28.920150 systemd[1]: sshd@78-139.178.70.110:22-36.138.19.180:42656.service: Deactivated successfully. Dec 13 01:48:29.125544 systemd[1]: Started sshd@79-139.178.70.110:22-36.138.19.180:42660.service - OpenSSH per-connection server daemon (36.138.19.180:42660). Dec 13 01:48:29.756068 kubelet[3075]: I1213 01:48:29.755861 3075 topology_manager.go:215] "Topology Admit Handler" podUID="ac0a2986-4c10-495e-bb78-0d0fbf050305" podNamespace="calico-system" podName="calico-typha-75fddf5874-nr6k7" Dec 13 01:48:29.765192 systemd[1]: Created slice kubepods-besteffort-podac0a2986_4c10_495e_bb78_0d0fbf050305.slice - libcontainer container kubepods-besteffort-podac0a2986_4c10_495e_bb78_0d0fbf050305.slice. Dec 13 01:48:29.826274 kubelet[3075]: I1213 01:48:29.825286 3075 topology_manager.go:215] "Topology Admit Handler" podUID="19681f20-7fcf-4139-b6d3-560008f3b677" podNamespace="calico-system" podName="calico-node-psf7m" Dec 13 01:48:29.830851 systemd[1]: Created slice kubepods-besteffort-pod19681f20_7fcf_4139_b6d3_560008f3b677.slice - libcontainer container kubepods-besteffort-pod19681f20_7fcf_4139_b6d3_560008f3b677.slice. 
Dec 13 01:48:29.926772 kubelet[3075]: I1213 01:48:29.926749 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmhjv\" (UniqueName: \"kubernetes.io/projected/19681f20-7fcf-4139-b6d3-560008f3b677-kube-api-access-gmhjv\") pod \"calico-node-psf7m\" (UID: \"19681f20-7fcf-4139-b6d3-560008f3b677\") " pod="calico-system/calico-node-psf7m" Dec 13 01:48:29.926963 kubelet[3075]: I1213 01:48:29.926954 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5vrq\" (UniqueName: \"kubernetes.io/projected/ac0a2986-4c10-495e-bb78-0d0fbf050305-kube-api-access-k5vrq\") pod \"calico-typha-75fddf5874-nr6k7\" (UID: \"ac0a2986-4c10-495e-bb78-0d0fbf050305\") " pod="calico-system/calico-typha-75fddf5874-nr6k7" Dec 13 01:48:29.927047 kubelet[3075]: I1213 01:48:29.927042 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/19681f20-7fcf-4139-b6d3-560008f3b677-cni-bin-dir\") pod \"calico-node-psf7m\" (UID: \"19681f20-7fcf-4139-b6d3-560008f3b677\") " pod="calico-system/calico-node-psf7m" Dec 13 01:48:29.927107 kubelet[3075]: I1213 01:48:29.927101 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/19681f20-7fcf-4139-b6d3-560008f3b677-cni-net-dir\") pod \"calico-node-psf7m\" (UID: \"19681f20-7fcf-4139-b6d3-560008f3b677\") " pod="calico-system/calico-node-psf7m" Dec 13 01:48:29.927190 kubelet[3075]: I1213 01:48:29.927185 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/19681f20-7fcf-4139-b6d3-560008f3b677-var-lib-calico\") pod \"calico-node-psf7m\" (UID: \"19681f20-7fcf-4139-b6d3-560008f3b677\") " pod="calico-system/calico-node-psf7m" Dec 13 01:48:29.927274 
kubelet[3075]: I1213 01:48:29.927269 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/19681f20-7fcf-4139-b6d3-560008f3b677-cni-log-dir\") pod \"calico-node-psf7m\" (UID: \"19681f20-7fcf-4139-b6d3-560008f3b677\") " pod="calico-system/calico-node-psf7m" Dec 13 01:48:29.927319 kubelet[3075]: I1213 01:48:29.927314 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ac0a2986-4c10-495e-bb78-0d0fbf050305-typha-certs\") pod \"calico-typha-75fddf5874-nr6k7\" (UID: \"ac0a2986-4c10-495e-bb78-0d0fbf050305\") " pod="calico-system/calico-typha-75fddf5874-nr6k7" Dec 13 01:48:29.927398 kubelet[3075]: I1213 01:48:29.927393 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/19681f20-7fcf-4139-b6d3-560008f3b677-flexvol-driver-host\") pod \"calico-node-psf7m\" (UID: \"19681f20-7fcf-4139-b6d3-560008f3b677\") " pod="calico-system/calico-node-psf7m" Dec 13 01:48:29.927513 kubelet[3075]: I1213 01:48:29.927508 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19681f20-7fcf-4139-b6d3-560008f3b677-xtables-lock\") pod \"calico-node-psf7m\" (UID: \"19681f20-7fcf-4139-b6d3-560008f3b677\") " pod="calico-system/calico-node-psf7m" Dec 13 01:48:29.927571 kubelet[3075]: I1213 01:48:29.927566 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac0a2986-4c10-495e-bb78-0d0fbf050305-tigera-ca-bundle\") pod \"calico-typha-75fddf5874-nr6k7\" (UID: \"ac0a2986-4c10-495e-bb78-0d0fbf050305\") " pod="calico-system/calico-typha-75fddf5874-nr6k7" Dec 13 01:48:29.927632 kubelet[3075]: I1213 
01:48:29.927627 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/19681f20-7fcf-4139-b6d3-560008f3b677-var-run-calico\") pod \"calico-node-psf7m\" (UID: \"19681f20-7fcf-4139-b6d3-560008f3b677\") " pod="calico-system/calico-node-psf7m" Dec 13 01:48:29.927681 kubelet[3075]: I1213 01:48:29.927671 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/19681f20-7fcf-4139-b6d3-560008f3b677-policysync\") pod \"calico-node-psf7m\" (UID: \"19681f20-7fcf-4139-b6d3-560008f3b677\") " pod="calico-system/calico-node-psf7m" Dec 13 01:48:29.927964 kubelet[3075]: I1213 01:48:29.927813 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/19681f20-7fcf-4139-b6d3-560008f3b677-node-certs\") pod \"calico-node-psf7m\" (UID: \"19681f20-7fcf-4139-b6d3-560008f3b677\") " pod="calico-system/calico-node-psf7m" Dec 13 01:48:29.927964 kubelet[3075]: I1213 01:48:29.927828 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19681f20-7fcf-4139-b6d3-560008f3b677-lib-modules\") pod \"calico-node-psf7m\" (UID: \"19681f20-7fcf-4139-b6d3-560008f3b677\") " pod="calico-system/calico-node-psf7m" Dec 13 01:48:29.927964 kubelet[3075]: I1213 01:48:29.927841 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19681f20-7fcf-4139-b6d3-560008f3b677-tigera-ca-bundle\") pod \"calico-node-psf7m\" (UID: \"19681f20-7fcf-4139-b6d3-560008f3b677\") " pod="calico-system/calico-node-psf7m" Dec 13 01:48:29.932304 kubelet[3075]: I1213 01:48:29.931809 3075 topology_manager.go:215] "Topology Admit Handler" 
podUID="505a3ea8-bd57-41cf-a662-11b3cdb671b9" podNamespace="calico-system" podName="csi-node-driver-qmcx8" Dec 13 01:48:29.932304 kubelet[3075]: E1213 01:48:29.932029 3075 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qmcx8" podUID="505a3ea8-bd57-41cf-a662-11b3cdb671b9" Dec 13 01:48:29.947143 sshd[3527]: Invalid user ubuntu from 36.138.19.180 port 42660 Dec 13 01:48:30.029239 kubelet[3075]: I1213 01:48:30.028770 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/505a3ea8-bd57-41cf-a662-11b3cdb671b9-registration-dir\") pod \"csi-node-driver-qmcx8\" (UID: \"505a3ea8-bd57-41cf-a662-11b3cdb671b9\") " pod="calico-system/csi-node-driver-qmcx8" Dec 13 01:48:30.029239 kubelet[3075]: I1213 01:48:30.028839 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/505a3ea8-bd57-41cf-a662-11b3cdb671b9-varrun\") pod \"csi-node-driver-qmcx8\" (UID: \"505a3ea8-bd57-41cf-a662-11b3cdb671b9\") " pod="calico-system/csi-node-driver-qmcx8" Dec 13 01:48:30.029239 kubelet[3075]: I1213 01:48:30.028862 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n24mp\" (UniqueName: \"kubernetes.io/projected/505a3ea8-bd57-41cf-a662-11b3cdb671b9-kube-api-access-n24mp\") pod \"csi-node-driver-qmcx8\" (UID: \"505a3ea8-bd57-41cf-a662-11b3cdb671b9\") " pod="calico-system/csi-node-driver-qmcx8" Dec 13 01:48:30.029239 kubelet[3075]: I1213 01:48:30.028915 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/505a3ea8-bd57-41cf-a662-11b3cdb671b9-socket-dir\") pod \"csi-node-driver-qmcx8\" (UID: \"505a3ea8-bd57-41cf-a662-11b3cdb671b9\") " pod="calico-system/csi-node-driver-qmcx8" Dec 13 01:48:30.031975 kubelet[3075]: I1213 01:48:30.031780 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/505a3ea8-bd57-41cf-a662-11b3cdb671b9-kubelet-dir\") pod \"csi-node-driver-qmcx8\" (UID: \"505a3ea8-bd57-41cf-a662-11b3cdb671b9\") " pod="calico-system/csi-node-driver-qmcx8" Dec 13 01:48:30.047145 kubelet[3075]: E1213 01:48:30.047117 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.047145 kubelet[3075]: W1213 01:48:30.047140 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.049227 kubelet[3075]: E1213 01:48:30.048334 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:30.049227 kubelet[3075]: E1213 01:48:30.048866 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.049227 kubelet[3075]: W1213 01:48:30.048961 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.049227 kubelet[3075]: E1213 01:48:30.048978 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:30.058362 kubelet[3075]: E1213 01:48:30.058347 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.058493 kubelet[3075]: W1213 01:48:30.058483 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.058628 kubelet[3075]: E1213 01:48:30.058554 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:30.059032 kubelet[3075]: E1213 01:48:30.059006 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.059032 kubelet[3075]: W1213 01:48:30.059013 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.059032 kubelet[3075]: E1213 01:48:30.059020 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:30.086943 containerd[1540]: time="2024-12-13T01:48:30.086864776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75fddf5874-nr6k7,Uid:ac0a2986-4c10-495e-bb78-0d0fbf050305,Namespace:calico-system,Attempt:0,}" Dec 13 01:48:30.105763 containerd[1540]: time="2024-12-13T01:48:30.105558654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:30.105763 containerd[1540]: time="2024-12-13T01:48:30.105605711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:30.105763 containerd[1540]: time="2024-12-13T01:48:30.105630874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:30.106204 containerd[1540]: time="2024-12-13T01:48:30.105739605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:30.119117 systemd[1]: Started cri-containerd-0138e0243a956acf2e44323e42d2db9cbc6ec6ec15f513fbe767fc91a16c2e32.scope - libcontainer container 0138e0243a956acf2e44323e42d2db9cbc6ec6ec15f513fbe767fc91a16c2e32. Dec 13 01:48:30.132813 kubelet[3075]: E1213 01:48:30.132742 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.132813 kubelet[3075]: W1213 01:48:30.132754 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.132813 kubelet[3075]: E1213 01:48:30.132768 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:30.133037 kubelet[3075]: E1213 01:48:30.132999 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.133037 kubelet[3075]: W1213 01:48:30.133004 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.133037 kubelet[3075]: E1213 01:48:30.133010 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:30.133151 kubelet[3075]: E1213 01:48:30.133104 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.133151 kubelet[3075]: W1213 01:48:30.133110 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.133151 kubelet[3075]: E1213 01:48:30.133116 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:30.133231 kubelet[3075]: E1213 01:48:30.133209 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.133231 kubelet[3075]: W1213 01:48:30.133213 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.133231 kubelet[3075]: E1213 01:48:30.133222 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:30.133311 kubelet[3075]: E1213 01:48:30.133305 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.133311 kubelet[3075]: W1213 01:48:30.133309 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.133437 kubelet[3075]: E1213 01:48:30.133315 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:30.133646 kubelet[3075]: E1213 01:48:30.133634 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.133646 kubelet[3075]: W1213 01:48:30.133643 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.133687 kubelet[3075]: E1213 01:48:30.133652 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:30.133772 kubelet[3075]: E1213 01:48:30.133761 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.133772 kubelet[3075]: W1213 01:48:30.133767 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.133772 kubelet[3075]: E1213 01:48:30.133773 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:30.133875 kubelet[3075]: E1213 01:48:30.133857 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.133875 kubelet[3075]: W1213 01:48:30.133862 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.133875 kubelet[3075]: E1213 01:48:30.133868 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:30.142154 kubelet[3075]: E1213 01:48:30.141198 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.142154 kubelet[3075]: W1213 01:48:30.141211 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.142154 kubelet[3075]: E1213 01:48:30.141224 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:30.142154 kubelet[3075]: E1213 01:48:30.141673 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.142154 kubelet[3075]: W1213 01:48:30.141679 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.142154 kubelet[3075]: E1213 01:48:30.141691 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:30.142154 kubelet[3075]: E1213 01:48:30.141786 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.142154 kubelet[3075]: W1213 01:48:30.141791 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.142154 kubelet[3075]: E1213 01:48:30.141799 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:30.142154 kubelet[3075]: E1213 01:48:30.141899 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.142387 containerd[1540]: time="2024-12-13T01:48:30.141577136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-psf7m,Uid:19681f20-7fcf-4139-b6d3-560008f3b677,Namespace:calico-system,Attempt:0,}" Dec 13 01:48:30.142411 kubelet[3075]: W1213 01:48:30.141903 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.142411 kubelet[3075]: E1213 01:48:30.141910 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:30.142411 kubelet[3075]: E1213 01:48:30.142064 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.142411 kubelet[3075]: W1213 01:48:30.142069 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.142411 kubelet[3075]: E1213 01:48:30.142075 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:30.142411 kubelet[3075]: E1213 01:48:30.142158 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.142411 kubelet[3075]: W1213 01:48:30.142162 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.142411 kubelet[3075]: E1213 01:48:30.142169 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:30.142411 kubelet[3075]: E1213 01:48:30.142240 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.142411 kubelet[3075]: W1213 01:48:30.142244 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.142567 kubelet[3075]: E1213 01:48:30.142250 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:30.142567 kubelet[3075]: E1213 01:48:30.142318 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.142567 kubelet[3075]: W1213 01:48:30.142323 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.142567 kubelet[3075]: E1213 01:48:30.142329 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:30.145162 kubelet[3075]: E1213 01:48:30.143950 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:30.145162 kubelet[3075]: W1213 01:48:30.143959 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:30.145162 kubelet[3075]: E1213 01:48:30.143970 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Dec 13 01:48:30.145162 kubelet[3075]: E1213 01:48:30.144126 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:30.145162 kubelet[3075]: W1213 01:48:30.144131 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:30.145162 kubelet[3075]: E1213 01:48:30.144138 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:30.145162 kubelet[3075]: E1213 01:48:30.144747 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:30.145162 kubelet[3075]: W1213 01:48:30.144754 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:30.145162 kubelet[3075]: E1213 01:48:30.144762 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:30.145162 kubelet[3075]: E1213 01:48:30.144928 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:30.145586 kubelet[3075]: W1213 01:48:30.145092 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:30.145586 kubelet[3075]: E1213 01:48:30.145101 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:30.145586 kubelet[3075]: E1213 01:48:30.145249 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:30.145586 kubelet[3075]: W1213 01:48:30.145254 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:30.145586 kubelet[3075]: E1213 01:48:30.145260 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:30.145752 kubelet[3075]: E1213 01:48:30.145692 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:30.145752 kubelet[3075]: W1213 01:48:30.145698 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:30.145752 kubelet[3075]: E1213 01:48:30.145709 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:30.146260 kubelet[3075]: E1213 01:48:30.145795 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:30.146260 kubelet[3075]: W1213 01:48:30.145800 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:30.146260 kubelet[3075]: E1213 01:48:30.145824 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:30.146260 kubelet[3075]: E1213 01:48:30.145949 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:30.146260 kubelet[3075]: W1213 01:48:30.145960 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:30.146260 kubelet[3075]: E1213 01:48:30.145967 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:30.147065 kubelet[3075]: E1213 01:48:30.147054 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:30.147065 kubelet[3075]: W1213 01:48:30.147063 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:30.147129 kubelet[3075]: E1213 01:48:30.147072 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:30.154104 kubelet[3075]: E1213 01:48:30.154061 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:30.154104 kubelet[3075]: W1213 01:48:30.154071 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:30.154104 kubelet[3075]: E1213 01:48:30.154084 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:30.154287 sshd[3527]: Connection closed by invalid user ubuntu 36.138.19.180 port 42660 [preauth]
Dec 13 01:48:30.155651 systemd[1]: sshd@79-139.178.70.110:22-36.138.19.180:42660.service: Deactivated successfully.
Dec 13 01:48:30.162771 containerd[1540]: time="2024-12-13T01:48:30.157797029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75fddf5874-nr6k7,Uid:ac0a2986-4c10-495e-bb78-0d0fbf050305,Namespace:calico-system,Attempt:0,} returns sandbox id \"0138e0243a956acf2e44323e42d2db9cbc6ec6ec15f513fbe767fc91a16c2e32\""
Dec 13 01:48:30.162771 containerd[1540]: time="2024-12-13T01:48:30.159083582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Dec 13 01:48:30.204658 containerd[1540]: time="2024-12-13T01:48:30.204460602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:48:30.204658 containerd[1540]: time="2024-12-13T01:48:30.204526387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:48:30.204658 containerd[1540]: time="2024-12-13T01:48:30.204538334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:48:30.204658 containerd[1540]: time="2024-12-13T01:48:30.204617520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:48:30.218132 systemd[1]: Started cri-containerd-b272403bb7d9abf30cd2412743b1e495ef09960e5e998bb6ab3e6e315263ca3f.scope - libcontainer container b272403bb7d9abf30cd2412743b1e495ef09960e5e998bb6ab3e6e315263ca3f.
Dec 13 01:48:30.235291 containerd[1540]: time="2024-12-13T01:48:30.235256130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-psf7m,Uid:19681f20-7fcf-4139-b6d3-560008f3b677,Namespace:calico-system,Attempt:0,} returns sandbox id \"b272403bb7d9abf30cd2412743b1e495ef09960e5e998bb6ab3e6e315263ca3f\""
Dec 13 01:48:30.363672 systemd[1]: Started sshd@80-139.178.70.110:22-36.138.19.180:42668.service - OpenSSH per-connection server daemon (36.138.19.180:42668).
Dec 13 01:48:31.188485 sshd[3654]: Invalid user ubuntu from 36.138.19.180 port 42668
Dec 13 01:48:31.389945 sshd[3654]: Connection closed by invalid user ubuntu 36.138.19.180 port 42668 [preauth]
Dec 13 01:48:31.391380 systemd[1]: sshd@80-139.178.70.110:22-36.138.19.180:42668.service: Deactivated successfully.
Dec 13 01:48:31.454487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2643168707.mount: Deactivated successfully.
Dec 13 01:48:31.591514 systemd[1]: Started sshd@81-139.178.70.110:22-36.138.19.180:42676.service - OpenSSH per-connection server daemon (36.138.19.180:42676).
Dec 13 01:48:31.989014 containerd[1540]: time="2024-12-13T01:48:31.988971501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:48:31.989534 containerd[1540]: time="2024-12-13T01:48:31.989405706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Dec 13 01:48:31.989796 containerd[1540]: time="2024-12-13T01:48:31.989735732Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:48:31.990999 containerd[1540]: time="2024-12-13T01:48:31.990873275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:48:31.991441 containerd[1540]: time="2024-12-13T01:48:31.991424349Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.83232499s"
Dec 13 01:48:31.991478 containerd[1540]: time="2024-12-13T01:48:31.991441427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Dec 13 01:48:31.996369 containerd[1540]: time="2024-12-13T01:48:31.996345154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 01:48:32.016044 containerd[1540]: time="2024-12-13T01:48:32.015751024Z" level=info msg="CreateContainer within sandbox \"0138e0243a956acf2e44323e42d2db9cbc6ec6ec15f513fbe767fc91a16c2e32\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 13 01:48:32.022400 containerd[1540]: time="2024-12-13T01:48:32.022334796Z" level=info msg="CreateContainer within sandbox \"0138e0243a956acf2e44323e42d2db9cbc6ec6ec15f513fbe767fc91a16c2e32\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"73f2bc7e1bedbe55df7c08654f27b9fb7a5212263c2aaeac7a8874daa0d0b42e\""
Dec 13 01:48:32.022982 containerd[1540]: time="2024-12-13T01:48:32.022956836Z" level=info msg="StartContainer for \"73f2bc7e1bedbe55df7c08654f27b9fb7a5212263c2aaeac7a8874daa0d0b42e\""
Dec 13 01:48:32.063049 systemd[1]: Started cri-containerd-73f2bc7e1bedbe55df7c08654f27b9fb7a5212263c2aaeac7a8874daa0d0b42e.scope - libcontainer container 73f2bc7e1bedbe55df7c08654f27b9fb7a5212263c2aaeac7a8874daa0d0b42e.
Dec 13 01:48:32.097198 containerd[1540]: time="2024-12-13T01:48:32.097171286Z" level=info msg="StartContainer for \"73f2bc7e1bedbe55df7c08654f27b9fb7a5212263c2aaeac7a8874daa0d0b42e\" returns successfully"
Dec 13 01:48:32.188955 kubelet[3075]: E1213 01:48:32.188352 3075 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qmcx8" podUID="505a3ea8-bd57-41cf-a662-11b3cdb671b9"
Dec 13 01:48:32.278020 kubelet[3075]: I1213 01:48:32.277784 3075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-75fddf5874-nr6k7" podStartSLOduration=1.441716297 podStartE2EDuration="3.277070155s" podCreationTimestamp="2024-12-13 01:48:29 +0000 UTC" firstStartedPulling="2024-12-13 01:48:30.158479275 +0000 UTC m=+22.056368501" lastFinishedPulling="2024-12-13 01:48:31.993833134 +0000 UTC m=+23.891722359" observedRunningTime="2024-12-13 01:48:32.277033728 +0000 UTC m=+24.174922962" watchObservedRunningTime="2024-12-13 01:48:32.277070155 +0000 UTC m=+24.174959383"
Dec 13 01:48:32.349377 kubelet[3075]: E1213 01:48:32.349353 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.349377 kubelet[3075]: W1213 01:48:32.349372 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.349508 kubelet[3075]: E1213 01:48:32.349389 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.349508 kubelet[3075]: E1213 01:48:32.349498 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.349508 kubelet[3075]: W1213 01:48:32.349504 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.349585 kubelet[3075]: E1213 01:48:32.349511 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.349617 kubelet[3075]: E1213 01:48:32.349593 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.349617 kubelet[3075]: W1213 01:48:32.349597 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.349617 kubelet[3075]: E1213 01:48:32.349603 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.349688 kubelet[3075]: E1213 01:48:32.349684 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.349708 kubelet[3075]: W1213 01:48:32.349688 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.349708 kubelet[3075]: E1213 01:48:32.349694 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.349792 kubelet[3075]: E1213 01:48:32.349783 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.349792 kubelet[3075]: W1213 01:48:32.349789 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.349912 kubelet[3075]: E1213 01:48:32.349795 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.349912 kubelet[3075]: E1213 01:48:32.349876 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.349912 kubelet[3075]: W1213 01:48:32.349880 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.349912 kubelet[3075]: E1213 01:48:32.349886 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.350024 kubelet[3075]: E1213 01:48:32.349968 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.350024 kubelet[3075]: W1213 01:48:32.349972 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.350024 kubelet[3075]: E1213 01:48:32.349978 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.350096 kubelet[3075]: E1213 01:48:32.350056 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.350096 kubelet[3075]: W1213 01:48:32.350060 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.350096 kubelet[3075]: E1213 01:48:32.350066 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.350160 kubelet[3075]: E1213 01:48:32.350146 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.350160 kubelet[3075]: W1213 01:48:32.350151 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.350160 kubelet[3075]: E1213 01:48:32.350156 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.350295 kubelet[3075]: E1213 01:48:32.350271 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.350295 kubelet[3075]: W1213 01:48:32.350275 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.350295 kubelet[3075]: E1213 01:48:32.350282 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.350361 kubelet[3075]: E1213 01:48:32.350355 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.350361 kubelet[3075]: W1213 01:48:32.350359 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.350398 kubelet[3075]: E1213 01:48:32.350366 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.350444 kubelet[3075]: E1213 01:48:32.350439 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.350444 kubelet[3075]: W1213 01:48:32.350444 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.350487 kubelet[3075]: E1213 01:48:32.350450 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.350537 kubelet[3075]: E1213 01:48:32.350528 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.350537 kubelet[3075]: W1213 01:48:32.350535 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.350740 kubelet[3075]: E1213 01:48:32.350541 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.350740 kubelet[3075]: E1213 01:48:32.350613 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.350740 kubelet[3075]: W1213 01:48:32.350617 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.350740 kubelet[3075]: E1213 01:48:32.350622 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.350740 kubelet[3075]: E1213 01:48:32.350715 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.350740 kubelet[3075]: W1213 01:48:32.350719 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.350740 kubelet[3075]: E1213 01:48:32.350726 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.423744 sshd[3665]: Invalid user ubuntu from 36.138.19.180 port 42676
Dec 13 01:48:32.446907 kubelet[3075]: E1213 01:48:32.446890 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.446907 kubelet[3075]: W1213 01:48:32.446903 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.446907 kubelet[3075]: E1213 01:48:32.446915 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.447094 kubelet[3075]: E1213 01:48:32.447083 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.447094 kubelet[3075]: W1213 01:48:32.447092 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.447295 kubelet[3075]: E1213 01:48:32.447101 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.447295 kubelet[3075]: E1213 01:48:32.447211 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.447295 kubelet[3075]: W1213 01:48:32.447218 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.447295 kubelet[3075]: E1213 01:48:32.447231 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.447401 kubelet[3075]: E1213 01:48:32.447396 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.447479 kubelet[3075]: W1213 01:48:32.447428 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.447479 kubelet[3075]: E1213 01:48:32.447441 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.447545 kubelet[3075]: E1213 01:48:32.447540 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.447578 kubelet[3075]: W1213 01:48:32.447572 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.447613 kubelet[3075]: E1213 01:48:32.447607 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.447994 kubelet[3075]: E1213 01:48:32.447740 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.447994 kubelet[3075]: W1213 01:48:32.447745 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.447994 kubelet[3075]: E1213 01:48:32.447755 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.447994 kubelet[3075]: E1213 01:48:32.447917 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.447994 kubelet[3075]: W1213 01:48:32.447932 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.447994 kubelet[3075]: E1213 01:48:32.447940 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.448133 kubelet[3075]: E1213 01:48:32.448127 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.448164 kubelet[3075]: W1213 01:48:32.448160 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.448203 kubelet[3075]: E1213 01:48:32.448198 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.448335 kubelet[3075]: E1213 01:48:32.448330 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.448373 kubelet[3075]: W1213 01:48:32.448368 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.448411 kubelet[3075]: E1213 01:48:32.448405 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.448537 kubelet[3075]: E1213 01:48:32.448531 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.448567 kubelet[3075]: W1213 01:48:32.448562 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.448616 kubelet[3075]: E1213 01:48:32.448605 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.448722 kubelet[3075]: E1213 01:48:32.448716 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.448757 kubelet[3075]: W1213 01:48:32.448752 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.448793 kubelet[3075]: E1213 01:48:32.448789 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.448917 kubelet[3075]: E1213 01:48:32.448911 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.448967 kubelet[3075]: W1213 01:48:32.448961 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.449010 kubelet[3075]: E1213 01:48:32.449000 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.449108 kubelet[3075]: E1213 01:48:32.449096 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.449108 kubelet[3075]: W1213 01:48:32.449104 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.449154 kubelet[3075]: E1213 01:48:32.449113 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.449215 kubelet[3075]: E1213 01:48:32.449204 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.449215 kubelet[3075]: W1213 01:48:32.449212 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.449256 kubelet[3075]: E1213 01:48:32.449219 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.449428 kubelet[3075]: E1213 01:48:32.449363 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.449428 kubelet[3075]: W1213 01:48:32.449369 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.449428 kubelet[3075]: E1213 01:48:32.449379 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.449518 kubelet[3075]: E1213 01:48:32.449513 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.449552 kubelet[3075]: W1213 01:48:32.449546 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.449672 kubelet[3075]: E1213 01:48:32.449583 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.449741 kubelet[3075]: E1213 01:48:32.449729 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.449741 kubelet[3075]: W1213 01:48:32.449737 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.449785 kubelet[3075]: E1213 01:48:32.449746 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.449840 kubelet[3075]: E1213 01:48:32.449833 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:48:32.449840 kubelet[3075]: W1213 01:48:32.449838 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:48:32.449886 kubelet[3075]: E1213 01:48:32.449844 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:48:32.618676 sshd[3665]: Connection closed by invalid user ubuntu 36.138.19.180 port 42676 [preauth]
Dec 13 01:48:32.618584 systemd[1]: sshd@81-139.178.70.110:22-36.138.19.180:42676.service: Deactivated successfully.
Dec 13 01:48:32.821213 systemd[1]: Started sshd@82-139.178.70.110:22-36.138.19.180:42692.service - OpenSSH per-connection server daemon (36.138.19.180:42692).
Dec 13 01:48:33.218644 containerd[1540]: time="2024-12-13T01:48:33.218596173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:33.219213 containerd[1540]: time="2024-12-13T01:48:33.219188263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Dec 13 01:48:33.219476 containerd[1540]: time="2024-12-13T01:48:33.219245097Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:33.223105 containerd[1540]: time="2024-12-13T01:48:33.223065145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:33.223881 containerd[1540]: time="2024-12-13T01:48:33.223812521Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.227314467s" Dec 13 01:48:33.223881 containerd[1540]: time="2024-12-13T01:48:33.223833026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 01:48:33.225324 containerd[1540]: time="2024-12-13T01:48:33.225303834Z" level=info msg="CreateContainer within sandbox \"b272403bb7d9abf30cd2412743b1e495ef09960e5e998bb6ab3e6e315263ca3f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 
01:48:33.251846 kubelet[3075]: I1213 01:48:33.251819 3075 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:48:33.254657 kubelet[3075]: E1213 01:48:33.254568 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:33.254657 kubelet[3075]: W1213 01:48:33.254578 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:33.254657 kubelet[3075]: E1213 01:48:33.254589 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:33.255001 kubelet[3075]: E1213 01:48:33.254753 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:33.255001 kubelet[3075]: W1213 01:48:33.254758 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:33.255001 kubelet[3075]: E1213 01:48:33.254765 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:33.255001 kubelet[3075]: E1213 01:48:33.254886 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:33.255001 kubelet[3075]: W1213 01:48:33.254891 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:33.255001 kubelet[3075]: E1213 01:48:33.254898 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:33.255770 kubelet[3075]: E1213 01:48:33.255643 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:33.255770 kubelet[3075]: W1213 01:48:33.255650 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:33.255770 kubelet[3075]: E1213 01:48:33.255659 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:33.256148 kubelet[3075]: E1213 01:48:33.255968 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:33.256148 kubelet[3075]: W1213 01:48:33.255973 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:33.256148 kubelet[3075]: E1213 01:48:33.255981 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:33.256148 kubelet[3075]: E1213 01:48:33.256103 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:33.256148 kubelet[3075]: W1213 01:48:33.256111 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:33.256148 kubelet[3075]: E1213 01:48:33.256117 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:33.256268 kubelet[3075]: E1213 01:48:33.256224 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:33.256268 kubelet[3075]: W1213 01:48:33.256228 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:33.256268 kubelet[3075]: E1213 01:48:33.256234 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:33.256849 kubelet[3075]: E1213 01:48:33.256331 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:33.256849 kubelet[3075]: W1213 01:48:33.256338 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:33.256849 kubelet[3075]: E1213 01:48:33.256346 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:33.256849 kubelet[3075]: E1213 01:48:33.256485 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:33.256849 kubelet[3075]: W1213 01:48:33.256489 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:33.256849 kubelet[3075]: E1213 01:48:33.256498 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:33.256849 kubelet[3075]: E1213 01:48:33.256601 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:33.256849 kubelet[3075]: W1213 01:48:33.256617 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:33.256849 kubelet[3075]: E1213 01:48:33.256626 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:33.256849 kubelet[3075]: E1213 01:48:33.256723 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:33.257434 kubelet[3075]: W1213 01:48:33.256729 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:33.257434 kubelet[3075]: E1213 01:48:33.256735 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:33.257434 kubelet[3075]: E1213 01:48:33.257012 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:33.257434 kubelet[3075]: W1213 01:48:33.257020 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:33.257434 kubelet[3075]: E1213 01:48:33.257031 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:33.257434 kubelet[3075]: E1213 01:48:33.257135 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:33.257434 kubelet[3075]: W1213 01:48:33.257140 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:33.257434 kubelet[3075]: E1213 01:48:33.257160 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:33.257434 kubelet[3075]: E1213 01:48:33.257284 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:33.257434 kubelet[3075]: W1213 01:48:33.257290 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:33.257892 kubelet[3075]: E1213 01:48:33.257298 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:33.257892 kubelet[3075]: E1213 01:48:33.257416 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:33.257892 kubelet[3075]: W1213 01:48:33.257421 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:33.257892 kubelet[3075]: E1213 01:48:33.257427 3075 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:33.262017 containerd[1540]: time="2024-12-13T01:48:33.261908536Z" level=info msg="CreateContainer within sandbox \"b272403bb7d9abf30cd2412743b1e495ef09960e5e998bb6ab3e6e315263ca3f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7670a9ae7bec903993269c09beac9865815fe3bddd5d1ace8708ebaa58b6fc0f\"" Dec 13 01:48:33.264094 containerd[1540]: time="2024-12-13T01:48:33.263501800Z" level=info msg="StartContainer for \"7670a9ae7bec903993269c09beac9865815fe3bddd5d1ace8708ebaa58b6fc0f\"" Dec 13 01:48:33.281744 systemd[1]: run-containerd-runc-k8s.io-7670a9ae7bec903993269c09beac9865815fe3bddd5d1ace8708ebaa58b6fc0f-runc.eU8nfY.mount: Deactivated successfully. Dec 13 01:48:33.291043 systemd[1]: Started cri-containerd-7670a9ae7bec903993269c09beac9865815fe3bddd5d1ace8708ebaa58b6fc0f.scope - libcontainer container 7670a9ae7bec903993269c09beac9865815fe3bddd5d1ace8708ebaa58b6fc0f. Dec 13 01:48:33.313343 containerd[1540]: time="2024-12-13T01:48:33.313188061Z" level=info msg="StartContainer for \"7670a9ae7bec903993269c09beac9865815fe3bddd5d1ace8708ebaa58b6fc0f\" returns successfully" Dec 13 01:48:33.315891 systemd[1]: cri-containerd-7670a9ae7bec903993269c09beac9865815fe3bddd5d1ace8708ebaa58b6fc0f.scope: Deactivated successfully. 
Dec 13 01:48:33.329759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7670a9ae7bec903993269c09beac9865815fe3bddd5d1ace8708ebaa58b6fc0f-rootfs.mount: Deactivated successfully. Dec 13 01:48:33.565257 containerd[1540]: time="2024-12-13T01:48:33.546544720Z" level=info msg="shim disconnected" id=7670a9ae7bec903993269c09beac9865815fe3bddd5d1ace8708ebaa58b6fc0f namespace=k8s.io Dec 13 01:48:33.565257 containerd[1540]: time="2024-12-13T01:48:33.564837807Z" level=warning msg="cleaning up after shim disconnected" id=7670a9ae7bec903993269c09beac9865815fe3bddd5d1ace8708ebaa58b6fc0f namespace=k8s.io Dec 13 01:48:33.565257 containerd[1540]: time="2024-12-13T01:48:33.564848660Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:48:33.572682 containerd[1540]: time="2024-12-13T01:48:33.572340422Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:48:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:48:33.623733 sshd[3749]: Invalid user ubuntu from 36.138.19.180 port 42692 Dec 13 01:48:33.820459 sshd[3749]: Connection closed by invalid user ubuntu 36.138.19.180 port 42692 [preauth] Dec 13 01:48:33.821374 systemd[1]: sshd@82-139.178.70.110:22-36.138.19.180:42692.service: Deactivated successfully. Dec 13 01:48:34.035781 systemd[1]: Started sshd@83-139.178.70.110:22-36.138.19.180:35132.service - OpenSSH per-connection server daemon (36.138.19.180:35132). 
Dec 13 01:48:34.188856 kubelet[3075]: E1213 01:48:34.188599 3075 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qmcx8" podUID="505a3ea8-bd57-41cf-a662-11b3cdb671b9" Dec 13 01:48:34.258758 containerd[1540]: time="2024-12-13T01:48:34.258708489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:48:34.852408 sshd[3842]: Invalid user ubuntu from 36.138.19.180 port 35132 Dec 13 01:48:35.052662 sshd[3842]: Connection closed by invalid user ubuntu 36.138.19.180 port 35132 [preauth] Dec 13 01:48:35.053808 systemd[1]: sshd@83-139.178.70.110:22-36.138.19.180:35132.service: Deactivated successfully. Dec 13 01:48:35.267385 systemd[1]: Started sshd@84-139.178.70.110:22-36.138.19.180:35144.service - OpenSSH per-connection server daemon (36.138.19.180:35144). Dec 13 01:48:36.125398 sshd[3847]: Invalid user ubuntu from 36.138.19.180 port 35144 Dec 13 01:48:36.188294 kubelet[3075]: E1213 01:48:36.188054 3075 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qmcx8" podUID="505a3ea8-bd57-41cf-a662-11b3cdb671b9" Dec 13 01:48:36.338651 sshd[3847]: Connection closed by invalid user ubuntu 36.138.19.180 port 35144 [preauth] Dec 13 01:48:36.339451 systemd[1]: sshd@84-139.178.70.110:22-36.138.19.180:35144.service: Deactivated successfully. Dec 13 01:48:36.516753 systemd[1]: Started sshd@85-139.178.70.110:22-36.138.19.180:35146.service - OpenSSH per-connection server daemon (36.138.19.180:35146). 
Dec 13 01:48:37.282412 sshd[3852]: Invalid user ubuntu from 36.138.19.180 port 35146 Dec 13 01:48:37.465948 sshd[3852]: Connection closed by invalid user ubuntu 36.138.19.180 port 35146 [preauth] Dec 13 01:48:37.466846 systemd[1]: sshd@85-139.178.70.110:22-36.138.19.180:35146.service: Deactivated successfully. Dec 13 01:48:37.668444 systemd[1]: Started sshd@86-139.178.70.110:22-36.138.19.180:35156.service - OpenSSH per-connection server daemon (36.138.19.180:35156). Dec 13 01:48:37.772575 containerd[1540]: time="2024-12-13T01:48:37.771947879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:37.773733 containerd[1540]: time="2024-12-13T01:48:37.773136564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 01:48:37.773826 containerd[1540]: time="2024-12-13T01:48:37.773486363Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:37.776403 containerd[1540]: time="2024-12-13T01:48:37.776362860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:37.777942 containerd[1540]: time="2024-12-13T01:48:37.777512968Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.518755396s" Dec 13 01:48:37.777942 containerd[1540]: time="2024-12-13T01:48:37.777536744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" 
returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 01:48:37.783884 containerd[1540]: time="2024-12-13T01:48:37.782941448Z" level=info msg="CreateContainer within sandbox \"b272403bb7d9abf30cd2412743b1e495ef09960e5e998bb6ab3e6e315263ca3f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:48:37.796241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2708053538.mount: Deactivated successfully. Dec 13 01:48:37.802071 containerd[1540]: time="2024-12-13T01:48:37.802044090Z" level=info msg="CreateContainer within sandbox \"b272403bb7d9abf30cd2412743b1e495ef09960e5e998bb6ab3e6e315263ca3f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9e147118e26ae8535f653027bb2ebebf5559a316d361f42996ac2c9f650434d5\"" Dec 13 01:48:37.817534 containerd[1540]: time="2024-12-13T01:48:37.817495048Z" level=info msg="StartContainer for \"9e147118e26ae8535f653027bb2ebebf5559a316d361f42996ac2c9f650434d5\"" Dec 13 01:48:37.853020 systemd[1]: Started cri-containerd-9e147118e26ae8535f653027bb2ebebf5559a316d361f42996ac2c9f650434d5.scope - libcontainer container 9e147118e26ae8535f653027bb2ebebf5559a316d361f42996ac2c9f650434d5. 
Dec 13 01:48:38.019729 containerd[1540]: time="2024-12-13T01:48:38.019656251Z" level=info msg="StartContainer for \"9e147118e26ae8535f653027bb2ebebf5559a316d361f42996ac2c9f650434d5\" returns successfully" Dec 13 01:48:38.188763 kubelet[3075]: E1213 01:48:38.188574 3075 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qmcx8" podUID="505a3ea8-bd57-41cf-a662-11b3cdb671b9" Dec 13 01:48:38.613847 sshd[3862]: Invalid user ubuntu from 36.138.19.180 port 35156 Dec 13 01:48:38.816132 sshd[3862]: Connection closed by invalid user ubuntu 36.138.19.180 port 35156 [preauth] Dec 13 01:48:38.817512 systemd[1]: sshd@86-139.178.70.110:22-36.138.19.180:35156.service: Deactivated successfully. Dec 13 01:48:38.996082 systemd[1]: Started sshd@87-139.178.70.110:22-36.138.19.180:35168.service - OpenSSH per-connection server daemon (36.138.19.180:35168). Dec 13 01:48:39.921053 sshd[3906]: Invalid user ubuntu from 36.138.19.180 port 35168 Dec 13 01:48:40.096983 sshd[3906]: Connection closed by invalid user ubuntu 36.138.19.180 port 35168 [preauth] Dec 13 01:48:40.097988 systemd[1]: sshd@87-139.178.70.110:22-36.138.19.180:35168.service: Deactivated successfully. Dec 13 01:48:40.189323 kubelet[3075]: E1213 01:48:40.189075 3075 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qmcx8" podUID="505a3ea8-bd57-41cf-a662-11b3cdb671b9" Dec 13 01:48:40.301460 systemd[1]: Started sshd@88-139.178.70.110:22-36.138.19.180:35182.service - OpenSSH per-connection server daemon (36.138.19.180:35182). 
Dec 13 01:48:40.364033 systemd[1]: cri-containerd-9e147118e26ae8535f653027bb2ebebf5559a316d361f42996ac2c9f650434d5.scope: Deactivated successfully. Dec 13 01:48:40.415162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e147118e26ae8535f653027bb2ebebf5559a316d361f42996ac2c9f650434d5-rootfs.mount: Deactivated successfully. Dec 13 01:48:40.455326 containerd[1540]: time="2024-12-13T01:48:40.455244543Z" level=info msg="shim disconnected" id=9e147118e26ae8535f653027bb2ebebf5559a316d361f42996ac2c9f650434d5 namespace=k8s.io Dec 13 01:48:40.455857 containerd[1540]: time="2024-12-13T01:48:40.455750647Z" level=warning msg="cleaning up after shim disconnected" id=9e147118e26ae8535f653027bb2ebebf5559a316d361f42996ac2c9f650434d5 namespace=k8s.io Dec 13 01:48:40.455857 containerd[1540]: time="2024-12-13T01:48:40.455763357Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:48:40.505862 kubelet[3075]: I1213 01:48:40.505783 3075 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:48:40.754661 kubelet[3075]: I1213 01:48:40.754464 3075 topology_manager.go:215] "Topology Admit Handler" podUID="b7419af9-8db7-4200-828e-4294ae89fbd9" podNamespace="kube-system" podName="coredns-76f75df574-jzxgp" Dec 13 01:48:40.830006 kubelet[3075]: I1213 01:48:40.829972 3075 topology_manager.go:215] "Topology Admit Handler" podUID="4d8029e0-2c95-494c-bb51-f3a7debfa6c1" podNamespace="kube-system" podName="coredns-76f75df574-4mlcx" Dec 13 01:48:40.830131 kubelet[3075]: I1213 01:48:40.830097 3075 topology_manager.go:215] "Topology Admit Handler" podUID="72a2a28b-0b80-4d0a-89fb-10506cac7c8e" podNamespace="calico-system" podName="calico-kube-controllers-54bc5f94b9-8mt2p" Dec 13 01:48:40.830307 kubelet[3075]: I1213 01:48:40.830158 3075 topology_manager.go:215] "Topology Admit Handler" podUID="c771d094-7c93-4bc5-90e6-c1ad822c0b38" podNamespace="calico-apiserver" podName="calico-apiserver-886bb9bdf-hnf79" Dec 13 01:48:40.830307 kubelet[3075]: 
I1213 01:48:40.830218 3075 topology_manager.go:215] "Topology Admit Handler" podUID="285c9a76-f344-4cf0-af98-33c38dd5f27a" podNamespace="calico-apiserver" podName="calico-apiserver-886bb9bdf-f88f5" Dec 13 01:48:40.887826 systemd[1]: Created slice kubepods-besteffort-podc771d094_7c93_4bc5_90e6_c1ad822c0b38.slice - libcontainer container kubepods-besteffort-podc771d094_7c93_4bc5_90e6_c1ad822c0b38.slice. Dec 13 01:48:40.893474 systemd[1]: Created slice kubepods-burstable-pod4d8029e0_2c95_494c_bb51_f3a7debfa6c1.slice - libcontainer container kubepods-burstable-pod4d8029e0_2c95_494c_bb51_f3a7debfa6c1.slice. Dec 13 01:48:40.898762 systemd[1]: Created slice kubepods-burstable-podb7419af9_8db7_4200_828e_4294ae89fbd9.slice - libcontainer container kubepods-burstable-podb7419af9_8db7_4200_828e_4294ae89fbd9.slice. Dec 13 01:48:40.904872 systemd[1]: Created slice kubepods-besteffort-pod72a2a28b_0b80_4d0a_89fb_10506cac7c8e.slice - libcontainer container kubepods-besteffort-pod72a2a28b_0b80_4d0a_89fb_10506cac7c8e.slice. Dec 13 01:48:40.909825 systemd[1]: Created slice kubepods-besteffort-pod285c9a76_f344_4cf0_af98_33c38dd5f27a.slice - libcontainer container kubepods-besteffort-pod285c9a76_f344_4cf0_af98_33c38dd5f27a.slice. 
Dec 13 01:48:40.947169 kubelet[3075]: I1213 01:48:40.947144 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72a2a28b-0b80-4d0a-89fb-10506cac7c8e-tigera-ca-bundle\") pod \"calico-kube-controllers-54bc5f94b9-8mt2p\" (UID: \"72a2a28b-0b80-4d0a-89fb-10506cac7c8e\") " pod="calico-system/calico-kube-controllers-54bc5f94b9-8mt2p" Dec 13 01:48:40.948028 kubelet[3075]: I1213 01:48:40.948013 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vql6g\" (UniqueName: \"kubernetes.io/projected/b7419af9-8db7-4200-828e-4294ae89fbd9-kube-api-access-vql6g\") pod \"coredns-76f75df574-jzxgp\" (UID: \"b7419af9-8db7-4200-828e-4294ae89fbd9\") " pod="kube-system/coredns-76f75df574-jzxgp" Dec 13 01:48:40.948078 kubelet[3075]: I1213 01:48:40.948040 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/285c9a76-f344-4cf0-af98-33c38dd5f27a-calico-apiserver-certs\") pod \"calico-apiserver-886bb9bdf-f88f5\" (UID: \"285c9a76-f344-4cf0-af98-33c38dd5f27a\") " pod="calico-apiserver/calico-apiserver-886bb9bdf-f88f5" Dec 13 01:48:40.948078 kubelet[3075]: I1213 01:48:40.948055 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7419af9-8db7-4200-828e-4294ae89fbd9-config-volume\") pod \"coredns-76f75df574-jzxgp\" (UID: \"b7419af9-8db7-4200-828e-4294ae89fbd9\") " pod="kube-system/coredns-76f75df574-jzxgp" Dec 13 01:48:40.948078 kubelet[3075]: I1213 01:48:40.948071 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d8029e0-2c95-494c-bb51-f3a7debfa6c1-config-volume\") pod \"coredns-76f75df574-4mlcx\" (UID: 
\"4d8029e0-2c95-494c-bb51-f3a7debfa6c1\") " pod="kube-system/coredns-76f75df574-4mlcx" Dec 13 01:48:40.948331 kubelet[3075]: I1213 01:48:40.948086 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx6kj\" (UniqueName: \"kubernetes.io/projected/c771d094-7c93-4bc5-90e6-c1ad822c0b38-kube-api-access-vx6kj\") pod \"calico-apiserver-886bb9bdf-hnf79\" (UID: \"c771d094-7c93-4bc5-90e6-c1ad822c0b38\") " pod="calico-apiserver/calico-apiserver-886bb9bdf-hnf79" Dec 13 01:48:40.948331 kubelet[3075]: I1213 01:48:40.948103 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qt85\" (UniqueName: \"kubernetes.io/projected/72a2a28b-0b80-4d0a-89fb-10506cac7c8e-kube-api-access-2qt85\") pod \"calico-kube-controllers-54bc5f94b9-8mt2p\" (UID: \"72a2a28b-0b80-4d0a-89fb-10506cac7c8e\") " pod="calico-system/calico-kube-controllers-54bc5f94b9-8mt2p" Dec 13 01:48:40.948331 kubelet[3075]: I1213 01:48:40.948116 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkwnn\" (UniqueName: \"kubernetes.io/projected/285c9a76-f344-4cf0-af98-33c38dd5f27a-kube-api-access-wkwnn\") pod \"calico-apiserver-886bb9bdf-f88f5\" (UID: \"285c9a76-f344-4cf0-af98-33c38dd5f27a\") " pod="calico-apiserver/calico-apiserver-886bb9bdf-f88f5" Dec 13 01:48:40.948331 kubelet[3075]: I1213 01:48:40.948127 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvhb4\" (UniqueName: \"kubernetes.io/projected/4d8029e0-2c95-494c-bb51-f3a7debfa6c1-kube-api-access-lvhb4\") pod \"coredns-76f75df574-4mlcx\" (UID: \"4d8029e0-2c95-494c-bb51-f3a7debfa6c1\") " pod="kube-system/coredns-76f75df574-4mlcx" Dec 13 01:48:40.948331 kubelet[3075]: I1213 01:48:40.948141 3075 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c771d094-7c93-4bc5-90e6-c1ad822c0b38-calico-apiserver-certs\") pod \"calico-apiserver-886bb9bdf-hnf79\" (UID: \"c771d094-7c93-4bc5-90e6-c1ad822c0b38\") " pod="calico-apiserver/calico-apiserver-886bb9bdf-hnf79" Dec 13 01:48:41.110499 sshd[3911]: Invalid user ubuntu from 36.138.19.180 port 35182 Dec 13 01:48:41.196853 containerd[1540]: time="2024-12-13T01:48:41.196739879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4mlcx,Uid:4d8029e0-2c95-494c-bb51-f3a7debfa6c1,Namespace:kube-system,Attempt:0,}" Dec 13 01:48:41.196853 containerd[1540]: time="2024-12-13T01:48:41.196795555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-886bb9bdf-hnf79,Uid:c771d094-7c93-4bc5-90e6-c1ad822c0b38,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:48:41.201379 containerd[1540]: time="2024-12-13T01:48:41.201332946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jzxgp,Uid:b7419af9-8db7-4200-828e-4294ae89fbd9,Namespace:kube-system,Attempt:0,}" Dec 13 01:48:41.210771 containerd[1540]: time="2024-12-13T01:48:41.210742725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54bc5f94b9-8mt2p,Uid:72a2a28b-0b80-4d0a-89fb-10506cac7c8e,Namespace:calico-system,Attempt:0,}" Dec 13 01:48:41.214524 containerd[1540]: time="2024-12-13T01:48:41.214493612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-886bb9bdf-f88f5,Uid:285c9a76-f344-4cf0-af98-33c38dd5f27a,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:48:41.308758 sshd[3911]: Connection closed by invalid user ubuntu 36.138.19.180 port 35182 [preauth] Dec 13 01:48:41.310656 systemd[1]: sshd@88-139.178.70.110:22-36.138.19.180:35182.service: Deactivated successfully. 
Dec 13 01:48:41.314512 containerd[1540]: time="2024-12-13T01:48:41.314338054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:48:41.436472 containerd[1540]: time="2024-12-13T01:48:41.434953353Z" level=error msg="Failed to destroy network for sandbox \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.436472 containerd[1540]: time="2024-12-13T01:48:41.435432637Z" level=error msg="encountered an error cleaning up failed sandbox \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.436472 containerd[1540]: time="2024-12-13T01:48:41.435473983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-886bb9bdf-f88f5,Uid:285c9a76-f344-4cf0-af98-33c38dd5f27a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.438960 containerd[1540]: time="2024-12-13T01:48:41.437933085Z" level=error msg="Failed to destroy network for sandbox \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.438419 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147-shm.mount: Deactivated successfully. Dec 13 01:48:41.440219 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a-shm.mount: Deactivated successfully. Dec 13 01:48:41.444198 containerd[1540]: time="2024-12-13T01:48:41.443986383Z" level=error msg="encountered an error cleaning up failed sandbox \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.444198 containerd[1540]: time="2024-12-13T01:48:41.444047066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4mlcx,Uid:4d8029e0-2c95-494c-bb51-f3a7debfa6c1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.451134 kubelet[3075]: E1213 01:48:41.450945 3075 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.452186 kubelet[3075]: E1213 01:48:41.451545 3075 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.452186 kubelet[3075]: E1213 01:48:41.451733 3075 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-886bb9bdf-f88f5" Dec 13 01:48:41.452186 kubelet[3075]: E1213 01:48:41.451757 3075 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-886bb9bdf-f88f5" Dec 13 01:48:41.452278 kubelet[3075]: E1213 01:48:41.451802 3075 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-886bb9bdf-f88f5_calico-apiserver(285c9a76-f344-4cf0-af98-33c38dd5f27a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-886bb9bdf-f88f5_calico-apiserver(285c9a76-f344-4cf0-af98-33c38dd5f27a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-886bb9bdf-f88f5" podUID="285c9a76-f344-4cf0-af98-33c38dd5f27a" Dec 13 01:48:41.453001 kubelet[3075]: E1213 01:48:41.452540 3075 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4mlcx" Dec 13 01:48:41.453001 kubelet[3075]: E1213 01:48:41.452720 3075 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4mlcx" Dec 13 01:48:41.453001 kubelet[3075]: E1213 01:48:41.452764 3075 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-4mlcx_kube-system(4d8029e0-2c95-494c-bb51-f3a7debfa6c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-4mlcx_kube-system(4d8029e0-2c95-494c-bb51-f3a7debfa6c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4mlcx" podUID="4d8029e0-2c95-494c-bb51-f3a7debfa6c1" Dec 13 01:48:41.468415 containerd[1540]: time="2024-12-13T01:48:41.466014154Z" level=error msg="Failed to destroy 
network for sandbox \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.468415 containerd[1540]: time="2024-12-13T01:48:41.466331735Z" level=error msg="encountered an error cleaning up failed sandbox \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.468415 containerd[1540]: time="2024-12-13T01:48:41.466379476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54bc5f94b9-8mt2p,Uid:72a2a28b-0b80-4d0a-89fb-10506cac7c8e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.468789 kubelet[3075]: E1213 01:48:41.466568 3075 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.468789 kubelet[3075]: E1213 01:48:41.466619 3075 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54bc5f94b9-8mt2p" Dec 13 01:48:41.468789 kubelet[3075]: E1213 01:48:41.466639 3075 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54bc5f94b9-8mt2p" Dec 13 01:48:41.468874 kubelet[3075]: E1213 01:48:41.466695 3075 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54bc5f94b9-8mt2p_calico-system(72a2a28b-0b80-4d0a-89fb-10506cac7c8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54bc5f94b9-8mt2p_calico-system(72a2a28b-0b80-4d0a-89fb-10506cac7c8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54bc5f94b9-8mt2p" podUID="72a2a28b-0b80-4d0a-89fb-10506cac7c8e" Dec 13 01:48:41.469732 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a-shm.mount: Deactivated successfully. 
Dec 13 01:48:41.470416 containerd[1540]: time="2024-12-13T01:48:41.470153262Z" level=error msg="Failed to destroy network for sandbox \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.470683 containerd[1540]: time="2024-12-13T01:48:41.470661962Z" level=error msg="encountered an error cleaning up failed sandbox \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.470716 containerd[1540]: time="2024-12-13T01:48:41.470703557Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jzxgp,Uid:b7419af9-8db7-4200-828e-4294ae89fbd9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.474363 kubelet[3075]: E1213 01:48:41.471879 3075 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.474363 kubelet[3075]: E1213 01:48:41.472113 3075 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-jzxgp" Dec 13 01:48:41.474363 kubelet[3075]: E1213 01:48:41.472133 3075 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-jzxgp" Dec 13 01:48:41.473315 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2-shm.mount: Deactivated successfully. 
Dec 13 01:48:41.475269 kubelet[3075]: E1213 01:48:41.472173 3075 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-jzxgp_kube-system(b7419af9-8db7-4200-828e-4294ae89fbd9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-jzxgp_kube-system(b7419af9-8db7-4200-828e-4294ae89fbd9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-jzxgp" podUID="b7419af9-8db7-4200-828e-4294ae89fbd9" Dec 13 01:48:41.477451 containerd[1540]: time="2024-12-13T01:48:41.477414697Z" level=error msg="Failed to destroy network for sandbox \"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.477705 containerd[1540]: time="2024-12-13T01:48:41.477682264Z" level=error msg="encountered an error cleaning up failed sandbox \"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.477750 containerd[1540]: time="2024-12-13T01:48:41.477726289Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-886bb9bdf-hnf79,Uid:c771d094-7c93-4bc5-90e6-c1ad822c0b38,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.478163 kubelet[3075]: E1213 01:48:41.477902 3075 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:41.478163 kubelet[3075]: E1213 01:48:41.477945 3075 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-886bb9bdf-hnf79" Dec 13 01:48:41.478163 kubelet[3075]: E1213 01:48:41.477959 3075 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-886bb9bdf-hnf79" Dec 13 01:48:41.478255 kubelet[3075]: E1213 01:48:41.477998 3075 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-886bb9bdf-hnf79_calico-apiserver(c771d094-7c93-4bc5-90e6-c1ad822c0b38)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-886bb9bdf-hnf79_calico-apiserver(c771d094-7c93-4bc5-90e6-c1ad822c0b38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-886bb9bdf-hnf79" podUID="c771d094-7c93-4bc5-90e6-c1ad822c0b38" Dec 13 01:48:41.519218 systemd[1]: Started sshd@89-139.178.70.110:22-36.138.19.180:35196.service - OpenSSH per-connection server daemon (36.138.19.180:35196). Dec 13 01:48:42.192253 systemd[1]: Created slice kubepods-besteffort-pod505a3ea8_bd57_41cf_a662_11b3cdb671b9.slice - libcontainer container kubepods-besteffort-pod505a3ea8_bd57_41cf_a662_11b3cdb671b9.slice. Dec 13 01:48:42.194031 containerd[1540]: time="2024-12-13T01:48:42.193914488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qmcx8,Uid:505a3ea8-bd57-41cf-a662-11b3cdb671b9,Namespace:calico-system,Attempt:0,}" Dec 13 01:48:42.235675 containerd[1540]: time="2024-12-13T01:48:42.235597138Z" level=error msg="Failed to destroy network for sandbox \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:42.236074 containerd[1540]: time="2024-12-13T01:48:42.235962390Z" level=error msg="encountered an error cleaning up failed sandbox \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 
01:48:42.236074 containerd[1540]: time="2024-12-13T01:48:42.236015034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qmcx8,Uid:505a3ea8-bd57-41cf-a662-11b3cdb671b9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:42.236252 kubelet[3075]: E1213 01:48:42.236222 3075 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:42.236290 kubelet[3075]: E1213 01:48:42.236272 3075 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qmcx8" Dec 13 01:48:42.236847 kubelet[3075]: E1213 01:48:42.236293 3075 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qmcx8" Dec 13 01:48:42.236847 
kubelet[3075]: E1213 01:48:42.236350 3075 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qmcx8_calico-system(505a3ea8-bd57-41cf-a662-11b3cdb671b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qmcx8_calico-system(505a3ea8-bd57-41cf-a662-11b3cdb671b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qmcx8" podUID="505a3ea8-bd57-41cf-a662-11b3cdb671b9" Dec 13 01:48:42.348516 kubelet[3075]: I1213 01:48:42.348479 3075 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Dec 13 01:48:42.350942 kubelet[3075]: I1213 01:48:42.350866 3075 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" Dec 13 01:48:42.354949 kubelet[3075]: I1213 01:48:42.354891 3075 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Dec 13 01:48:42.356755 kubelet[3075]: I1213 01:48:42.355658 3075 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Dec 13 01:48:42.356755 kubelet[3075]: I1213 01:48:42.356256 3075 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" Dec 13 01:48:42.356940 kubelet[3075]: I1213 01:48:42.356872 3075 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Dec 13 01:48:42.374606 containerd[1540]: time="2024-12-13T01:48:42.373930336Z" level=info msg="StopPodSandbox for \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\"" Dec 13 01:48:42.374606 containerd[1540]: time="2024-12-13T01:48:42.374456668Z" level=info msg="StopPodSandbox for \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\"" Dec 13 01:48:42.377413 containerd[1540]: time="2024-12-13T01:48:42.377397793Z" level=info msg="Ensure that sandbox dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a in task-service has been cleanup successfully" Dec 13 01:48:42.377502 containerd[1540]: time="2024-12-13T01:48:42.377479973Z" level=info msg="StopPodSandbox for \"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\"" Dec 13 01:48:42.377616 containerd[1540]: time="2024-12-13T01:48:42.377599507Z" level=info msg="Ensure that sandbox 946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e in task-service has been cleanup successfully" Dec 13 01:48:42.378653 containerd[1540]: time="2024-12-13T01:48:42.378631414Z" level=info msg="StopPodSandbox for \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\"" Dec 13 01:48:42.378951 containerd[1540]: time="2024-12-13T01:48:42.377396603Z" level=info msg="Ensure that sandbox 13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2 in task-service has been cleanup successfully" Dec 13 01:48:42.379009 containerd[1540]: time="2024-12-13T01:48:42.378914140Z" level=info msg="StopPodSandbox for \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\"" Dec 13 01:48:42.379093 containerd[1540]: time="2024-12-13T01:48:42.379073994Z" level=info msg="Ensure that sandbox 444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a in task-service has been cleanup successfully" Dec 13 01:48:42.379169 containerd[1540]: time="2024-12-13T01:48:42.379159480Z" level=info 
msg="Ensure that sandbox 7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f in task-service has been cleanup successfully" Dec 13 01:48:42.379840 containerd[1540]: time="2024-12-13T01:48:42.377457428Z" level=info msg="StopPodSandbox for \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\"" Dec 13 01:48:42.379988 containerd[1540]: time="2024-12-13T01:48:42.379977206Z" level=info msg="Ensure that sandbox 5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147 in task-service has been cleanup successfully" Dec 13 01:48:42.415153 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e-shm.mount: Deactivated successfully. Dec 13 01:48:42.464539 containerd[1540]: time="2024-12-13T01:48:42.464431447Z" level=error msg="StopPodSandbox for \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\" failed" error="failed to destroy network for sandbox \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:42.466939 kubelet[3075]: E1213 01:48:42.465471 3075 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Dec 13 01:48:42.468030 containerd[1540]: time="2024-12-13T01:48:42.468000684Z" level=error msg="StopPodSandbox for \"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\" failed" error="failed to destroy network for sandbox 
\"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:42.469320 kubelet[3075]: E1213 01:48:42.469290 3075 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Dec 13 01:48:42.470510 containerd[1540]: time="2024-12-13T01:48:42.470476492Z" level=error msg="StopPodSandbox for \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\" failed" error="failed to destroy network for sandbox \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:42.470809 kubelet[3075]: E1213 01:48:42.470781 3075 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Dec 13 01:48:42.475428 containerd[1540]: time="2024-12-13T01:48:42.475397856Z" level=error msg="StopPodSandbox for \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\" 
failed" error="failed to destroy network for sandbox \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:42.475690 kubelet[3075]: E1213 01:48:42.475657 3075 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Dec 13 01:48:42.478450 containerd[1540]: time="2024-12-13T01:48:42.478425002Z" level=error msg="StopPodSandbox for \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\" failed" error="failed to destroy network for sandbox \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:42.478675 kubelet[3075]: E1213 01:48:42.478664 3075 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" Dec 13 01:48:42.478812 containerd[1540]: time="2024-12-13T01:48:42.478797669Z" level=error msg="StopPodSandbox for 
\"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\" failed" error="failed to destroy network for sandbox \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:42.478908 kubelet[3075]: E1213 01:48:42.478901 3075 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" Dec 13 01:48:42.484629 sshd[4096]: Invalid user ubuntu from 36.138.19.180 port 35196 Dec 13 01:48:42.486154 kubelet[3075]: E1213 01:48:42.484833 3075 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a"} Dec 13 01:48:42.486154 kubelet[3075]: E1213 01:48:42.484887 3075 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"72a2a28b-0b80-4d0a-89fb-10506cac7c8e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:48:42.486154 kubelet[3075]: E1213 01:48:42.484911 3075 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"72a2a28b-0b80-4d0a-89fb-10506cac7c8e\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54bc5f94b9-8mt2p" podUID="72a2a28b-0b80-4d0a-89fb-10506cac7c8e" Dec 13 01:48:42.486154 kubelet[3075]: E1213 01:48:42.484838 3075 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a"} Dec 13 01:48:42.486390 kubelet[3075]: E1213 01:48:42.484980 3075 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4d8029e0-2c95-494c-bb51-f3a7debfa6c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:48:42.486390 kubelet[3075]: E1213 01:48:42.484994 3075 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d8029e0-2c95-494c-bb51-f3a7debfa6c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4mlcx" podUID="4d8029e0-2c95-494c-bb51-f3a7debfa6c1" Dec 13 01:48:42.486390 kubelet[3075]: E1213 01:48:42.484846 3075 kuberuntime_manager.go:1381] "Failed to 
stop sandbox" podSandboxID={"Type":"containerd","ID":"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e"} Dec 13 01:48:42.486390 kubelet[3075]: E1213 01:48:42.485032 3075 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c771d094-7c93-4bc5-90e6-c1ad822c0b38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:48:42.486618 kubelet[3075]: E1213 01:48:42.485048 3075 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c771d094-7c93-4bc5-90e6-c1ad822c0b38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-886bb9bdf-hnf79" podUID="c771d094-7c93-4bc5-90e6-c1ad822c0b38" Dec 13 01:48:42.486618 kubelet[3075]: E1213 01:48:42.484850 3075 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147"} Dec 13 01:48:42.486618 kubelet[3075]: E1213 01:48:42.485069 3075 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"285c9a76-f344-4cf0-af98-33c38dd5f27a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:48:42.486618 kubelet[3075]: E1213 01:48:42.485083 3075 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"285c9a76-f344-4cf0-af98-33c38dd5f27a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-886bb9bdf-f88f5" podUID="285c9a76-f344-4cf0-af98-33c38dd5f27a" Dec 13 01:48:42.486841 kubelet[3075]: E1213 01:48:42.484855 3075 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f"} Dec 13 01:48:42.486841 kubelet[3075]: E1213 01:48:42.485102 3075 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"505a3ea8-bd57-41cf-a662-11b3cdb671b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:48:42.486841 kubelet[3075]: E1213 01:48:42.485133 3075 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"505a3ea8-bd57-41cf-a662-11b3cdb671b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qmcx8" podUID="505a3ea8-bd57-41cf-a662-11b3cdb671b9" Dec 13 01:48:42.486841 kubelet[3075]: E1213 01:48:42.485142 3075 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2"} Dec 13 01:48:42.486841 kubelet[3075]: E1213 01:48:42.485158 3075 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b7419af9-8db7-4200-828e-4294ae89fbd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:48:42.487008 kubelet[3075]: E1213 01:48:42.485181 3075 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7419af9-8db7-4200-828e-4294ae89fbd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-jzxgp" podUID="b7419af9-8db7-4200-828e-4294ae89fbd9" Dec 13 01:48:42.683958 sshd[4096]: Connection closed by invalid user ubuntu 36.138.19.180 port 35196 [preauth] Dec 13 01:48:42.688618 systemd[1]: sshd@89-139.178.70.110:22-36.138.19.180:35196.service: Deactivated successfully. 
Dec 13 01:48:42.901343 systemd[1]: Started sshd@90-139.178.70.110:22-36.138.19.180:35198.service - OpenSSH per-connection server daemon (36.138.19.180:35198). Dec 13 01:48:44.026980 sshd[4242]: Invalid user ubuntu from 36.138.19.180 port 35198 Dec 13 01:48:44.229415 sshd[4242]: Connection closed by invalid user ubuntu 36.138.19.180 port 35198 [preauth] Dec 13 01:48:44.231242 systemd[1]: sshd@90-139.178.70.110:22-36.138.19.180:35198.service: Deactivated successfully. Dec 13 01:48:44.432289 systemd[1]: Started sshd@91-139.178.70.110:22-36.138.19.180:44560.service - OpenSSH per-connection server daemon (36.138.19.180:44560). Dec 13 01:48:45.446177 sshd[4251]: Invalid user ubuntu from 36.138.19.180 port 44560 Dec 13 01:48:45.643629 sshd[4251]: Connection closed by invalid user ubuntu 36.138.19.180 port 44560 [preauth] Dec 13 01:48:45.645513 systemd[1]: sshd@91-139.178.70.110:22-36.138.19.180:44560.service: Deactivated successfully. Dec 13 01:48:45.857651 systemd[1]: Started sshd@92-139.178.70.110:22-36.138.19.180:44562.service - OpenSSH per-connection server daemon (36.138.19.180:44562). Dec 13 01:48:46.472779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1889946588.mount: Deactivated successfully. 
Dec 13 01:48:46.756603 containerd[1540]: time="2024-12-13T01:48:46.756502436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 01:48:46.756603 containerd[1540]: time="2024-12-13T01:48:46.756569201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:46.790935 containerd[1540]: time="2024-12-13T01:48:46.790036980Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:46.806399 containerd[1540]: time="2024-12-13T01:48:46.806371069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:46.808400 containerd[1540]: time="2024-12-13T01:48:46.808371151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.492478886s" Dec 13 01:48:46.808474 containerd[1540]: time="2024-12-13T01:48:46.808461180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 01:48:46.838568 containerd[1540]: time="2024-12-13T01:48:46.838536003Z" level=info msg="CreateContainer within sandbox \"b272403bb7d9abf30cd2412743b1e495ef09960e5e998bb6ab3e6e315263ca3f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:48:47.059555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3908905536.mount: 
Deactivated successfully. Dec 13 01:48:47.091970 sshd[4256]: Invalid user ubuntu from 36.138.19.180 port 44562 Dec 13 01:48:47.104635 containerd[1540]: time="2024-12-13T01:48:47.104526737Z" level=info msg="CreateContainer within sandbox \"b272403bb7d9abf30cd2412743b1e495ef09960e5e998bb6ab3e6e315263ca3f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fd959aaf948484800bc55fd14221123d8b9966b8a47497cee47ffde9104010a7\"" Dec 13 01:48:47.109257 containerd[1540]: time="2024-12-13T01:48:47.109242414Z" level=info msg="StartContainer for \"fd959aaf948484800bc55fd14221123d8b9966b8a47497cee47ffde9104010a7\"" Dec 13 01:48:47.221034 systemd[1]: Started cri-containerd-fd959aaf948484800bc55fd14221123d8b9966b8a47497cee47ffde9104010a7.scope - libcontainer container fd959aaf948484800bc55fd14221123d8b9966b8a47497cee47ffde9104010a7. Dec 13 01:48:47.247950 containerd[1540]: time="2024-12-13T01:48:47.247061134Z" level=info msg="StartContainer for \"fd959aaf948484800bc55fd14221123d8b9966b8a47497cee47ffde9104010a7\" returns successfully" Dec 13 01:48:47.299874 sshd[4256]: Connection closed by invalid user ubuntu 36.138.19.180 port 44562 [preauth] Dec 13 01:48:47.301597 systemd[1]: sshd@92-139.178.70.110:22-36.138.19.180:44562.service: Deactivated successfully. Dec 13 01:48:47.483799 systemd[1]: Started sshd@93-139.178.70.110:22-36.138.19.180:44576.service - OpenSSH per-connection server daemon (36.138.19.180:44576). Dec 13 01:48:47.615339 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:48:47.635522 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 01:48:48.194997 sshd[4302]: Invalid user ubuntu from 36.138.19.180 port 44576 Dec 13 01:48:48.366464 sshd[4302]: Connection closed by invalid user ubuntu 36.138.19.180 port 44576 [preauth] Dec 13 01:48:48.367553 systemd[1]: sshd@93-139.178.70.110:22-36.138.19.180:44576.service: Deactivated successfully. 
Dec 13 01:48:48.587251 systemd[1]: Started sshd@94-139.178.70.110:22-36.138.19.180:44588.service - OpenSSH per-connection server daemon (36.138.19.180:44588). Dec 13 01:48:49.243980 kernel: bpftool[4500]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:48:49.499405 systemd-networkd[1244]: vxlan.calico: Link UP Dec 13 01:48:49.499410 systemd-networkd[1244]: vxlan.calico: Gained carrier Dec 13 01:48:49.518893 sshd[4376]: Invalid user ubuntu from 36.138.19.180 port 44588 Dec 13 01:48:49.731533 sshd[4376]: Connection closed by invalid user ubuntu 36.138.19.180 port 44588 [preauth] Dec 13 01:48:49.734220 systemd[1]: sshd@94-139.178.70.110:22-36.138.19.180:44588.service: Deactivated successfully. Dec 13 01:48:49.913696 systemd[1]: Started sshd@95-139.178.70.110:22-36.138.19.180:44590.service - OpenSSH per-connection server daemon (36.138.19.180:44590). Dec 13 01:48:50.640773 sshd[4584]: Invalid user ubuntu from 36.138.19.180 port 44590 Dec 13 01:48:50.819660 sshd[4584]: Connection closed by invalid user ubuntu 36.138.19.180 port 44590 [preauth] Dec 13 01:48:50.821158 systemd[1]: sshd@95-139.178.70.110:22-36.138.19.180:44590.service: Deactivated successfully. Dec 13 01:48:50.993596 systemd[1]: Started sshd@96-139.178.70.110:22-36.138.19.180:44596.service - OpenSSH per-connection server daemon (36.138.19.180:44596). Dec 13 01:48:51.367102 systemd-networkd[1244]: vxlan.calico: Gained IPv6LL Dec 13 01:48:51.719115 sshd[4592]: Invalid user ubuntu from 36.138.19.180 port 44596 Dec 13 01:48:51.889540 sshd[4592]: Connection closed by invalid user ubuntu 36.138.19.180 port 44596 [preauth] Dec 13 01:48:51.891025 systemd[1]: sshd@96-139.178.70.110:22-36.138.19.180:44596.service: Deactivated successfully. Dec 13 01:48:52.134742 systemd[1]: Started sshd@97-139.178.70.110:22-36.138.19.180:44608.service - OpenSSH per-connection server daemon (36.138.19.180:44608). 
Dec 13 01:48:52.849266 sshd[4600]: Invalid user debian from 36.138.19.180 port 44608 Dec 13 01:48:53.026636 sshd[4600]: Connection closed by invalid user debian 36.138.19.180 port 44608 [preauth] Dec 13 01:48:53.028207 systemd[1]: sshd@97-139.178.70.110:22-36.138.19.180:44608.service: Deactivated successfully. Dec 13 01:48:53.212792 systemd[1]: Started sshd@98-139.178.70.110:22-36.138.19.180:44618.service - OpenSSH per-connection server daemon (36.138.19.180:44618). Dec 13 01:48:53.927646 sshd[4605]: Invalid user debian from 36.138.19.180 port 44618 Dec 13 01:48:54.103065 sshd[4605]: Connection closed by invalid user debian 36.138.19.180 port 44618 [preauth] Dec 13 01:48:54.104145 systemd[1]: sshd@98-139.178.70.110:22-36.138.19.180:44618.service: Deactivated successfully. Dec 13 01:48:54.190968 containerd[1540]: time="2024-12-13T01:48:54.190223507Z" level=info msg="StopPodSandbox for \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\"" Dec 13 01:48:54.190968 containerd[1540]: time="2024-12-13T01:48:54.190332842Z" level=info msg="StopPodSandbox for \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\"" Dec 13 01:48:54.250739 kubelet[3075]: I1213 01:48:54.250629 3075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-psf7m" podStartSLOduration=8.672500068 podStartE2EDuration="25.245203874s" podCreationTimestamp="2024-12-13 01:48:29 +0000 UTC" firstStartedPulling="2024-12-13 01:48:30.235974699 +0000 UTC m=+22.133863925" lastFinishedPulling="2024-12-13 01:48:46.808678501 +0000 UTC m=+38.706567731" observedRunningTime="2024-12-13 01:48:47.490146857 +0000 UTC m=+39.388036096" watchObservedRunningTime="2024-12-13 01:48:54.245203874 +0000 UTC m=+46.143093103" Dec 13 01:48:54.298489 systemd[1]: Started sshd@99-139.178.70.110:22-36.138.19.180:53086.service - OpenSSH per-connection server daemon (36.138.19.180:53086). 
Dec 13 01:48:54.462445 containerd[1540]: 2024-12-13 01:48:54.243 [INFO][4638] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Dec 13 01:48:54.462445 containerd[1540]: 2024-12-13 01:48:54.243 [INFO][4638] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" iface="eth0" netns="/var/run/netns/cni-99718a48-bcee-e4f1-39a7-c9a6d35f325d" Dec 13 01:48:54.462445 containerd[1540]: 2024-12-13 01:48:54.244 [INFO][4638] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" iface="eth0" netns="/var/run/netns/cni-99718a48-bcee-e4f1-39a7-c9a6d35f325d" Dec 13 01:48:54.462445 containerd[1540]: 2024-12-13 01:48:54.246 [INFO][4638] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" iface="eth0" netns="/var/run/netns/cni-99718a48-bcee-e4f1-39a7-c9a6d35f325d" Dec 13 01:48:54.462445 containerd[1540]: 2024-12-13 01:48:54.246 [INFO][4638] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Dec 13 01:48:54.462445 containerd[1540]: 2024-12-13 01:48:54.246 [INFO][4638] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Dec 13 01:48:54.462445 containerd[1540]: 2024-12-13 01:48:54.448 [INFO][4649] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" HandleID="k8s-pod-network.444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Workload="localhost-k8s-coredns--76f75df574--4mlcx-eth0" Dec 13 01:48:54.462445 containerd[1540]: 2024-12-13 01:48:54.449 [INFO][4649] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:54.462445 containerd[1540]: 2024-12-13 01:48:54.450 [INFO][4649] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:54.462445 containerd[1540]: 2024-12-13 01:48:54.457 [WARNING][4649] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" HandleID="k8s-pod-network.444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Workload="localhost-k8s-coredns--76f75df574--4mlcx-eth0" Dec 13 01:48:54.462445 containerd[1540]: 2024-12-13 01:48:54.457 [INFO][4649] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" HandleID="k8s-pod-network.444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Workload="localhost-k8s-coredns--76f75df574--4mlcx-eth0" Dec 13 01:48:54.462445 containerd[1540]: 2024-12-13 01:48:54.458 [INFO][4649] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:48:54.462445 containerd[1540]: 2024-12-13 01:48:54.461 [INFO][4638] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Dec 13 01:48:54.465469 systemd[1]: run-netns-cni\x2d99718a48\x2dbcee\x2de4f1\x2d39a7\x2dc9a6d35f325d.mount: Deactivated successfully. 
Dec 13 01:48:54.467993 containerd[1540]: time="2024-12-13T01:48:54.467957924Z" level=info msg="TearDown network for sandbox \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\" successfully" Dec 13 01:48:54.467993 containerd[1540]: time="2024-12-13T01:48:54.467979319Z" level=info msg="StopPodSandbox for \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\" returns successfully" Dec 13 01:48:54.468956 containerd[1540]: time="2024-12-13T01:48:54.468724339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4mlcx,Uid:4d8029e0-2c95-494c-bb51-f3a7debfa6c1,Namespace:kube-system,Attempt:1,}" Dec 13 01:48:54.469756 containerd[1540]: 2024-12-13 01:48:54.245 [INFO][4633] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Dec 13 01:48:54.469756 containerd[1540]: 2024-12-13 01:48:54.245 [INFO][4633] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" iface="eth0" netns="/var/run/netns/cni-d9b7dace-ea54-0a1b-e870-c420c1f7ab4b" Dec 13 01:48:54.469756 containerd[1540]: 2024-12-13 01:48:54.245 [INFO][4633] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" iface="eth0" netns="/var/run/netns/cni-d9b7dace-ea54-0a1b-e870-c420c1f7ab4b" Dec 13 01:48:54.469756 containerd[1540]: 2024-12-13 01:48:54.247 [INFO][4633] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" iface="eth0" netns="/var/run/netns/cni-d9b7dace-ea54-0a1b-e870-c420c1f7ab4b" Dec 13 01:48:54.469756 containerd[1540]: 2024-12-13 01:48:54.247 [INFO][4633] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Dec 13 01:48:54.469756 containerd[1540]: 2024-12-13 01:48:54.247 [INFO][4633] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Dec 13 01:48:54.469756 containerd[1540]: 2024-12-13 01:48:54.448 [INFO][4650] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" HandleID="k8s-pod-network.dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Workload="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0" Dec 13 01:48:54.469756 containerd[1540]: 2024-12-13 01:48:54.449 [INFO][4650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:54.469756 containerd[1540]: 2024-12-13 01:48:54.458 [INFO][4650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:54.469756 containerd[1540]: 2024-12-13 01:48:54.464 [WARNING][4650] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" HandleID="k8s-pod-network.dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Workload="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0" Dec 13 01:48:54.469756 containerd[1540]: 2024-12-13 01:48:54.464 [INFO][4650] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" HandleID="k8s-pod-network.dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Workload="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0" Dec 13 01:48:54.469756 containerd[1540]: 2024-12-13 01:48:54.466 [INFO][4650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:48:54.469756 containerd[1540]: 2024-12-13 01:48:54.468 [INFO][4633] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Dec 13 01:48:54.470975 containerd[1540]: time="2024-12-13T01:48:54.470624733Z" level=info msg="TearDown network for sandbox \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\" successfully" Dec 13 01:48:54.470975 containerd[1540]: time="2024-12-13T01:48:54.470637042Z" level=info msg="StopPodSandbox for \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\" returns successfully" Dec 13 01:48:54.470975 containerd[1540]: time="2024-12-13T01:48:54.470929761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54bc5f94b9-8mt2p,Uid:72a2a28b-0b80-4d0a-89fb-10506cac7c8e,Namespace:calico-system,Attempt:1,}" Dec 13 01:48:54.474187 systemd[1]: run-netns-cni\x2dd9b7dace\x2dea54\x2d0a1b\x2de870\x2dc420c1f7ab4b.mount: Deactivated successfully. 
Dec 13 01:48:54.584152 systemd-networkd[1244]: cali8971e558873: Link UP Dec 13 01:48:54.584611 systemd-networkd[1244]: cali8971e558873: Gained carrier Dec 13 01:48:54.602856 containerd[1540]: 2024-12-13 01:48:54.526 [INFO][4674] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--4mlcx-eth0 coredns-76f75df574- kube-system 4d8029e0-2c95-494c-bb51-f3a7debfa6c1 743 0 2024-12-13 01:48:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-4mlcx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8971e558873 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb" Namespace="kube-system" Pod="coredns-76f75df574-4mlcx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4mlcx-" Dec 13 01:48:54.602856 containerd[1540]: 2024-12-13 01:48:54.526 [INFO][4674] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb" Namespace="kube-system" Pod="coredns-76f75df574-4mlcx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4mlcx-eth0" Dec 13 01:48:54.602856 containerd[1540]: 2024-12-13 01:48:54.550 [INFO][4687] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb" HandleID="k8s-pod-network.be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb" Workload="localhost-k8s-coredns--76f75df574--4mlcx-eth0" Dec 13 01:48:54.602856 containerd[1540]: 2024-12-13 01:48:54.558 [INFO][4687] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb" 
HandleID="k8s-pod-network.be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb" Workload="localhost-k8s-coredns--76f75df574--4mlcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050e30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-4mlcx", "timestamp":"2024-12-13 01:48:54.550557614 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:48:54.602856 containerd[1540]: 2024-12-13 01:48:54.558 [INFO][4687] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:54.602856 containerd[1540]: 2024-12-13 01:48:54.558 [INFO][4687] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:54.602856 containerd[1540]: 2024-12-13 01:48:54.558 [INFO][4687] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:48:54.602856 containerd[1540]: 2024-12-13 01:48:54.559 [INFO][4687] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb" host="localhost" Dec 13 01:48:54.602856 containerd[1540]: 2024-12-13 01:48:54.565 [INFO][4687] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:48:54.602856 containerd[1540]: 2024-12-13 01:48:54.567 [INFO][4687] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:48:54.602856 containerd[1540]: 2024-12-13 01:48:54.568 [INFO][4687] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:54.602856 containerd[1540]: 2024-12-13 01:48:54.569 [INFO][4687] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:54.602856 containerd[1540]: 2024-12-13 01:48:54.569 
[INFO][4687] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb" host="localhost" Dec 13 01:48:54.602856 containerd[1540]: 2024-12-13 01:48:54.569 [INFO][4687] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb Dec 13 01:48:54.602856 containerd[1540]: 2024-12-13 01:48:54.572 [INFO][4687] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb" host="localhost" Dec 13 01:48:54.602856 containerd[1540]: 2024-12-13 01:48:54.577 [INFO][4687] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb" host="localhost" Dec 13 01:48:54.602856 containerd[1540]: 2024-12-13 01:48:54.577 [INFO][4687] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb" host="localhost" Dec 13 01:48:54.602856 containerd[1540]: 2024-12-13 01:48:54.577 [INFO][4687] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:48:54.602856 containerd[1540]: 2024-12-13 01:48:54.577 [INFO][4687] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb" HandleID="k8s-pod-network.be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb" Workload="localhost-k8s-coredns--76f75df574--4mlcx-eth0" Dec 13 01:48:54.606286 containerd[1540]: 2024-12-13 01:48:54.579 [INFO][4674] cni-plugin/k8s.go 386: Populated endpoint ContainerID="be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb" Namespace="kube-system" Pod="coredns-76f75df574-4mlcx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4mlcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--4mlcx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4d8029e0-2c95-494c-bb51-f3a7debfa6c1", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-4mlcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8971e558873", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:54.606286 containerd[1540]: 2024-12-13 01:48:54.579 [INFO][4674] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb" Namespace="kube-system" Pod="coredns-76f75df574-4mlcx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4mlcx-eth0" Dec 13 01:48:54.606286 containerd[1540]: 2024-12-13 01:48:54.579 [INFO][4674] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8971e558873 ContainerID="be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb" Namespace="kube-system" Pod="coredns-76f75df574-4mlcx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4mlcx-eth0" Dec 13 01:48:54.606286 containerd[1540]: 2024-12-13 01:48:54.586 [INFO][4674] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb" Namespace="kube-system" Pod="coredns-76f75df574-4mlcx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4mlcx-eth0" Dec 13 01:48:54.606286 containerd[1540]: 2024-12-13 01:48:54.586 [INFO][4674] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb" Namespace="kube-system" Pod="coredns-76f75df574-4mlcx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4mlcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--4mlcx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4d8029e0-2c95-494c-bb51-f3a7debfa6c1", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb", Pod:"coredns-76f75df574-4mlcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8971e558873", MAC:"5a:cb:09:9a:07:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:54.606286 containerd[1540]: 2024-12-13 01:48:54.598 [INFO][4674] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb" Namespace="kube-system" 
Pod="coredns-76f75df574-4mlcx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4mlcx-eth0" Dec 13 01:48:54.610092 systemd-networkd[1244]: calic15a849e04a: Link UP Dec 13 01:48:54.612028 systemd-networkd[1244]: calic15a849e04a: Gained carrier Dec 13 01:48:54.625833 containerd[1540]: 2024-12-13 01:48:54.524 [INFO][4665] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0 calico-kube-controllers-54bc5f94b9- calico-system 72a2a28b-0b80-4d0a-89fb-10506cac7c8e 744 0 2024-12-13 01:48:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54bc5f94b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-54bc5f94b9-8mt2p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic15a849e04a [] []}} ContainerID="26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d" Namespace="calico-system" Pod="calico-kube-controllers-54bc5f94b9-8mt2p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-" Dec 13 01:48:54.625833 containerd[1540]: 2024-12-13 01:48:54.525 [INFO][4665] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d" Namespace="calico-system" Pod="calico-kube-controllers-54bc5f94b9-8mt2p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0" Dec 13 01:48:54.625833 containerd[1540]: 2024-12-13 01:48:54.552 [INFO][4688] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d" HandleID="k8s-pod-network.26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d" 
Workload="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0" Dec 13 01:48:54.625833 containerd[1540]: 2024-12-13 01:48:54.558 [INFO][4688] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d" HandleID="k8s-pod-network.26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d" Workload="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004bc920), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-54bc5f94b9-8mt2p", "timestamp":"2024-12-13 01:48:54.552622203 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:48:54.625833 containerd[1540]: 2024-12-13 01:48:54.558 [INFO][4688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:54.625833 containerd[1540]: 2024-12-13 01:48:54.577 [INFO][4688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:48:54.625833 containerd[1540]: 2024-12-13 01:48:54.578 [INFO][4688] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:48:54.625833 containerd[1540]: 2024-12-13 01:48:54.579 [INFO][4688] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d" host="localhost" Dec 13 01:48:54.625833 containerd[1540]: 2024-12-13 01:48:54.582 [INFO][4688] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:48:54.625833 containerd[1540]: 2024-12-13 01:48:54.586 [INFO][4688] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:48:54.625833 containerd[1540]: 2024-12-13 01:48:54.590 [INFO][4688] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:54.625833 containerd[1540]: 2024-12-13 01:48:54.593 [INFO][4688] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:54.625833 containerd[1540]: 2024-12-13 01:48:54.593 [INFO][4688] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d" host="localhost" Dec 13 01:48:54.625833 containerd[1540]: 2024-12-13 01:48:54.595 [INFO][4688] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d Dec 13 01:48:54.625833 containerd[1540]: 2024-12-13 01:48:54.600 [INFO][4688] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d" host="localhost" Dec 13 01:48:54.625833 containerd[1540]: 2024-12-13 01:48:54.605 [INFO][4688] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d" host="localhost" Dec 13 01:48:54.625833 containerd[1540]: 2024-12-13 01:48:54.605 [INFO][4688] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d" host="localhost" Dec 13 01:48:54.625833 containerd[1540]: 2024-12-13 01:48:54.605 [INFO][4688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:48:54.625833 containerd[1540]: 2024-12-13 01:48:54.605 [INFO][4688] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d" HandleID="k8s-pod-network.26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d" Workload="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0" Dec 13 01:48:54.626325 containerd[1540]: 2024-12-13 01:48:54.608 [INFO][4665] cni-plugin/k8s.go 386: Populated endpoint ContainerID="26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d" Namespace="calico-system" Pod="calico-kube-controllers-54bc5f94b9-8mt2p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0", GenerateName:"calico-kube-controllers-54bc5f94b9-", Namespace:"calico-system", SelfLink:"", UID:"72a2a28b-0b80-4d0a-89fb-10506cac7c8e", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54bc5f94b9", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-54bc5f94b9-8mt2p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic15a849e04a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:54.626325 containerd[1540]: 2024-12-13 01:48:54.608 [INFO][4665] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d" Namespace="calico-system" Pod="calico-kube-controllers-54bc5f94b9-8mt2p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0" Dec 13 01:48:54.626325 containerd[1540]: 2024-12-13 01:48:54.608 [INFO][4665] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic15a849e04a ContainerID="26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d" Namespace="calico-system" Pod="calico-kube-controllers-54bc5f94b9-8mt2p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0" Dec 13 01:48:54.626325 containerd[1540]: 2024-12-13 01:48:54.612 [INFO][4665] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d" Namespace="calico-system" Pod="calico-kube-controllers-54bc5f94b9-8mt2p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0" Dec 13 01:48:54.626325 containerd[1540]: 2024-12-13 01:48:54.614 [INFO][4665] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d" Namespace="calico-system" Pod="calico-kube-controllers-54bc5f94b9-8mt2p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0", GenerateName:"calico-kube-controllers-54bc5f94b9-", Namespace:"calico-system", SelfLink:"", UID:"72a2a28b-0b80-4d0a-89fb-10506cac7c8e", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54bc5f94b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d", Pod:"calico-kube-controllers-54bc5f94b9-8mt2p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic15a849e04a", MAC:"2a:c1:3d:51:35:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:54.626325 containerd[1540]: 2024-12-13 01:48:54.623 [INFO][4665] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d" Namespace="calico-system" Pod="calico-kube-controllers-54bc5f94b9-8mt2p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0" Dec 13 01:48:54.650105 containerd[1540]: time="2024-12-13T01:48:54.650006922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:54.650431 containerd[1540]: time="2024-12-13T01:48:54.650406331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:54.650634 containerd[1540]: time="2024-12-13T01:48:54.650571522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:54.650815 containerd[1540]: time="2024-12-13T01:48:54.650794031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:54.652450 containerd[1540]: time="2024-12-13T01:48:54.651882789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:54.652450 containerd[1540]: time="2024-12-13T01:48:54.652372370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:54.652450 containerd[1540]: time="2024-12-13T01:48:54.652381465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:54.652450 containerd[1540]: time="2024-12-13T01:48:54.652429553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:54.670015 systemd[1]: Started cri-containerd-26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d.scope - libcontainer container 26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d. Dec 13 01:48:54.673177 systemd[1]: Started cri-containerd-be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb.scope - libcontainer container be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb. Dec 13 01:48:54.687497 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:48:54.688159 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:48:54.724260 containerd[1540]: time="2024-12-13T01:48:54.724182638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54bc5f94b9-8mt2p,Uid:72a2a28b-0b80-4d0a-89fb-10506cac7c8e,Namespace:calico-system,Attempt:1,} returns sandbox id \"26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d\"" Dec 13 01:48:54.728333 containerd[1540]: time="2024-12-13T01:48:54.728180085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:48:54.738463 containerd[1540]: time="2024-12-13T01:48:54.738407449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4mlcx,Uid:4d8029e0-2c95-494c-bb51-f3a7debfa6c1,Namespace:kube-system,Attempt:1,} returns sandbox id \"be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb\"" Dec 13 01:48:54.747404 containerd[1540]: time="2024-12-13T01:48:54.747090830Z" level=info msg="CreateContainer within sandbox \"be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:48:54.777436 containerd[1540]: time="2024-12-13T01:48:54.777409043Z" level=info msg="CreateContainer within sandbox 
\"be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6583f2d12c2a8b4577672661e556c7a235b547a91b04eecba9e2646528d31254\"" Dec 13 01:48:54.777898 containerd[1540]: time="2024-12-13T01:48:54.777751562Z" level=info msg="StartContainer for \"6583f2d12c2a8b4577672661e556c7a235b547a91b04eecba9e2646528d31254\"" Dec 13 01:48:54.795007 systemd[1]: Started cri-containerd-6583f2d12c2a8b4577672661e556c7a235b547a91b04eecba9e2646528d31254.scope - libcontainer container 6583f2d12c2a8b4577672661e556c7a235b547a91b04eecba9e2646528d31254. Dec 13 01:48:54.817584 containerd[1540]: time="2024-12-13T01:48:54.817356413Z" level=info msg="StartContainer for \"6583f2d12c2a8b4577672661e556c7a235b547a91b04eecba9e2646528d31254\" returns successfully" Dec 13 01:48:55.093589 sshd[4658]: Invalid user debian from 36.138.19.180 port 53086 Dec 13 01:48:55.288145 sshd[4658]: Connection closed by invalid user debian 36.138.19.180 port 53086 [preauth] Dec 13 01:48:55.288814 systemd[1]: sshd@99-139.178.70.110:22-36.138.19.180:53086.service: Deactivated successfully. Dec 13 01:48:55.404390 kubelet[3075]: I1213 01:48:55.404226 3075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-4mlcx" podStartSLOduration=35.404197966 podStartE2EDuration="35.404197966s" podCreationTimestamp="2024-12-13 01:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:48:55.398063056 +0000 UTC m=+47.295952289" watchObservedRunningTime="2024-12-13 01:48:55.404197966 +0000 UTC m=+47.302087196" Dec 13 01:48:55.501478 systemd[1]: Started sshd@100-139.178.70.110:22-36.138.19.180:53096.service - OpenSSH per-connection server daemon (36.138.19.180:53096). 
Dec 13 01:48:55.910055 systemd-networkd[1244]: cali8971e558873: Gained IPv6LL Dec 13 01:48:56.166168 systemd-networkd[1244]: calic15a849e04a: Gained IPv6LL Dec 13 01:48:56.189862 containerd[1540]: time="2024-12-13T01:48:56.189836208Z" level=info msg="StopPodSandbox for \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\"" Dec 13 01:48:56.192605 containerd[1540]: time="2024-12-13T01:48:56.192313557Z" level=info msg="StopPodSandbox for \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\"" Dec 13 01:48:56.302373 containerd[1540]: 2024-12-13 01:48:56.248 [INFO][4884] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" Dec 13 01:48:56.302373 containerd[1540]: 2024-12-13 01:48:56.248 [INFO][4884] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" iface="eth0" netns="/var/run/netns/cni-9325e5ba-61b3-a2d8-ca17-f8bf43d8f86b" Dec 13 01:48:56.302373 containerd[1540]: 2024-12-13 01:48:56.248 [INFO][4884] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" iface="eth0" netns="/var/run/netns/cni-9325e5ba-61b3-a2d8-ca17-f8bf43d8f86b" Dec 13 01:48:56.302373 containerd[1540]: 2024-12-13 01:48:56.248 [INFO][4884] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" iface="eth0" netns="/var/run/netns/cni-9325e5ba-61b3-a2d8-ca17-f8bf43d8f86b" Dec 13 01:48:56.302373 containerd[1540]: 2024-12-13 01:48:56.248 [INFO][4884] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" Dec 13 01:48:56.302373 containerd[1540]: 2024-12-13 01:48:56.248 [INFO][4884] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" Dec 13 01:48:56.302373 containerd[1540]: 2024-12-13 01:48:56.287 [INFO][4904] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" HandleID="k8s-pod-network.13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" Workload="localhost-k8s-coredns--76f75df574--jzxgp-eth0" Dec 13 01:48:56.302373 containerd[1540]: 2024-12-13 01:48:56.287 [INFO][4904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:56.302373 containerd[1540]: 2024-12-13 01:48:56.287 [INFO][4904] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:56.302373 containerd[1540]: 2024-12-13 01:48:56.294 [WARNING][4904] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" HandleID="k8s-pod-network.13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" Workload="localhost-k8s-coredns--76f75df574--jzxgp-eth0" Dec 13 01:48:56.302373 containerd[1540]: 2024-12-13 01:48:56.294 [INFO][4904] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" HandleID="k8s-pod-network.13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" Workload="localhost-k8s-coredns--76f75df574--jzxgp-eth0" Dec 13 01:48:56.302373 containerd[1540]: 2024-12-13 01:48:56.296 [INFO][4904] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:48:56.302373 containerd[1540]: 2024-12-13 01:48:56.299 [INFO][4884] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" Dec 13 01:48:56.310606 containerd[1540]: time="2024-12-13T01:48:56.302688053Z" level=info msg="TearDown network for sandbox \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\" successfully" Dec 13 01:48:56.310606 containerd[1540]: time="2024-12-13T01:48:56.302706749Z" level=info msg="StopPodSandbox for \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\" returns successfully" Dec 13 01:48:56.310606 containerd[1540]: time="2024-12-13T01:48:56.304562402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jzxgp,Uid:b7419af9-8db7-4200-828e-4294ae89fbd9,Namespace:kube-system,Attempt:1,}" Dec 13 01:48:56.310606 containerd[1540]: 2024-12-13 01:48:56.241 [INFO][4892] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Dec 13 01:48:56.310606 containerd[1540]: 2024-12-13 01:48:56.241 [INFO][4892] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" iface="eth0" netns="/var/run/netns/cni-32a166e9-3eb6-3403-3799-059a4028a9b6" Dec 13 01:48:56.310606 containerd[1540]: 2024-12-13 01:48:56.241 [INFO][4892] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" iface="eth0" netns="/var/run/netns/cni-32a166e9-3eb6-3403-3799-059a4028a9b6" Dec 13 01:48:56.310606 containerd[1540]: 2024-12-13 01:48:56.242 [INFO][4892] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" iface="eth0" netns="/var/run/netns/cni-32a166e9-3eb6-3403-3799-059a4028a9b6" Dec 13 01:48:56.310606 containerd[1540]: 2024-12-13 01:48:56.242 [INFO][4892] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Dec 13 01:48:56.310606 containerd[1540]: 2024-12-13 01:48:56.242 [INFO][4892] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Dec 13 01:48:56.310606 containerd[1540]: 2024-12-13 01:48:56.292 [INFO][4900] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" HandleID="k8s-pod-network.5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Workload="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0" Dec 13 01:48:56.310606 containerd[1540]: 2024-12-13 01:48:56.293 [INFO][4900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:56.310606 containerd[1540]: 2024-12-13 01:48:56.296 [INFO][4900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:56.310606 containerd[1540]: 2024-12-13 01:48:56.304 [WARNING][4900] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" HandleID="k8s-pod-network.5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Workload="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0" Dec 13 01:48:56.310606 containerd[1540]: 2024-12-13 01:48:56.304 [INFO][4900] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" HandleID="k8s-pod-network.5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Workload="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0" Dec 13 01:48:56.310606 containerd[1540]: 2024-12-13 01:48:56.305 [INFO][4900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:48:56.310606 containerd[1540]: 2024-12-13 01:48:56.307 [INFO][4892] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Dec 13 01:48:56.310606 containerd[1540]: time="2024-12-13T01:48:56.309443998Z" level=info msg="TearDown network for sandbox \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\" successfully" Dec 13 01:48:56.310606 containerd[1540]: time="2024-12-13T01:48:56.309459985Z" level=info msg="StopPodSandbox for \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\" returns successfully" Dec 13 01:48:56.310606 containerd[1540]: time="2024-12-13T01:48:56.310036772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-886bb9bdf-f88f5,Uid:285c9a76-f344-4cf0-af98-33c38dd5f27a,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:48:56.306288 systemd[1]: run-netns-cni\x2d9325e5ba\x2d61b3\x2da2d8\x2dca17\x2df8bf43d8f86b.mount: Deactivated successfully. Dec 13 01:48:56.312253 systemd[1]: run-netns-cni\x2d32a166e9\x2d3eb6\x2d3403\x2d3799\x2d059a4028a9b6.mount: Deactivated successfully. 
Dec 13 01:48:56.376264 sshd[4856]: Invalid user debian from 36.138.19.180 port 53096 Dec 13 01:48:56.561653 systemd-networkd[1244]: cali2e3d526ff82: Link UP Dec 13 01:48:56.562106 systemd-networkd[1244]: cali2e3d526ff82: Gained carrier Dec 13 01:48:56.577636 containerd[1540]: 2024-12-13 01:48:56.480 [INFO][4923] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--jzxgp-eth0 coredns-76f75df574- kube-system b7419af9-8db7-4200-828e-4294ae89fbd9 770 0 2024-12-13 01:48:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-jzxgp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2e3d526ff82 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614" Namespace="kube-system" Pod="coredns-76f75df574-jzxgp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jzxgp-" Dec 13 01:48:56.577636 containerd[1540]: 2024-12-13 01:48:56.480 [INFO][4923] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614" Namespace="kube-system" Pod="coredns-76f75df574-jzxgp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jzxgp-eth0" Dec 13 01:48:56.577636 containerd[1540]: 2024-12-13 01:48:56.516 [INFO][4940] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614" HandleID="k8s-pod-network.d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614" Workload="localhost-k8s-coredns--76f75df574--jzxgp-eth0" Dec 13 01:48:56.577636 containerd[1540]: 2024-12-13 01:48:56.522 [INFO][4940] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614" HandleID="k8s-pod-network.d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614" Workload="localhost-k8s-coredns--76f75df574--jzxgp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000504c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-jzxgp", "timestamp":"2024-12-13 01:48:56.516088384 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:48:56.577636 containerd[1540]: 2024-12-13 01:48:56.522 [INFO][4940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:56.577636 containerd[1540]: 2024-12-13 01:48:56.522 [INFO][4940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:56.577636 containerd[1540]: 2024-12-13 01:48:56.522 [INFO][4940] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:48:56.577636 containerd[1540]: 2024-12-13 01:48:56.525 [INFO][4940] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614" host="localhost" Dec 13 01:48:56.577636 containerd[1540]: 2024-12-13 01:48:56.530 [INFO][4940] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:48:56.577636 containerd[1540]: 2024-12-13 01:48:56.536 [INFO][4940] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:48:56.577636 containerd[1540]: 2024-12-13 01:48:56.540 [INFO][4940] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:56.577636 containerd[1540]: 2024-12-13 01:48:56.541 [INFO][4940] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Dec 13 01:48:56.577636 containerd[1540]: 2024-12-13 01:48:56.541 [INFO][4940] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614" host="localhost" Dec 13 01:48:56.577636 containerd[1540]: 2024-12-13 01:48:56.542 [INFO][4940] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614 Dec 13 01:48:56.577636 containerd[1540]: 2024-12-13 01:48:56.546 [INFO][4940] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614" host="localhost" Dec 13 01:48:56.577636 containerd[1540]: 2024-12-13 01:48:56.555 [INFO][4940] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614" host="localhost" Dec 13 01:48:56.577636 containerd[1540]: 2024-12-13 01:48:56.555 [INFO][4940] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614" host="localhost" Dec 13 01:48:56.577636 containerd[1540]: 2024-12-13 01:48:56.555 [INFO][4940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:48:56.577636 containerd[1540]: 2024-12-13 01:48:56.555 [INFO][4940] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614" HandleID="k8s-pod-network.d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614" Workload="localhost-k8s-coredns--76f75df574--jzxgp-eth0" Dec 13 01:48:56.579618 containerd[1540]: 2024-12-13 01:48:56.558 [INFO][4923] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614" Namespace="kube-system" Pod="coredns-76f75df574-jzxgp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jzxgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--jzxgp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b7419af9-8db7-4200-828e-4294ae89fbd9", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-jzxgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2e3d526ff82", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:56.579618 containerd[1540]: 2024-12-13 01:48:56.558 [INFO][4923] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614" Namespace="kube-system" Pod="coredns-76f75df574-jzxgp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jzxgp-eth0" Dec 13 01:48:56.579618 containerd[1540]: 2024-12-13 01:48:56.558 [INFO][4923] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e3d526ff82 ContainerID="d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614" Namespace="kube-system" Pod="coredns-76f75df574-jzxgp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jzxgp-eth0" Dec 13 01:48:56.579618 containerd[1540]: 2024-12-13 01:48:56.562 [INFO][4923] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614" Namespace="kube-system" Pod="coredns-76f75df574-jzxgp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jzxgp-eth0" Dec 13 01:48:56.579618 containerd[1540]: 2024-12-13 01:48:56.564 [INFO][4923] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614" Namespace="kube-system" Pod="coredns-76f75df574-jzxgp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jzxgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--jzxgp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b7419af9-8db7-4200-828e-4294ae89fbd9", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614", Pod:"coredns-76f75df574-jzxgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2e3d526ff82", MAC:"fa:56:40:35:6c:6f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:56.579618 containerd[1540]: 2024-12-13 01:48:56.572 [INFO][4923] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614" Namespace="kube-system" 
Pod="coredns-76f75df574-jzxgp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jzxgp-eth0" Dec 13 01:48:56.589683 sshd[4856]: Connection closed by invalid user debian 36.138.19.180 port 53096 [preauth] Dec 13 01:48:56.591017 systemd[1]: sshd@100-139.178.70.110:22-36.138.19.180:53096.service: Deactivated successfully. Dec 13 01:48:56.602501 systemd-networkd[1244]: cali3d4a0bd41e1: Link UP Dec 13 01:48:56.603121 systemd-networkd[1244]: cali3d4a0bd41e1: Gained carrier Dec 13 01:48:56.617542 containerd[1540]: time="2024-12-13T01:48:56.616900777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:56.619070 containerd[1540]: time="2024-12-13T01:48:56.617550094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:56.619070 containerd[1540]: time="2024-12-13T01:48:56.617561055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:56.619070 containerd[1540]: 2024-12-13 01:48:56.479 [INFO][4912] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0 calico-apiserver-886bb9bdf- calico-apiserver 285c9a76-f344-4cf0-af98-33c38dd5f27a 769 0 2024-12-13 01:48:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:886bb9bdf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-886bb9bdf-f88f5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3d4a0bd41e1 [] []}} ContainerID="02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c" Namespace="calico-apiserver" Pod="calico-apiserver-886bb9bdf-f88f5" WorkloadEndpoint="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-" Dec 13 01:48:56.619070 containerd[1540]: 2024-12-13 01:48:56.479 [INFO][4912] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c" Namespace="calico-apiserver" Pod="calico-apiserver-886bb9bdf-f88f5" WorkloadEndpoint="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0" Dec 13 01:48:56.619070 containerd[1540]: 2024-12-13 01:48:56.534 [INFO][4944] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c" HandleID="k8s-pod-network.02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c" Workload="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0" Dec 13 01:48:56.619070 containerd[1540]: 2024-12-13 01:48:56.548 [INFO][4944] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c" 
HandleID="k8s-pod-network.02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c" Workload="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-886bb9bdf-f88f5", "timestamp":"2024-12-13 01:48:56.534932434 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:48:56.619070 containerd[1540]: 2024-12-13 01:48:56.548 [INFO][4944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:56.619070 containerd[1540]: 2024-12-13 01:48:56.555 [INFO][4944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:56.619070 containerd[1540]: 2024-12-13 01:48:56.555 [INFO][4944] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:48:56.619070 containerd[1540]: 2024-12-13 01:48:56.563 [INFO][4944] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c" host="localhost" Dec 13 01:48:56.619070 containerd[1540]: 2024-12-13 01:48:56.569 [INFO][4944] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:48:56.619070 containerd[1540]: 2024-12-13 01:48:56.573 [INFO][4944] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:48:56.619070 containerd[1540]: 2024-12-13 01:48:56.574 [INFO][4944] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:56.619070 containerd[1540]: 2024-12-13 01:48:56.581 [INFO][4944] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:56.619070 containerd[1540]: 
2024-12-13 01:48:56.581 [INFO][4944] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c" host="localhost" Dec 13 01:48:56.619070 containerd[1540]: 2024-12-13 01:48:56.583 [INFO][4944] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c Dec 13 01:48:56.619070 containerd[1540]: 2024-12-13 01:48:56.586 [INFO][4944] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c" host="localhost" Dec 13 01:48:56.619070 containerd[1540]: 2024-12-13 01:48:56.594 [INFO][4944] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c" host="localhost" Dec 13 01:48:56.619070 containerd[1540]: 2024-12-13 01:48:56.594 [INFO][4944] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c" host="localhost" Dec 13 01:48:56.619070 containerd[1540]: 2024-12-13 01:48:56.594 [INFO][4944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:48:56.619070 containerd[1540]: 2024-12-13 01:48:56.594 [INFO][4944] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c" HandleID="k8s-pod-network.02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c" Workload="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0" Dec 13 01:48:56.624956 containerd[1540]: 2024-12-13 01:48:56.598 [INFO][4912] cni-plugin/k8s.go 386: Populated endpoint ContainerID="02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c" Namespace="calico-apiserver" Pod="calico-apiserver-886bb9bdf-f88f5" WorkloadEndpoint="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0", GenerateName:"calico-apiserver-886bb9bdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"285c9a76-f344-4cf0-af98-33c38dd5f27a", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"886bb9bdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-886bb9bdf-f88f5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3d4a0bd41e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:56.624956 containerd[1540]: 2024-12-13 01:48:56.598 [INFO][4912] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c" Namespace="calico-apiserver" Pod="calico-apiserver-886bb9bdf-f88f5" WorkloadEndpoint="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0" Dec 13 01:48:56.624956 containerd[1540]: 2024-12-13 01:48:56.598 [INFO][4912] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d4a0bd41e1 ContainerID="02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c" Namespace="calico-apiserver" Pod="calico-apiserver-886bb9bdf-f88f5" WorkloadEndpoint="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0" Dec 13 01:48:56.624956 containerd[1540]: 2024-12-13 01:48:56.604 [INFO][4912] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c" Namespace="calico-apiserver" Pod="calico-apiserver-886bb9bdf-f88f5" WorkloadEndpoint="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0" Dec 13 01:48:56.624956 containerd[1540]: 2024-12-13 01:48:56.605 [INFO][4912] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c" Namespace="calico-apiserver" Pod="calico-apiserver-886bb9bdf-f88f5" WorkloadEndpoint="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0", GenerateName:"calico-apiserver-886bb9bdf-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"285c9a76-f344-4cf0-af98-33c38dd5f27a", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"886bb9bdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c", Pod:"calico-apiserver-886bb9bdf-f88f5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3d4a0bd41e1", MAC:"5a:26:7b:c8:45:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:56.624956 containerd[1540]: 2024-12-13 01:48:56.614 [INFO][4912] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c" Namespace="calico-apiserver" Pod="calico-apiserver-886bb9bdf-f88f5" WorkloadEndpoint="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0" Dec 13 01:48:56.624956 containerd[1540]: time="2024-12-13T01:48:56.617710078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:56.646035 systemd[1]: Started cri-containerd-d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614.scope - libcontainer container d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614. Dec 13 01:48:56.655455 containerd[1540]: time="2024-12-13T01:48:56.654530089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:56.655675 containerd[1540]: time="2024-12-13T01:48:56.655433197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:56.655675 containerd[1540]: time="2024-12-13T01:48:56.655569298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:56.655753 containerd[1540]: time="2024-12-13T01:48:56.655696515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:56.660090 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:48:56.682164 systemd[1]: Started cri-containerd-02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c.scope - libcontainer container 02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c. 
Dec 13 01:48:56.696672 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:48:56.698161 containerd[1540]: time="2024-12-13T01:48:56.698048829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jzxgp,Uid:b7419af9-8db7-4200-828e-4294ae89fbd9,Namespace:kube-system,Attempt:1,} returns sandbox id \"d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614\"" Dec 13 01:48:56.700421 containerd[1540]: time="2024-12-13T01:48:56.700339732Z" level=info msg="CreateContainer within sandbox \"d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:48:56.724623 containerd[1540]: time="2024-12-13T01:48:56.724508416Z" level=info msg="CreateContainer within sandbox \"d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3bbf93debdcc52afd181c2184d59d5ab65d1939f97e94102995a0615de8c1ae1\"" Dec 13 01:48:56.724933 containerd[1540]: time="2024-12-13T01:48:56.724905271Z" level=info msg="StartContainer for \"3bbf93debdcc52afd181c2184d59d5ab65d1939f97e94102995a0615de8c1ae1\"" Dec 13 01:48:56.735364 containerd[1540]: time="2024-12-13T01:48:56.735273957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-886bb9bdf-f88f5,Uid:285c9a76-f344-4cf0-af98-33c38dd5f27a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c\"" Dec 13 01:48:56.752189 containerd[1540]: time="2024-12-13T01:48:56.752161179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:56.753786 containerd[1540]: time="2024-12-13T01:48:56.753670165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, 
bytes read=34141192" Dec 13 01:48:56.754331 containerd[1540]: time="2024-12-13T01:48:56.754197714Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:56.755878 containerd[1540]: time="2024-12-13T01:48:56.755860356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:56.756291 containerd[1540]: time="2024-12-13T01:48:56.756274491Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.028071093s" Dec 13 01:48:56.756314 containerd[1540]: time="2024-12-13T01:48:56.756293765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 01:48:56.756770 containerd[1540]: time="2024-12-13T01:48:56.756682506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:48:56.758495 systemd[1]: Started cri-containerd-3bbf93debdcc52afd181c2184d59d5ab65d1939f97e94102995a0615de8c1ae1.scope - libcontainer container 3bbf93debdcc52afd181c2184d59d5ab65d1939f97e94102995a0615de8c1ae1. 
Dec 13 01:48:56.773265 containerd[1540]: time="2024-12-13T01:48:56.773232154Z" level=info msg="CreateContainer within sandbox \"26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:48:56.780854 containerd[1540]: time="2024-12-13T01:48:56.780774052Z" level=info msg="StartContainer for \"3bbf93debdcc52afd181c2184d59d5ab65d1939f97e94102995a0615de8c1ae1\" returns successfully" Dec 13 01:48:56.782131 containerd[1540]: time="2024-12-13T01:48:56.782080847Z" level=info msg="CreateContainer within sandbox \"26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5e541ef34abcf78988759c9c1cd67c34a4e25ccd466c99c12a821deef885db6a\"" Dec 13 01:48:56.782568 containerd[1540]: time="2024-12-13T01:48:56.782545260Z" level=info msg="StartContainer for \"5e541ef34abcf78988759c9c1cd67c34a4e25ccd466c99c12a821deef885db6a\"" Dec 13 01:48:56.808162 systemd[1]: Started cri-containerd-5e541ef34abcf78988759c9c1cd67c34a4e25ccd466c99c12a821deef885db6a.scope - libcontainer container 5e541ef34abcf78988759c9c1cd67c34a4e25ccd466c99c12a821deef885db6a. Dec 13 01:48:56.810992 systemd[1]: Started sshd@101-139.178.70.110:22-36.138.19.180:53106.service - OpenSSH per-connection server daemon (36.138.19.180:53106). 
Dec 13 01:48:56.860889 containerd[1540]: time="2024-12-13T01:48:56.860861850Z" level=info msg="StartContainer for \"5e541ef34abcf78988759c9c1cd67c34a4e25ccd466c99c12a821deef885db6a\" returns successfully" Dec 13 01:48:57.189689 containerd[1540]: time="2024-12-13T01:48:57.189009194Z" level=info msg="StopPodSandbox for \"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\"" Dec 13 01:48:57.189689 containerd[1540]: time="2024-12-13T01:48:57.189043314Z" level=info msg="StopPodSandbox for \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\"" Dec 13 01:48:57.261184 containerd[1540]: 2024-12-13 01:48:57.226 [INFO][5169] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Dec 13 01:48:57.261184 containerd[1540]: 2024-12-13 01:48:57.226 [INFO][5169] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" iface="eth0" netns="/var/run/netns/cni-56025b75-9baa-10b1-3ff7-ee75bbbaf09e" Dec 13 01:48:57.261184 containerd[1540]: 2024-12-13 01:48:57.226 [INFO][5169] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" iface="eth0" netns="/var/run/netns/cni-56025b75-9baa-10b1-3ff7-ee75bbbaf09e" Dec 13 01:48:57.261184 containerd[1540]: 2024-12-13 01:48:57.227 [INFO][5169] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" iface="eth0" netns="/var/run/netns/cni-56025b75-9baa-10b1-3ff7-ee75bbbaf09e" Dec 13 01:48:57.261184 containerd[1540]: 2024-12-13 01:48:57.227 [INFO][5169] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Dec 13 01:48:57.261184 containerd[1540]: 2024-12-13 01:48:57.227 [INFO][5169] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Dec 13 01:48:57.261184 containerd[1540]: 2024-12-13 01:48:57.250 [INFO][5181] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" HandleID="k8s-pod-network.946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Workload="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0" Dec 13 01:48:57.261184 containerd[1540]: 2024-12-13 01:48:57.250 [INFO][5181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:57.261184 containerd[1540]: 2024-12-13 01:48:57.250 [INFO][5181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:57.261184 containerd[1540]: 2024-12-13 01:48:57.255 [WARNING][5181] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" HandleID="k8s-pod-network.946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Workload="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0" Dec 13 01:48:57.261184 containerd[1540]: 2024-12-13 01:48:57.255 [INFO][5181] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" HandleID="k8s-pod-network.946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Workload="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0" Dec 13 01:48:57.261184 containerd[1540]: 2024-12-13 01:48:57.259 [INFO][5181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:48:57.261184 containerd[1540]: 2024-12-13 01:48:57.260 [INFO][5169] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Dec 13 01:48:57.262256 containerd[1540]: time="2024-12-13T01:48:57.261310254Z" level=info msg="TearDown network for sandbox \"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\" successfully" Dec 13 01:48:57.262256 containerd[1540]: time="2024-12-13T01:48:57.261325771Z" level=info msg="StopPodSandbox for \"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\" returns successfully" Dec 13 01:48:57.262256 containerd[1540]: time="2024-12-13T01:48:57.261761039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-886bb9bdf-hnf79,Uid:c771d094-7c93-4bc5-90e6-c1ad822c0b38,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:48:57.277622 containerd[1540]: 2024-12-13 01:48:57.242 [INFO][5170] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" Dec 13 01:48:57.277622 containerd[1540]: 2024-12-13 01:48:57.242 [INFO][5170] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" iface="eth0" netns="/var/run/netns/cni-5488b6f7-c600-873a-6150-ddb293c0bdef" Dec 13 01:48:57.277622 containerd[1540]: 2024-12-13 01:48:57.243 [INFO][5170] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" iface="eth0" netns="/var/run/netns/cni-5488b6f7-c600-873a-6150-ddb293c0bdef" Dec 13 01:48:57.277622 containerd[1540]: 2024-12-13 01:48:57.244 [INFO][5170] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" iface="eth0" netns="/var/run/netns/cni-5488b6f7-c600-873a-6150-ddb293c0bdef" Dec 13 01:48:57.277622 containerd[1540]: 2024-12-13 01:48:57.244 [INFO][5170] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" Dec 13 01:48:57.277622 containerd[1540]: 2024-12-13 01:48:57.244 [INFO][5170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" Dec 13 01:48:57.277622 containerd[1540]: 2024-12-13 01:48:57.266 [INFO][5186] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" HandleID="k8s-pod-network.7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" Workload="localhost-k8s-csi--node--driver--qmcx8-eth0" Dec 13 01:48:57.277622 containerd[1540]: 2024-12-13 01:48:57.266 [INFO][5186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:57.277622 containerd[1540]: 2024-12-13 01:48:57.266 [INFO][5186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:57.277622 containerd[1540]: 2024-12-13 01:48:57.272 [WARNING][5186] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" HandleID="k8s-pod-network.7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" Workload="localhost-k8s-csi--node--driver--qmcx8-eth0" Dec 13 01:48:57.277622 containerd[1540]: 2024-12-13 01:48:57.272 [INFO][5186] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" HandleID="k8s-pod-network.7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" Workload="localhost-k8s-csi--node--driver--qmcx8-eth0" Dec 13 01:48:57.277622 containerd[1540]: 2024-12-13 01:48:57.275 [INFO][5186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:48:57.277622 containerd[1540]: 2024-12-13 01:48:57.276 [INFO][5170] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" Dec 13 01:48:57.282315 containerd[1540]: time="2024-12-13T01:48:57.278112513Z" level=info msg="TearDown network for sandbox \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\" successfully" Dec 13 01:48:57.282315 containerd[1540]: time="2024-12-13T01:48:57.278130337Z" level=info msg="StopPodSandbox for \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\" returns successfully" Dec 13 01:48:57.282315 containerd[1540]: time="2024-12-13T01:48:57.278677221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qmcx8,Uid:505a3ea8-bd57-41cf-a662-11b3cdb671b9,Namespace:calico-system,Attempt:1,}" Dec 13 01:48:57.355831 systemd-networkd[1244]: calic1c30c7ccd7: Link UP Dec 13 01:48:57.356447 systemd-networkd[1244]: calic1c30c7ccd7: Gained carrier Dec 13 01:48:57.370555 containerd[1540]: 2024-12-13 01:48:57.304 [INFO][5194] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0 
calico-apiserver-886bb9bdf- calico-apiserver c771d094-7c93-4bc5-90e6-c1ad822c0b38 790 0 2024-12-13 01:48:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:886bb9bdf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-886bb9bdf-hnf79 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic1c30c7ccd7 [] []}} ContainerID="5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919" Namespace="calico-apiserver" Pod="calico-apiserver-886bb9bdf-hnf79" WorkloadEndpoint="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-" Dec 13 01:48:57.370555 containerd[1540]: 2024-12-13 01:48:57.304 [INFO][5194] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919" Namespace="calico-apiserver" Pod="calico-apiserver-886bb9bdf-hnf79" WorkloadEndpoint="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0" Dec 13 01:48:57.370555 containerd[1540]: 2024-12-13 01:48:57.326 [INFO][5216] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919" HandleID="k8s-pod-network.5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919" Workload="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0" Dec 13 01:48:57.370555 containerd[1540]: 2024-12-13 01:48:57.336 [INFO][5216] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919" HandleID="k8s-pod-network.5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919" Workload="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b5f0), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"localhost", "pod":"calico-apiserver-886bb9bdf-hnf79", "timestamp":"2024-12-13 01:48:57.326344139 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:48:57.370555 containerd[1540]: 2024-12-13 01:48:57.336 [INFO][5216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:57.370555 containerd[1540]: 2024-12-13 01:48:57.336 [INFO][5216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:57.370555 containerd[1540]: 2024-12-13 01:48:57.336 [INFO][5216] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:48:57.370555 containerd[1540]: 2024-12-13 01:48:57.337 [INFO][5216] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919" host="localhost" Dec 13 01:48:57.370555 containerd[1540]: 2024-12-13 01:48:57.339 [INFO][5216] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:48:57.370555 containerd[1540]: 2024-12-13 01:48:57.341 [INFO][5216] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:48:57.370555 containerd[1540]: 2024-12-13 01:48:57.342 [INFO][5216] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:57.370555 containerd[1540]: 2024-12-13 01:48:57.345 [INFO][5216] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:57.370555 containerd[1540]: 2024-12-13 01:48:57.345 [INFO][5216] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919" host="localhost" Dec 13 01:48:57.370555 containerd[1540]: 2024-12-13 01:48:57.346 
[INFO][5216] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919 Dec 13 01:48:57.370555 containerd[1540]: 2024-12-13 01:48:57.348 [INFO][5216] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919" host="localhost" Dec 13 01:48:57.370555 containerd[1540]: 2024-12-13 01:48:57.351 [INFO][5216] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919" host="localhost" Dec 13 01:48:57.370555 containerd[1540]: 2024-12-13 01:48:57.351 [INFO][5216] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919" host="localhost" Dec 13 01:48:57.370555 containerd[1540]: 2024-12-13 01:48:57.351 [INFO][5216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:48:57.370555 containerd[1540]: 2024-12-13 01:48:57.351 [INFO][5216] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919" HandleID="k8s-pod-network.5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919" Workload="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0" Dec 13 01:48:57.372934 containerd[1540]: 2024-12-13 01:48:57.353 [INFO][5194] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919" Namespace="calico-apiserver" Pod="calico-apiserver-886bb9bdf-hnf79" WorkloadEndpoint="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0", GenerateName:"calico-apiserver-886bb9bdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"c771d094-7c93-4bc5-90e6-c1ad822c0b38", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"886bb9bdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-886bb9bdf-hnf79", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1c30c7ccd7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:57.372934 containerd[1540]: 2024-12-13 01:48:57.353 [INFO][5194] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919" Namespace="calico-apiserver" Pod="calico-apiserver-886bb9bdf-hnf79" WorkloadEndpoint="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0" Dec 13 01:48:57.372934 containerd[1540]: 2024-12-13 01:48:57.353 [INFO][5194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1c30c7ccd7 ContainerID="5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919" Namespace="calico-apiserver" Pod="calico-apiserver-886bb9bdf-hnf79" WorkloadEndpoint="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0" Dec 13 01:48:57.372934 containerd[1540]: 2024-12-13 01:48:57.356 [INFO][5194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919" Namespace="calico-apiserver" Pod="calico-apiserver-886bb9bdf-hnf79" WorkloadEndpoint="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0" Dec 13 01:48:57.372934 containerd[1540]: 2024-12-13 01:48:57.357 [INFO][5194] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919" Namespace="calico-apiserver" Pod="calico-apiserver-886bb9bdf-hnf79" WorkloadEndpoint="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0", GenerateName:"calico-apiserver-886bb9bdf-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"c771d094-7c93-4bc5-90e6-c1ad822c0b38", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"886bb9bdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919", Pod:"calico-apiserver-886bb9bdf-hnf79", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1c30c7ccd7", MAC:"86:cd:92:85:1b:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:57.372934 containerd[1540]: 2024-12-13 01:48:57.366 [INFO][5194] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919" Namespace="calico-apiserver" Pod="calico-apiserver-886bb9bdf-hnf79" WorkloadEndpoint="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0" Dec 13 01:48:57.389285 systemd-networkd[1244]: calid06af89efa8: Link UP Dec 13 01:48:57.389633 systemd-networkd[1244]: calid06af89efa8: Gained carrier Dec 13 01:48:57.395278 containerd[1540]: time="2024-12-13T01:48:57.392064698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:57.395278 containerd[1540]: time="2024-12-13T01:48:57.392139490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:57.395278 containerd[1540]: time="2024-12-13T01:48:57.392448036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:57.395278 containerd[1540]: time="2024-12-13T01:48:57.392545019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:57.418555 containerd[1540]: 2024-12-13 01:48:57.315 [INFO][5204] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qmcx8-eth0 csi-node-driver- calico-system 505a3ea8-bd57-41cf-a662-11b3cdb671b9 791 0 2024-12-13 01:48:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qmcx8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid06af89efa8 [] []}} ContainerID="e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0" Namespace="calico-system" Pod="csi-node-driver-qmcx8" WorkloadEndpoint="localhost-k8s-csi--node--driver--qmcx8-" Dec 13 01:48:57.418555 containerd[1540]: 2024-12-13 01:48:57.315 [INFO][5204] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0" Namespace="calico-system" Pod="csi-node-driver-qmcx8" WorkloadEndpoint="localhost-k8s-csi--node--driver--qmcx8-eth0" Dec 13 01:48:57.418555 
containerd[1540]: 2024-12-13 01:48:57.342 [INFO][5220] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0" HandleID="k8s-pod-network.e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0" Workload="localhost-k8s-csi--node--driver--qmcx8-eth0" Dec 13 01:48:57.418555 containerd[1540]: 2024-12-13 01:48:57.347 [INFO][5220] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0" HandleID="k8s-pod-network.e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0" Workload="localhost-k8s-csi--node--driver--qmcx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qmcx8", "timestamp":"2024-12-13 01:48:57.342347441 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:48:57.418555 containerd[1540]: 2024-12-13 01:48:57.347 [INFO][5220] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:57.418555 containerd[1540]: 2024-12-13 01:48:57.351 [INFO][5220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:48:57.418555 containerd[1540]: 2024-12-13 01:48:57.351 [INFO][5220] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:48:57.418555 containerd[1540]: 2024-12-13 01:48:57.353 [INFO][5220] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0" host="localhost" Dec 13 01:48:57.418555 containerd[1540]: 2024-12-13 01:48:57.363 [INFO][5220] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:48:57.418555 containerd[1540]: 2024-12-13 01:48:57.371 [INFO][5220] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:48:57.418555 containerd[1540]: 2024-12-13 01:48:57.373 [INFO][5220] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:57.418555 containerd[1540]: 2024-12-13 01:48:57.375 [INFO][5220] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:57.418555 containerd[1540]: 2024-12-13 01:48:57.375 [INFO][5220] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0" host="localhost" Dec 13 01:48:57.418555 containerd[1540]: 2024-12-13 01:48:57.376 [INFO][5220] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0 Dec 13 01:48:57.418555 containerd[1540]: 2024-12-13 01:48:57.378 [INFO][5220] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0" host="localhost" Dec 13 01:48:57.418555 containerd[1540]: 2024-12-13 01:48:57.383 [INFO][5220] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0" host="localhost" Dec 13 01:48:57.418555 containerd[1540]: 2024-12-13 01:48:57.383 [INFO][5220] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0" host="localhost" Dec 13 01:48:57.418555 containerd[1540]: 2024-12-13 01:48:57.383 [INFO][5220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:48:57.418555 containerd[1540]: 2024-12-13 01:48:57.383 [INFO][5220] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0" HandleID="k8s-pod-network.e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0" Workload="localhost-k8s-csi--node--driver--qmcx8-eth0" Dec 13 01:48:57.420202 containerd[1540]: 2024-12-13 01:48:57.386 [INFO][5204] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0" Namespace="calico-system" Pod="csi-node-driver-qmcx8" WorkloadEndpoint="localhost-k8s-csi--node--driver--qmcx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qmcx8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"505a3ea8-bd57-41cf-a662-11b3cdb671b9", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qmcx8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid06af89efa8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:57.420202 containerd[1540]: 2024-12-13 01:48:57.386 [INFO][5204] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0" Namespace="calico-system" Pod="csi-node-driver-qmcx8" WorkloadEndpoint="localhost-k8s-csi--node--driver--qmcx8-eth0" Dec 13 01:48:57.420202 containerd[1540]: 2024-12-13 01:48:57.386 [INFO][5204] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid06af89efa8 ContainerID="e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0" Namespace="calico-system" Pod="csi-node-driver-qmcx8" WorkloadEndpoint="localhost-k8s-csi--node--driver--qmcx8-eth0" Dec 13 01:48:57.420202 containerd[1540]: 2024-12-13 01:48:57.390 [INFO][5204] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0" Namespace="calico-system" Pod="csi-node-driver-qmcx8" WorkloadEndpoint="localhost-k8s-csi--node--driver--qmcx8-eth0" Dec 13 01:48:57.420202 containerd[1540]: 2024-12-13 01:48:57.390 [INFO][5204] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0" Namespace="calico-system" 
Pod="csi-node-driver-qmcx8" WorkloadEndpoint="localhost-k8s-csi--node--driver--qmcx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qmcx8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"505a3ea8-bd57-41cf-a662-11b3cdb671b9", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0", Pod:"csi-node-driver-qmcx8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid06af89efa8", MAC:"c2:06:8f:b2:5e:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:57.420202 containerd[1540]: 2024-12-13 01:48:57.403 [INFO][5204] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0" Namespace="calico-system" Pod="csi-node-driver-qmcx8" WorkloadEndpoint="localhost-k8s-csi--node--driver--qmcx8-eth0" Dec 13 01:48:57.421841 systemd[1]: Started 
cri-containerd-5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919.scope - libcontainer container 5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919. Dec 13 01:48:57.434126 kubelet[3075]: I1213 01:48:57.433390 3075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-jzxgp" podStartSLOduration=37.433357143 podStartE2EDuration="37.433357143s" podCreationTimestamp="2024-12-13 01:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:48:57.433072714 +0000 UTC m=+49.330961948" watchObservedRunningTime="2024-12-13 01:48:57.433357143 +0000 UTC m=+49.331246369" Dec 13 01:48:57.464867 containerd[1540]: time="2024-12-13T01:48:57.463731436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:57.465888 containerd[1540]: time="2024-12-13T01:48:57.465047045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:57.465888 containerd[1540]: time="2024-12-13T01:48:57.465061769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:57.467746 containerd[1540]: time="2024-12-13T01:48:57.466077644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:57.474617 systemd[1]: run-containerd-runc-k8s.io-02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c-runc.viEuY9.mount: Deactivated successfully. Dec 13 01:48:57.474688 systemd[1]: run-netns-cni\x2d5488b6f7\x2dc600\x2d873a\x2d6150\x2dddb293c0bdef.mount: Deactivated successfully. 
Dec 13 01:48:57.474726 systemd[1]: run-netns-cni\x2d56025b75\x2d9baa\x2d10b1\x2d3ff7\x2dee75bbbaf09e.mount: Deactivated successfully.
Dec 13 01:48:57.484595 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:48:57.509830 systemd[1]: Started cri-containerd-e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0.scope - libcontainer container e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0.
Dec 13 01:48:57.520974 containerd[1540]: time="2024-12-13T01:48:57.520947046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-886bb9bdf-hnf79,Uid:c771d094-7c93-4bc5-90e6-c1ad822c0b38,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919\""
Dec 13 01:48:57.536123 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:48:57.550451 containerd[1540]: time="2024-12-13T01:48:57.550424377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qmcx8,Uid:505a3ea8-bd57-41cf-a662-11b3cdb671b9,Namespace:calico-system,Attempt:1,} returns sandbox id \"e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0\""
Dec 13 01:48:57.556579 kubelet[3075]: I1213 01:48:57.556558 3075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54bc5f94b9-8mt2p" podStartSLOduration=26.52785851 podStartE2EDuration="28.556532554s" podCreationTimestamp="2024-12-13 01:48:29 +0000 UTC" firstStartedPulling="2024-12-13 01:48:54.727777707 +0000 UTC m=+46.625666932" lastFinishedPulling="2024-12-13 01:48:56.756451751 +0000 UTC m=+48.654340976" observedRunningTime="2024-12-13 01:48:57.486603605 +0000 UTC m=+49.384492834" watchObservedRunningTime="2024-12-13 01:48:57.556532554 +0000 UTC m=+49.454421787"
Dec 13 01:48:57.684650 sshd[5119]: Invalid user debian from 36.138.19.180 port 53106
Dec 13 01:48:57.702004 systemd-networkd[1244]: cali2e3d526ff82: Gained IPv6LL
Dec 13 01:48:57.886998 sshd[5119]: Connection closed by invalid user debian 36.138.19.180 port 53106 [preauth]
Dec 13 01:48:57.888748 systemd[1]: sshd@101-139.178.70.110:22-36.138.19.180:53106.service: Deactivated successfully.
Dec 13 01:48:57.894207 systemd-networkd[1244]: cali3d4a0bd41e1: Gained IPv6LL
Dec 13 01:48:58.090056 systemd[1]: Started sshd@102-139.178.70.110:22-36.138.19.180:53122.service - OpenSSH per-connection server daemon (36.138.19.180:53122).
Dec 13 01:48:58.534184 systemd-networkd[1244]: calic1c30c7ccd7: Gained IPv6LL
Dec 13 01:48:58.534678 systemd-networkd[1244]: calid06af89efa8: Gained IPv6LL
Dec 13 01:48:58.658414 containerd[1540]: time="2024-12-13T01:48:58.658155009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:48:58.658681 containerd[1540]: time="2024-12-13T01:48:58.658654391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404"
Dec 13 01:48:58.659417 containerd[1540]: time="2024-12-13T01:48:58.658890114Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:48:58.660027 containerd[1540]: time="2024-12-13T01:48:58.660011146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:48:58.660515 containerd[1540]: time="2024-12-13T01:48:58.660498681Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 1.90380065s"
Dec 13 01:48:58.660545 containerd[1540]: time="2024-12-13T01:48:58.660517137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Dec 13 01:48:58.661465 containerd[1540]: time="2024-12-13T01:48:58.661117968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Dec 13 01:48:58.662717 containerd[1540]: time="2024-12-13T01:48:58.662699274Z" level=info msg="CreateContainer within sandbox \"02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Dec 13 01:48:58.679713 containerd[1540]: time="2024-12-13T01:48:58.679650916Z" level=info msg="CreateContainer within sandbox \"02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c43c8a90a7bae0d1935e13055a7672322f9c9b71b7d67e401ac34eeb0d64ffc0\""
Dec 13 01:48:58.680096 containerd[1540]: time="2024-12-13T01:48:58.680049792Z" level=info msg="StartContainer for \"c43c8a90a7bae0d1935e13055a7672322f9c9b71b7d67e401ac34eeb0d64ffc0\""
Dec 13 01:48:58.715115 systemd[1]: Started cri-containerd-c43c8a90a7bae0d1935e13055a7672322f9c9b71b7d67e401ac34eeb0d64ffc0.scope - libcontainer container c43c8a90a7bae0d1935e13055a7672322f9c9b71b7d67e401ac34eeb0d64ffc0.
Dec 13 01:48:58.745125 containerd[1540]: time="2024-12-13T01:48:58.745100919Z" level=info msg="StartContainer for \"c43c8a90a7bae0d1935e13055a7672322f9c9b71b7d67e401ac34eeb0d64ffc0\" returns successfully"
Dec 13 01:48:58.969718 sshd[5367]: Invalid user debian from 36.138.19.180 port 53122
Dec 13 01:48:59.077171 containerd[1540]: time="2024-12-13T01:48:59.077140705Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:48:59.081747 containerd[1540]: time="2024-12-13T01:48:59.081718593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77"
Dec 13 01:48:59.083414 containerd[1540]: time="2024-12-13T01:48:59.082999256Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 421.866086ms"
Dec 13 01:48:59.083414 containerd[1540]: time="2024-12-13T01:48:59.083020582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Dec 13 01:48:59.084080 containerd[1540]: time="2024-12-13T01:48:59.083934684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Dec 13 01:48:59.091250 containerd[1540]: time="2024-12-13T01:48:59.091234286Z" level=info msg="CreateContainer within sandbox \"5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Dec 13 01:48:59.163294 containerd[1540]: time="2024-12-13T01:48:59.163139625Z" level=info msg="CreateContainer within sandbox \"5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"401d2724b7886a76ae53dd5c55f3c037899990064a4c4e679792b32937e4de33\""
Dec 13 01:48:59.169918 containerd[1540]: time="2024-12-13T01:48:59.163539214Z" level=info msg="StartContainer for \"401d2724b7886a76ae53dd5c55f3c037899990064a4c4e679792b32937e4de33\""
Dec 13 01:48:59.179962 sshd[5367]: Connection closed by invalid user debian 36.138.19.180 port 53122 [preauth]
Dec 13 01:48:59.184039 systemd[1]: Started cri-containerd-401d2724b7886a76ae53dd5c55f3c037899990064a4c4e679792b32937e4de33.scope - libcontainer container 401d2724b7886a76ae53dd5c55f3c037899990064a4c4e679792b32937e4de33.
Dec 13 01:48:59.184503 systemd[1]: sshd@102-139.178.70.110:22-36.138.19.180:53122.service: Deactivated successfully.
Dec 13 01:48:59.228777 containerd[1540]: time="2024-12-13T01:48:59.228684310Z" level=info msg="StartContainer for \"401d2724b7886a76ae53dd5c55f3c037899990064a4c4e679792b32937e4de33\" returns successfully"
Dec 13 01:48:59.373125 systemd[1]: Started sshd@103-139.178.70.110:22-36.138.19.180:53136.service - OpenSSH per-connection server daemon (36.138.19.180:53136).
Dec 13 01:48:59.521535 kubelet[3075]: I1213 01:48:59.521196 3075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-886bb9bdf-hnf79" podStartSLOduration=28.95987008 podStartE2EDuration="30.521168451s" podCreationTimestamp="2024-12-13 01:48:29 +0000 UTC" firstStartedPulling="2024-12-13 01:48:57.521944026 +0000 UTC m=+49.419833251" lastFinishedPulling="2024-12-13 01:48:59.083242397 +0000 UTC m=+50.981131622" observedRunningTime="2024-12-13 01:48:59.450059867 +0000 UTC m=+51.347949100" watchObservedRunningTime="2024-12-13 01:48:59.521168451 +0000 UTC m=+51.419057679"
Dec 13 01:48:59.521535 kubelet[3075]: I1213 01:48:59.521310 3075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-886bb9bdf-f88f5" podStartSLOduration=28.597219763 podStartE2EDuration="30.521297981s" podCreationTimestamp="2024-12-13 01:48:29 +0000 UTC" firstStartedPulling="2024-12-13 01:48:56.73662613 +0000 UTC m=+48.634515354" lastFinishedPulling="2024-12-13 01:48:58.660704347 +0000 UTC m=+50.558593572" observedRunningTime="2024-12-13 01:48:59.467636587 +0000 UTC m=+51.365525816" watchObservedRunningTime="2024-12-13 01:48:59.521297981 +0000 UTC m=+51.419187208"
Dec 13 01:49:00.092712 sshd[5459]: Invalid user debian from 36.138.19.180 port 53136
Dec 13 01:49:00.266619 sshd[5459]: Connection closed by invalid user debian 36.138.19.180 port 53136 [preauth]
Dec 13 01:49:00.267570 systemd[1]: sshd@103-139.178.70.110:22-36.138.19.180:53136.service: Deactivated successfully.
Dec 13 01:49:00.368067 containerd[1540]: time="2024-12-13T01:49:00.367997018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:49:00.369048 containerd[1540]: time="2024-12-13T01:49:00.369008622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Dec 13 01:49:00.371859 containerd[1540]: time="2024-12-13T01:49:00.371829289Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:49:00.372662 containerd[1540]: time="2024-12-13T01:49:00.372270586Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.288316517s"
Dec 13 01:49:00.372662 containerd[1540]: time="2024-12-13T01:49:00.372287660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Dec 13 01:49:00.372662 containerd[1540]: time="2024-12-13T01:49:00.372593518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:49:00.374181 containerd[1540]: time="2024-12-13T01:49:00.374164491Z" level=info msg="CreateContainer within sandbox \"e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Dec 13 01:49:00.393495 containerd[1540]: time="2024-12-13T01:49:00.393427041Z" level=info msg="CreateContainer within sandbox \"e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"90d3b0895e5331287a42057cbe66ed241c477f44691100922381bb14d8d44e4c\""
Dec 13 01:49:00.394251 containerd[1540]: time="2024-12-13T01:49:00.393681770Z" level=info msg="StartContainer for \"90d3b0895e5331287a42057cbe66ed241c477f44691100922381bb14d8d44e4c\""
Dec 13 01:49:00.422097 systemd[1]: Started cri-containerd-90d3b0895e5331287a42057cbe66ed241c477f44691100922381bb14d8d44e4c.scope - libcontainer container 90d3b0895e5331287a42057cbe66ed241c477f44691100922381bb14d8d44e4c.
Dec 13 01:49:00.441562 containerd[1540]: time="2024-12-13T01:49:00.441435875Z" level=info msg="StartContainer for \"90d3b0895e5331287a42057cbe66ed241c477f44691100922381bb14d8d44e4c\" returns successfully"
Dec 13 01:49:00.443196 containerd[1540]: time="2024-12-13T01:49:00.443185134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Dec 13 01:49:00.448472 kubelet[3075]: I1213 01:49:00.448186 3075 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:49:00.466035 systemd[1]: Started sshd@104-139.178.70.110:22-36.138.19.180:53138.service - OpenSSH per-connection server daemon (36.138.19.180:53138).
Dec 13 01:49:00.667605 systemd[1]: run-containerd-runc-k8s.io-90d3b0895e5331287a42057cbe66ed241c477f44691100922381bb14d8d44e4c-runc.qp7iUG.mount: Deactivated successfully.
Dec 13 01:49:01.307140 sshd[5511]: Invalid user debian from 36.138.19.180 port 53138
Dec 13 01:49:01.511468 sshd[5511]: Connection closed by invalid user debian 36.138.19.180 port 53138 [preauth]
Dec 13 01:49:01.519096 systemd[1]: sshd@104-139.178.70.110:22-36.138.19.180:53138.service: Deactivated successfully.
Dec 13 01:49:01.681424 systemd[1]: Started sshd@105-139.178.70.110:22-36.138.19.180:53148.service - OpenSSH per-connection server daemon (36.138.19.180:53148).
Dec 13 01:49:02.279782 kubelet[3075]: I1213 01:49:02.279754 3075 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:49:02.372692 sshd[5516]: Invalid user debian from 36.138.19.180 port 53148
Dec 13 01:49:02.553242 sshd[5516]: Connection closed by invalid user debian 36.138.19.180 port 53148 [preauth]
Dec 13 01:49:02.554643 systemd[1]: sshd@105-139.178.70.110:22-36.138.19.180:53148.service: Deactivated successfully.
Dec 13 01:49:02.742982 containerd[1540]: time="2024-12-13T01:49:02.742937362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:49:02.743941 containerd[1540]: time="2024-12-13T01:49:02.743862712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Dec 13 01:49:02.744887 containerd[1540]: time="2024-12-13T01:49:02.744245276Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:49:02.749044 containerd[1540]: time="2024-12-13T01:49:02.749029721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:49:02.750347 containerd[1540]: time="2024-12-13T01:49:02.749491345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.306188343s"
Dec 13 01:49:02.750427 containerd[1540]: time="2024-12-13T01:49:02.750415452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Dec 13 01:49:02.751184 systemd[1]: Started sshd@106-139.178.70.110:22-36.138.19.180:53156.service - OpenSSH per-connection server daemon (36.138.19.180:53156).
Dec 13 01:49:02.754196 containerd[1540]: time="2024-12-13T01:49:02.754177591Z" level=info msg="CreateContainer within sandbox \"e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Dec 13 01:49:02.761119 containerd[1540]: time="2024-12-13T01:49:02.760862780Z" level=info msg="CreateContainer within sandbox \"e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ec9449f6268d94043e40815d6b27f061f29f0d08af59e3b91af80d7883dcf56e\""
Dec 13 01:49:02.764404 containerd[1540]: time="2024-12-13T01:49:02.764187392Z" level=info msg="StartContainer for \"ec9449f6268d94043e40815d6b27f061f29f0d08af59e3b91af80d7883dcf56e\""
Dec 13 01:49:02.792015 systemd[1]: Started cri-containerd-ec9449f6268d94043e40815d6b27f061f29f0d08af59e3b91af80d7883dcf56e.scope - libcontainer container ec9449f6268d94043e40815d6b27f061f29f0d08af59e3b91af80d7883dcf56e.
Dec 13 01:49:02.810115 containerd[1540]: time="2024-12-13T01:49:02.810041610Z" level=info msg="StartContainer for \"ec9449f6268d94043e40815d6b27f061f29f0d08af59e3b91af80d7883dcf56e\" returns successfully"
Dec 13 01:49:03.581525 kubelet[3075]: I1213 01:49:03.581452 3075 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Dec 13 01:49:03.588177 kubelet[3075]: I1213 01:49:03.588117 3075 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Dec 13 01:49:03.588947 sshd[5531]: Invalid user debian from 36.138.19.180 port 53156
Dec 13 01:49:03.788828 sshd[5531]: Connection closed by invalid user debian 36.138.19.180 port 53156 [preauth]
Dec 13 01:49:03.791368 systemd[1]: sshd@106-139.178.70.110:22-36.138.19.180:53156.service: Deactivated successfully.
Dec 13 01:49:03.990464 systemd[1]: Started sshd@107-139.178.70.110:22-36.138.19.180:43808.service - OpenSSH per-connection server daemon (36.138.19.180:43808).
Dec 13 01:49:04.802353 sshd[5573]: Invalid user debian from 36.138.19.180 port 43808
Dec 13 01:49:04.999788 sshd[5573]: Connection closed by invalid user debian 36.138.19.180 port 43808 [preauth]
Dec 13 01:49:05.001369 systemd[1]: sshd@107-139.178.70.110:22-36.138.19.180:43808.service: Deactivated successfully.
Dec 13 01:49:05.210872 systemd[1]: Started sshd@108-139.178.70.110:22-36.138.19.180:43814.service - OpenSSH per-connection server daemon (36.138.19.180:43814).
Dec 13 01:49:06.030310 sshd[5578]: Invalid user debian from 36.138.19.180 port 43814
Dec 13 01:49:06.230888 sshd[5578]: Connection closed by invalid user debian 36.138.19.180 port 43814 [preauth]
Dec 13 01:49:06.231769 systemd[1]: sshd@108-139.178.70.110:22-36.138.19.180:43814.service: Deactivated successfully.
Dec 13 01:49:06.447806 systemd[1]: Started sshd@109-139.178.70.110:22-36.138.19.180:43828.service - OpenSSH per-connection server daemon (36.138.19.180:43828).
Dec 13 01:49:07.291111 sshd[5583]: Invalid user debian from 36.138.19.180 port 43828
Dec 13 01:49:07.496475 sshd[5583]: Connection closed by invalid user debian 36.138.19.180 port 43828 [preauth]
Dec 13 01:49:07.498055 systemd[1]: sshd@109-139.178.70.110:22-36.138.19.180:43828.service: Deactivated successfully.
Dec 13 01:49:07.712682 systemd[1]: Started sshd@110-139.178.70.110:22-36.138.19.180:43840.service - OpenSSH per-connection server daemon (36.138.19.180:43840).
Dec 13 01:49:08.241762 containerd[1540]: time="2024-12-13T01:49:08.241689292Z" level=info msg="StopPodSandbox for \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\""
Dec 13 01:49:08.426304 containerd[1540]: 2024-12-13 01:49:08.398 [WARNING][5606] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qmcx8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"505a3ea8-bd57-41cf-a662-11b3cdb671b9", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0", Pod:"csi-node-driver-qmcx8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid06af89efa8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:49:08.426304 containerd[1540]: 2024-12-13 01:49:08.399 [INFO][5606] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f"
Dec 13 01:49:08.426304 containerd[1540]: 2024-12-13 01:49:08.399 [INFO][5606] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" iface="eth0" netns=""
Dec 13 01:49:08.426304 containerd[1540]: 2024-12-13 01:49:08.399 [INFO][5606] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f"
Dec 13 01:49:08.426304 containerd[1540]: 2024-12-13 01:49:08.399 [INFO][5606] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f"
Dec 13 01:49:08.426304 containerd[1540]: 2024-12-13 01:49:08.419 [INFO][5612] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" HandleID="k8s-pod-network.7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" Workload="localhost-k8s-csi--node--driver--qmcx8-eth0"
Dec 13 01:49:08.426304 containerd[1540]: 2024-12-13 01:49:08.419 [INFO][5612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:49:08.426304 containerd[1540]: 2024-12-13 01:49:08.419 [INFO][5612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:49:08.426304 containerd[1540]: 2024-12-13 01:49:08.423 [WARNING][5612] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" HandleID="k8s-pod-network.7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" Workload="localhost-k8s-csi--node--driver--qmcx8-eth0"
Dec 13 01:49:08.426304 containerd[1540]: 2024-12-13 01:49:08.423 [INFO][5612] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" HandleID="k8s-pod-network.7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" Workload="localhost-k8s-csi--node--driver--qmcx8-eth0"
Dec 13 01:49:08.426304 containerd[1540]: 2024-12-13 01:49:08.423 [INFO][5612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:49:08.426304 containerd[1540]: 2024-12-13 01:49:08.425 [INFO][5606] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f"
Dec 13 01:49:08.427779 containerd[1540]: time="2024-12-13T01:49:08.426344035Z" level=info msg="TearDown network for sandbox \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\" successfully"
Dec 13 01:49:08.427779 containerd[1540]: time="2024-12-13T01:49:08.426361036Z" level=info msg="StopPodSandbox for \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\" returns successfully"
Dec 13 01:49:08.456425 containerd[1540]: time="2024-12-13T01:49:08.456403463Z" level=info msg="RemovePodSandbox for \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\""
Dec 13 01:49:08.456461 containerd[1540]: time="2024-12-13T01:49:08.456427314Z" level=info msg="Forcibly stopping sandbox \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\""
Dec 13 01:49:08.505600 containerd[1540]: 2024-12-13 01:49:08.487 [WARNING][5630] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qmcx8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"505a3ea8-bd57-41cf-a662-11b3cdb671b9", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e9a228d8b3f710c9c7dfc8223104db3aeadeec57cd8ccd33804c8e5a5cae33d0", Pod:"csi-node-driver-qmcx8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid06af89efa8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:49:08.505600 containerd[1540]: 2024-12-13 01:49:08.487 [INFO][5630] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f"
Dec 13 01:49:08.505600 containerd[1540]: 2024-12-13 01:49:08.487 [INFO][5630] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" iface="eth0" netns=""
Dec 13 01:49:08.505600 containerd[1540]: 2024-12-13 01:49:08.487 [INFO][5630] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f"
Dec 13 01:49:08.505600 containerd[1540]: 2024-12-13 01:49:08.487 [INFO][5630] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f"
Dec 13 01:49:08.505600 containerd[1540]: 2024-12-13 01:49:08.499 [INFO][5636] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" HandleID="k8s-pod-network.7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" Workload="localhost-k8s-csi--node--driver--qmcx8-eth0"
Dec 13 01:49:08.505600 containerd[1540]: 2024-12-13 01:49:08.499 [INFO][5636] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:49:08.505600 containerd[1540]: 2024-12-13 01:49:08.499 [INFO][5636] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:49:08.505600 containerd[1540]: 2024-12-13 01:49:08.502 [WARNING][5636] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" HandleID="k8s-pod-network.7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" Workload="localhost-k8s-csi--node--driver--qmcx8-eth0"
Dec 13 01:49:08.505600 containerd[1540]: 2024-12-13 01:49:08.502 [INFO][5636] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" HandleID="k8s-pod-network.7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f" Workload="localhost-k8s-csi--node--driver--qmcx8-eth0"
Dec 13 01:49:08.505600 containerd[1540]: 2024-12-13 01:49:08.503 [INFO][5636] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:49:08.505600 containerd[1540]: 2024-12-13 01:49:08.504 [INFO][5630] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f"
Dec 13 01:49:08.506187 containerd[1540]: time="2024-12-13T01:49:08.505583371Z" level=info msg="TearDown network for sandbox \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\" successfully"
Dec 13 01:49:08.527966 containerd[1540]: time="2024-12-13T01:49:08.527918852Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:49:08.533785 containerd[1540]: time="2024-12-13T01:49:08.533767845Z" level=info msg="RemovePodSandbox \"7110117024424e7bef3c531d39dfed3d62ecba120de198826be886fdc871894f\" returns successfully"
Dec 13 01:49:08.534325 containerd[1540]: time="2024-12-13T01:49:08.534180996Z" level=info msg="StopPodSandbox for \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\""
Dec 13 01:49:08.562258 sshd[5590]: Invalid user debian from 36.138.19.180 port 43840
Dec 13 01:49:08.578986 containerd[1540]: 2024-12-13 01:49:08.555 [WARNING][5654] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--jzxgp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b7419af9-8db7-4200-828e-4294ae89fbd9", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614", Pod:"coredns-76f75df574-jzxgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2e3d526ff82", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:49:08.578986 containerd[1540]: 2024-12-13 01:49:08.555 [INFO][5654] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2"
Dec 13 01:49:08.578986 containerd[1540]: 2024-12-13 01:49:08.555 [INFO][5654] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" iface="eth0" netns=""
Dec 13 01:49:08.578986 containerd[1540]: 2024-12-13 01:49:08.555 [INFO][5654] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2"
Dec 13 01:49:08.578986 containerd[1540]: 2024-12-13 01:49:08.555 [INFO][5654] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2"
Dec 13 01:49:08.578986 containerd[1540]: 2024-12-13 01:49:08.570 [INFO][5660] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" HandleID="k8s-pod-network.13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" Workload="localhost-k8s-coredns--76f75df574--jzxgp-eth0"
Dec 13 01:49:08.578986 containerd[1540]: 2024-12-13 01:49:08.570 [INFO][5660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:49:08.578986 containerd[1540]: 2024-12-13 01:49:08.571 [INFO][5660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:49:08.578986 containerd[1540]: 2024-12-13 01:49:08.574 [WARNING][5660] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" HandleID="k8s-pod-network.13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" Workload="localhost-k8s-coredns--76f75df574--jzxgp-eth0"
Dec 13 01:49:08.578986 containerd[1540]: 2024-12-13 01:49:08.575 [INFO][5660] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" HandleID="k8s-pod-network.13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" Workload="localhost-k8s-coredns--76f75df574--jzxgp-eth0"
Dec 13 01:49:08.578986 containerd[1540]: 2024-12-13 01:49:08.575 [INFO][5660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:49:08.578986 containerd[1540]: 2024-12-13 01:49:08.576 [INFO][5654] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2"
Dec 13 01:49:08.580567 containerd[1540]: time="2024-12-13T01:49:08.579068504Z" level=info msg="TearDown network for sandbox \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\" successfully"
Dec 13 01:49:08.580567 containerd[1540]: time="2024-12-13T01:49:08.579083114Z" level=info msg="StopPodSandbox for \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\" returns successfully"
Dec 13 01:49:08.580567 containerd[1540]: time="2024-12-13T01:49:08.580268991Z" level=info msg="RemovePodSandbox for \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\""
Dec 13 01:49:08.580567 containerd[1540]: time="2024-12-13T01:49:08.580289249Z" level=info msg="Forcibly stopping sandbox \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\""
Dec 13 01:49:08.623392 containerd[1540]: 2024-12-13 01:49:08.604 [WARNING][5678] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--jzxgp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b7419af9-8db7-4200-828e-4294ae89fbd9", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d728ef4ebfe3768295c0dbef4bb46a5cc958a5f938070dd054acc9f5d7b38614", Pod:"coredns-76f75df574-jzxgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2e3d526ff82", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:49:08.623392 containerd[1540]: 2024-12-13 01:49:08.605 [INFO][5678] cni-plugin/k8s.go 608: Cleaning up netns
ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" Dec 13 01:49:08.623392 containerd[1540]: 2024-12-13 01:49:08.605 [INFO][5678] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" iface="eth0" netns="" Dec 13 01:49:08.623392 containerd[1540]: 2024-12-13 01:49:08.605 [INFO][5678] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" Dec 13 01:49:08.623392 containerd[1540]: 2024-12-13 01:49:08.605 [INFO][5678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" Dec 13 01:49:08.623392 containerd[1540]: 2024-12-13 01:49:08.617 [INFO][5684] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" HandleID="k8s-pod-network.13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" Workload="localhost-k8s-coredns--76f75df574--jzxgp-eth0" Dec 13 01:49:08.623392 containerd[1540]: 2024-12-13 01:49:08.617 [INFO][5684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:08.623392 containerd[1540]: 2024-12-13 01:49:08.617 [INFO][5684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:08.623392 containerd[1540]: 2024-12-13 01:49:08.620 [WARNING][5684] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" HandleID="k8s-pod-network.13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" Workload="localhost-k8s-coredns--76f75df574--jzxgp-eth0" Dec 13 01:49:08.623392 containerd[1540]: 2024-12-13 01:49:08.620 [INFO][5684] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" HandleID="k8s-pod-network.13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" Workload="localhost-k8s-coredns--76f75df574--jzxgp-eth0" Dec 13 01:49:08.623392 containerd[1540]: 2024-12-13 01:49:08.621 [INFO][5684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:08.623392 containerd[1540]: 2024-12-13 01:49:08.622 [INFO][5678] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2" Dec 13 01:49:08.623756 containerd[1540]: time="2024-12-13T01:49:08.623414288Z" level=info msg="TearDown network for sandbox \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\" successfully" Dec 13 01:49:08.624768 containerd[1540]: time="2024-12-13T01:49:08.624751357Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 01:49:08.624811 containerd[1540]: time="2024-12-13T01:49:08.624797473Z" level=info msg="RemovePodSandbox \"13e8669e9887f6e28d39083c5ec66be3236ff6aed10d356e32c3d6e0dd77b7b2\" returns successfully" Dec 13 01:49:08.627639 containerd[1540]: time="2024-12-13T01:49:08.627623285Z" level=info msg="StopPodSandbox for \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\"" Dec 13 01:49:08.674213 containerd[1540]: 2024-12-13 01:49:08.650 [WARNING][5702] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--4mlcx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4d8029e0-2c95-494c-bb51-f3a7debfa6c1", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb", Pod:"coredns-76f75df574-4mlcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8971e558873", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:08.674213 containerd[1540]: 2024-12-13 01:49:08.650 [INFO][5702] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Dec 13 01:49:08.674213 containerd[1540]: 2024-12-13 01:49:08.650 [INFO][5702] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" iface="eth0" netns="" Dec 13 01:49:08.674213 containerd[1540]: 2024-12-13 01:49:08.650 [INFO][5702] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Dec 13 01:49:08.674213 containerd[1540]: 2024-12-13 01:49:08.650 [INFO][5702] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Dec 13 01:49:08.674213 containerd[1540]: 2024-12-13 01:49:08.662 [INFO][5708] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" HandleID="k8s-pod-network.444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Workload="localhost-k8s-coredns--76f75df574--4mlcx-eth0" Dec 13 01:49:08.674213 containerd[1540]: 2024-12-13 01:49:08.662 [INFO][5708] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:49:08.674213 containerd[1540]: 2024-12-13 01:49:08.662 [INFO][5708] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:08.674213 containerd[1540]: 2024-12-13 01:49:08.667 [WARNING][5708] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" HandleID="k8s-pod-network.444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Workload="localhost-k8s-coredns--76f75df574--4mlcx-eth0" Dec 13 01:49:08.674213 containerd[1540]: 2024-12-13 01:49:08.667 [INFO][5708] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" HandleID="k8s-pod-network.444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Workload="localhost-k8s-coredns--76f75df574--4mlcx-eth0" Dec 13 01:49:08.674213 containerd[1540]: 2024-12-13 01:49:08.670 [INFO][5708] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:08.674213 containerd[1540]: 2024-12-13 01:49:08.671 [INFO][5702] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Dec 13 01:49:08.674213 containerd[1540]: time="2024-12-13T01:49:08.674044548Z" level=info msg="TearDown network for sandbox \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\" successfully" Dec 13 01:49:08.674213 containerd[1540]: time="2024-12-13T01:49:08.674059357Z" level=info msg="StopPodSandbox for \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\" returns successfully" Dec 13 01:49:08.674653 containerd[1540]: time="2024-12-13T01:49:08.674380728Z" level=info msg="RemovePodSandbox for \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\"" Dec 13 01:49:08.674653 containerd[1540]: time="2024-12-13T01:49:08.674396709Z" level=info msg="Forcibly stopping sandbox \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\"" Dec 13 01:49:08.736812 containerd[1540]: 2024-12-13 01:49:08.716 [WARNING][5726] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--4mlcx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4d8029e0-2c95-494c-bb51-f3a7debfa6c1", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be2e0a7bb9226e8e5a80a5fa6f630de68e9800c9884349451993c897196688fb", Pod:"coredns-76f75df574-4mlcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8971e558873", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:08.736812 containerd[1540]: 2024-12-13 01:49:08.716 [INFO][5726] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Dec 13 01:49:08.736812 containerd[1540]: 2024-12-13 01:49:08.716 [INFO][5726] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" iface="eth0" netns="" Dec 13 01:49:08.736812 containerd[1540]: 2024-12-13 01:49:08.716 [INFO][5726] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Dec 13 01:49:08.736812 containerd[1540]: 2024-12-13 01:49:08.716 [INFO][5726] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Dec 13 01:49:08.736812 containerd[1540]: 2024-12-13 01:49:08.730 [INFO][5732] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" HandleID="k8s-pod-network.444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Workload="localhost-k8s-coredns--76f75df574--4mlcx-eth0" Dec 13 01:49:08.736812 containerd[1540]: 2024-12-13 01:49:08.730 [INFO][5732] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:08.736812 containerd[1540]: 2024-12-13 01:49:08.731 [INFO][5732] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:08.736812 containerd[1540]: 2024-12-13 01:49:08.734 [WARNING][5732] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" HandleID="k8s-pod-network.444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Workload="localhost-k8s-coredns--76f75df574--4mlcx-eth0" Dec 13 01:49:08.736812 containerd[1540]: 2024-12-13 01:49:08.734 [INFO][5732] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" HandleID="k8s-pod-network.444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Workload="localhost-k8s-coredns--76f75df574--4mlcx-eth0" Dec 13 01:49:08.736812 containerd[1540]: 2024-12-13 01:49:08.735 [INFO][5732] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:08.736812 containerd[1540]: 2024-12-13 01:49:08.735 [INFO][5726] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a" Dec 13 01:49:08.737607 containerd[1540]: time="2024-12-13T01:49:08.736831211Z" level=info msg="TearDown network for sandbox \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\" successfully" Dec 13 01:49:08.738124 containerd[1540]: time="2024-12-13T01:49:08.738108804Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 01:49:08.738151 containerd[1540]: time="2024-12-13T01:49:08.738139327Z" level=info msg="RemovePodSandbox \"444b8ab96f79c8dfcaf85f8befd9bde4bf9139820a69d24e4f05e9f1d93aed2a\" returns successfully" Dec 13 01:49:08.738423 containerd[1540]: time="2024-12-13T01:49:08.738412607Z" level=info msg="StopPodSandbox for \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\"" Dec 13 01:49:08.773388 sshd[5590]: Connection closed by invalid user debian 36.138.19.180 port 43840 [preauth] Dec 13 01:49:08.775130 systemd[1]: sshd@110-139.178.70.110:22-36.138.19.180:43840.service: Deactivated successfully. Dec 13 01:49:08.783298 containerd[1540]: 2024-12-13 01:49:08.761 [WARNING][5751] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0", GenerateName:"calico-kube-controllers-54bc5f94b9-", Namespace:"calico-system", SelfLink:"", UID:"72a2a28b-0b80-4d0a-89fb-10506cac7c8e", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54bc5f94b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d", Pod:"calico-kube-controllers-54bc5f94b9-8mt2p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic15a849e04a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:08.783298 containerd[1540]: 2024-12-13 01:49:08.761 [INFO][5751] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Dec 13 01:49:08.783298 containerd[1540]: 2024-12-13 01:49:08.761 [INFO][5751] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" iface="eth0" netns="" Dec 13 01:49:08.783298 containerd[1540]: 2024-12-13 01:49:08.761 [INFO][5751] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Dec 13 01:49:08.783298 containerd[1540]: 2024-12-13 01:49:08.761 [INFO][5751] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Dec 13 01:49:08.783298 containerd[1540]: 2024-12-13 01:49:08.776 [INFO][5757] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" HandleID="k8s-pod-network.dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Workload="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0" Dec 13 01:49:08.783298 containerd[1540]: 2024-12-13 01:49:08.777 [INFO][5757] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:49:08.783298 containerd[1540]: 2024-12-13 01:49:08.777 [INFO][5757] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:08.783298 containerd[1540]: 2024-12-13 01:49:08.780 [WARNING][5757] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" HandleID="k8s-pod-network.dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Workload="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0" Dec 13 01:49:08.783298 containerd[1540]: 2024-12-13 01:49:08.780 [INFO][5757] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" HandleID="k8s-pod-network.dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Workload="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0" Dec 13 01:49:08.783298 containerd[1540]: 2024-12-13 01:49:08.781 [INFO][5757] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:08.783298 containerd[1540]: 2024-12-13 01:49:08.782 [INFO][5751] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Dec 13 01:49:08.783873 containerd[1540]: time="2024-12-13T01:49:08.783305005Z" level=info msg="TearDown network for sandbox \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\" successfully" Dec 13 01:49:08.783873 containerd[1540]: time="2024-12-13T01:49:08.783319272Z" level=info msg="StopPodSandbox for \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\" returns successfully" Dec 13 01:49:08.783873 containerd[1540]: time="2024-12-13T01:49:08.783608430Z" level=info msg="RemovePodSandbox for \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\"" Dec 13 01:49:08.783873 containerd[1540]: time="2024-12-13T01:49:08.783624149Z" level=info msg="Forcibly stopping sandbox \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\"" Dec 13 01:49:08.822628 containerd[1540]: 2024-12-13 01:49:08.804 [WARNING][5777] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0", GenerateName:"calico-kube-controllers-54bc5f94b9-", Namespace:"calico-system", SelfLink:"", UID:"72a2a28b-0b80-4d0a-89fb-10506cac7c8e", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54bc5f94b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"26b2a78c390a3f37be7ddbcf0a27f72447742d95b1ebfcd0d96b118bd3abae4d", Pod:"calico-kube-controllers-54bc5f94b9-8mt2p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic15a849e04a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:08.822628 containerd[1540]: 2024-12-13 01:49:08.804 [INFO][5777] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Dec 13 01:49:08.822628 containerd[1540]: 2024-12-13 01:49:08.804 [INFO][5777] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" iface="eth0" netns="" Dec 13 01:49:08.822628 containerd[1540]: 2024-12-13 01:49:08.804 [INFO][5777] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Dec 13 01:49:08.822628 containerd[1540]: 2024-12-13 01:49:08.804 [INFO][5777] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Dec 13 01:49:08.822628 containerd[1540]: 2024-12-13 01:49:08.816 [INFO][5783] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" HandleID="k8s-pod-network.dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Workload="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0" Dec 13 01:49:08.822628 containerd[1540]: 2024-12-13 01:49:08.816 [INFO][5783] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:08.822628 containerd[1540]: 2024-12-13 01:49:08.816 [INFO][5783] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:08.822628 containerd[1540]: 2024-12-13 01:49:08.820 [WARNING][5783] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" HandleID="k8s-pod-network.dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Workload="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0" Dec 13 01:49:08.822628 containerd[1540]: 2024-12-13 01:49:08.820 [INFO][5783] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" HandleID="k8s-pod-network.dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Workload="localhost-k8s-calico--kube--controllers--54bc5f94b9--8mt2p-eth0" Dec 13 01:49:08.822628 containerd[1540]: 2024-12-13 01:49:08.820 [INFO][5783] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:08.822628 containerd[1540]: 2024-12-13 01:49:08.821 [INFO][5777] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a" Dec 13 01:49:08.823186 containerd[1540]: time="2024-12-13T01:49:08.822650511Z" level=info msg="TearDown network for sandbox \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\" successfully" Dec 13 01:49:08.823847 containerd[1540]: time="2024-12-13T01:49:08.823831014Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 01:49:08.823940 containerd[1540]: time="2024-12-13T01:49:08.823865417Z" level=info msg="RemovePodSandbox \"dd5f11380bfcc2b12f466902a0abfec17dad45304d3315e540496d278292e29a\" returns successfully" Dec 13 01:49:08.824321 containerd[1540]: time="2024-12-13T01:49:08.824177657Z" level=info msg="StopPodSandbox for \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\"" Dec 13 01:49:08.865073 containerd[1540]: 2024-12-13 01:49:08.845 [WARNING][5801] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0", GenerateName:"calico-apiserver-886bb9bdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"285c9a76-f344-4cf0-af98-33c38dd5f27a", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"886bb9bdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c", Pod:"calico-apiserver-886bb9bdf-f88f5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3d4a0bd41e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:08.865073 containerd[1540]: 2024-12-13 01:49:08.845 [INFO][5801] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Dec 13 01:49:08.865073 containerd[1540]: 2024-12-13 01:49:08.845 [INFO][5801] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" iface="eth0" netns="" Dec 13 01:49:08.865073 containerd[1540]: 2024-12-13 01:49:08.845 [INFO][5801] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Dec 13 01:49:08.865073 containerd[1540]: 2024-12-13 01:49:08.845 [INFO][5801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Dec 13 01:49:08.865073 containerd[1540]: 2024-12-13 01:49:08.858 [INFO][5807] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" HandleID="k8s-pod-network.5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Workload="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0" Dec 13 01:49:08.865073 containerd[1540]: 2024-12-13 01:49:08.858 [INFO][5807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:08.865073 containerd[1540]: 2024-12-13 01:49:08.858 [INFO][5807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:08.865073 containerd[1540]: 2024-12-13 01:49:08.862 [WARNING][5807] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" HandleID="k8s-pod-network.5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Workload="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0" Dec 13 01:49:08.865073 containerd[1540]: 2024-12-13 01:49:08.862 [INFO][5807] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" HandleID="k8s-pod-network.5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Workload="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0" Dec 13 01:49:08.865073 containerd[1540]: 2024-12-13 01:49:08.862 [INFO][5807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:08.865073 containerd[1540]: 2024-12-13 01:49:08.863 [INFO][5801] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Dec 13 01:49:08.866738 containerd[1540]: time="2024-12-13T01:49:08.865094932Z" level=info msg="TearDown network for sandbox \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\" successfully" Dec 13 01:49:08.866738 containerd[1540]: time="2024-12-13T01:49:08.865110996Z" level=info msg="StopPodSandbox for \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\" returns successfully" Dec 13 01:49:08.866738 containerd[1540]: time="2024-12-13T01:49:08.865445109Z" level=info msg="RemovePodSandbox for \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\"" Dec 13 01:49:08.866738 containerd[1540]: time="2024-12-13T01:49:08.865462863Z" level=info msg="Forcibly stopping sandbox \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\"" Dec 13 01:49:08.906954 containerd[1540]: 2024-12-13 01:49:08.885 [WARNING][5825] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0", GenerateName:"calico-apiserver-886bb9bdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"285c9a76-f344-4cf0-af98-33c38dd5f27a", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"886bb9bdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"02aaca0879c57a014cd6138eebfa22b0b5b5f1643a75dc38b593f914d57f737c", Pod:"calico-apiserver-886bb9bdf-f88f5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3d4a0bd41e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:08.906954 containerd[1540]: 2024-12-13 01:49:08.885 [INFO][5825] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Dec 13 01:49:08.906954 containerd[1540]: 2024-12-13 01:49:08.885 [INFO][5825] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" iface="eth0" netns="" Dec 13 01:49:08.906954 containerd[1540]: 2024-12-13 01:49:08.885 [INFO][5825] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Dec 13 01:49:08.906954 containerd[1540]: 2024-12-13 01:49:08.885 [INFO][5825] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Dec 13 01:49:08.906954 containerd[1540]: 2024-12-13 01:49:08.900 [INFO][5831] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" HandleID="k8s-pod-network.5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Workload="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0" Dec 13 01:49:08.906954 containerd[1540]: 2024-12-13 01:49:08.900 [INFO][5831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:08.906954 containerd[1540]: 2024-12-13 01:49:08.900 [INFO][5831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:08.906954 containerd[1540]: 2024-12-13 01:49:08.904 [WARNING][5831] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" HandleID="k8s-pod-network.5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Workload="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0" Dec 13 01:49:08.906954 containerd[1540]: 2024-12-13 01:49:08.904 [INFO][5831] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" HandleID="k8s-pod-network.5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Workload="localhost-k8s-calico--apiserver--886bb9bdf--f88f5-eth0" Dec 13 01:49:08.906954 containerd[1540]: 2024-12-13 01:49:08.905 [INFO][5831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:08.906954 containerd[1540]: 2024-12-13 01:49:08.906 [INFO][5825] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147" Dec 13 01:49:08.907410 containerd[1540]: time="2024-12-13T01:49:08.906976042Z" level=info msg="TearDown network for sandbox \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\" successfully" Dec 13 01:49:08.908696 containerd[1540]: time="2024-12-13T01:49:08.908671755Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 01:49:08.908735 containerd[1540]: time="2024-12-13T01:49:08.908703677Z" level=info msg="RemovePodSandbox \"5ebdccf280809eaafeceb0843154e150f3da0c4881968a0826ba45de95764147\" returns successfully" Dec 13 01:49:08.909233 containerd[1540]: time="2024-12-13T01:49:08.909140351Z" level=info msg="StopPodSandbox for \"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\"" Dec 13 01:49:08.953697 systemd[1]: Started sshd@111-139.178.70.110:22-36.138.19.180:43844.service - OpenSSH per-connection server daemon (36.138.19.180:43844). Dec 13 01:49:08.957010 containerd[1540]: 2024-12-13 01:49:08.933 [WARNING][5849] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0", GenerateName:"calico-apiserver-886bb9bdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"c771d094-7c93-4bc5-90e6-c1ad822c0b38", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"886bb9bdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919", Pod:"calico-apiserver-886bb9bdf-hnf79", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1c30c7ccd7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:08.957010 containerd[1540]: 2024-12-13 01:49:08.933 [INFO][5849] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Dec 13 01:49:08.957010 containerd[1540]: 2024-12-13 01:49:08.934 [INFO][5849] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" iface="eth0" netns="" Dec 13 01:49:08.957010 containerd[1540]: 2024-12-13 01:49:08.934 [INFO][5849] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Dec 13 01:49:08.957010 containerd[1540]: 2024-12-13 01:49:08.934 [INFO][5849] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Dec 13 01:49:08.957010 containerd[1540]: 2024-12-13 01:49:08.948 [INFO][5855] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" HandleID="k8s-pod-network.946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Workload="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0" Dec 13 01:49:08.957010 containerd[1540]: 2024-12-13 01:49:08.948 [INFO][5855] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:08.957010 containerd[1540]: 2024-12-13 01:49:08.948 [INFO][5855] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:49:08.957010 containerd[1540]: 2024-12-13 01:49:08.953 [WARNING][5855] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" HandleID="k8s-pod-network.946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Workload="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0" Dec 13 01:49:08.957010 containerd[1540]: 2024-12-13 01:49:08.953 [INFO][5855] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" HandleID="k8s-pod-network.946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Workload="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0" Dec 13 01:49:08.957010 containerd[1540]: 2024-12-13 01:49:08.954 [INFO][5855] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:08.957010 containerd[1540]: 2024-12-13 01:49:08.955 [INFO][5849] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Dec 13 01:49:08.959212 containerd[1540]: time="2024-12-13T01:49:08.957285583Z" level=info msg="TearDown network for sandbox \"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\" successfully" Dec 13 01:49:08.959212 containerd[1540]: time="2024-12-13T01:49:08.957301314Z" level=info msg="StopPodSandbox for \"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\" returns successfully" Dec 13 01:49:08.959212 containerd[1540]: time="2024-12-13T01:49:08.957654347Z" level=info msg="RemovePodSandbox for \"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\"" Dec 13 01:49:08.959212 containerd[1540]: time="2024-12-13T01:49:08.957667554Z" level=info msg="Forcibly stopping sandbox \"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\"" Dec 13 01:49:09.007948 containerd[1540]: 2024-12-13 01:49:08.983 [WARNING][5875] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0", GenerateName:"calico-apiserver-886bb9bdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"c771d094-7c93-4bc5-90e6-c1ad822c0b38", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"886bb9bdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5349706b39d618d5cd0c50c612ed27fecd5c8f7f8c0b9c415b48c1a26edfd919", Pod:"calico-apiserver-886bb9bdf-hnf79", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1c30c7ccd7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:09.007948 containerd[1540]: 2024-12-13 01:49:08.984 [INFO][5875] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Dec 13 01:49:09.007948 containerd[1540]: 2024-12-13 01:49:08.984 [INFO][5875] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" iface="eth0" netns="" Dec 13 01:49:09.007948 containerd[1540]: 2024-12-13 01:49:08.984 [INFO][5875] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Dec 13 01:49:09.007948 containerd[1540]: 2024-12-13 01:49:08.984 [INFO][5875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Dec 13 01:49:09.007948 containerd[1540]: 2024-12-13 01:49:08.998 [INFO][5882] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" HandleID="k8s-pod-network.946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Workload="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0" Dec 13 01:49:09.007948 containerd[1540]: 2024-12-13 01:49:08.998 [INFO][5882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:09.007948 containerd[1540]: 2024-12-13 01:49:08.998 [INFO][5882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:09.007948 containerd[1540]: 2024-12-13 01:49:09.003 [WARNING][5882] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" HandleID="k8s-pod-network.946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Workload="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0" Dec 13 01:49:09.007948 containerd[1540]: 2024-12-13 01:49:09.003 [INFO][5882] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" HandleID="k8s-pod-network.946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Workload="localhost-k8s-calico--apiserver--886bb9bdf--hnf79-eth0" Dec 13 01:49:09.007948 containerd[1540]: 2024-12-13 01:49:09.004 [INFO][5882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:09.007948 containerd[1540]: 2024-12-13 01:49:09.005 [INFO][5875] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e" Dec 13 01:49:09.009242 containerd[1540]: time="2024-12-13T01:49:09.008194515Z" level=info msg="TearDown network for sandbox \"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\" successfully" Dec 13 01:49:09.013582 containerd[1540]: time="2024-12-13T01:49:09.013568760Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 01:49:09.013663 containerd[1540]: time="2024-12-13T01:49:09.013653546Z" level=info msg="RemovePodSandbox \"946a58a5700a6d531d236667b1dab646d63c05eeb81cfa26488f477e89607a3e\" returns successfully" Dec 13 01:49:09.680516 sshd[5862]: Invalid user debian from 36.138.19.180 port 43844 Dec 13 01:49:09.853263 sshd[5862]: Connection closed by invalid user debian 36.138.19.180 port 43844 [preauth] Dec 13 01:49:09.854583 systemd[1]: sshd@111-139.178.70.110:22-36.138.19.180:43844.service: Deactivated successfully. Dec 13 01:49:10.059648 systemd[1]: Started sshd@112-139.178.70.110:22-36.138.19.180:43848.service - OpenSSH per-connection server daemon (36.138.19.180:43848). Dec 13 01:49:10.878556 sshd[5898]: Invalid user debian from 36.138.19.180 port 43848 Dec 13 01:49:11.079970 sshd[5898]: Connection closed by invalid user debian 36.138.19.180 port 43848 [preauth] Dec 13 01:49:11.081549 systemd[1]: sshd@112-139.178.70.110:22-36.138.19.180:43848.service: Deactivated successfully. Dec 13 01:49:11.288170 systemd[1]: Started sshd@113-139.178.70.110:22-36.138.19.180:43860.service - OpenSSH per-connection server daemon (36.138.19.180:43860). Dec 13 01:49:12.102190 sshd[5922]: Invalid user debian from 36.138.19.180 port 43860 Dec 13 01:49:12.302983 sshd[5922]: Connection closed by invalid user debian 36.138.19.180 port 43860 [preauth] Dec 13 01:49:12.304464 systemd[1]: sshd@113-139.178.70.110:22-36.138.19.180:43860.service: Deactivated successfully. Dec 13 01:49:12.489041 systemd[1]: Started sshd@114-139.178.70.110:22-36.138.19.180:43870.service - OpenSSH per-connection server daemon (36.138.19.180:43870). Dec 13 01:49:13.202194 sshd[5927]: Invalid user debian from 36.138.19.180 port 43870 Dec 13 01:49:13.375979 sshd[5927]: Connection closed by invalid user debian 36.138.19.180 port 43870 [preauth] Dec 13 01:49:13.378215 systemd[1]: sshd@114-139.178.70.110:22-36.138.19.180:43870.service: Deactivated successfully. 
Dec 13 01:49:13.588246 systemd[1]: Started sshd@115-139.178.70.110:22-36.138.19.180:43882.service - OpenSSH per-connection server daemon (36.138.19.180:43882). Dec 13 01:49:14.416311 sshd[5932]: Invalid user debian from 36.138.19.180 port 43882 Dec 13 01:49:14.619855 sshd[5932]: Connection closed by invalid user debian 36.138.19.180 port 43882 [preauth] Dec 13 01:49:14.620727 systemd[1]: sshd@115-139.178.70.110:22-36.138.19.180:43882.service: Deactivated successfully. Dec 13 01:49:14.685115 kubelet[3075]: I1213 01:49:14.684494 3075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-qmcx8" podStartSLOduration=40.485969026 podStartE2EDuration="45.68446187s" podCreationTimestamp="2024-12-13 01:48:29 +0000 UTC" firstStartedPulling="2024-12-13 01:48:57.552325453 +0000 UTC m=+49.450214678" lastFinishedPulling="2024-12-13 01:49:02.750818296 +0000 UTC m=+54.648707522" observedRunningTime="2024-12-13 01:49:03.459330618 +0000 UTC m=+55.357219851" watchObservedRunningTime="2024-12-13 01:49:14.68446187 +0000 UTC m=+66.582351099" Dec 13 01:49:14.835358 systemd[1]: Started sshd@116-139.178.70.110:22-36.138.19.180:34594.service - OpenSSH per-connection server daemon (36.138.19.180:34594). Dec 13 01:49:15.698197 sshd[5958]: Invalid user debian from 36.138.19.180 port 34594 Dec 13 01:49:15.900118 sshd[5958]: Connection closed by invalid user debian 36.138.19.180 port 34594 [preauth] Dec 13 01:49:15.901784 systemd[1]: sshd@116-139.178.70.110:22-36.138.19.180:34594.service: Deactivated successfully. Dec 13 01:49:16.090423 systemd[1]: Started sshd@117-139.178.70.110:22-36.138.19.180:34596.service - OpenSSH per-connection server daemon (36.138.19.180:34596). Dec 13 01:49:16.740177 systemd[1]: Started sshd@118-139.178.70.110:22-194.169.175.37:47304.service - OpenSSH per-connection server daemon (194.169.175.37:47304). 
Dec 13 01:49:16.912057 sshd[5964]: Invalid user debian from 36.138.19.180 port 34596 Dec 13 01:49:17.105319 sshd[5964]: Connection closed by invalid user debian 36.138.19.180 port 34596 [preauth] Dec 13 01:49:17.106378 systemd[1]: sshd@117-139.178.70.110:22-36.138.19.180:34596.service: Deactivated successfully. Dec 13 01:49:17.322741 systemd[1]: Started sshd@119-139.178.70.110:22-36.138.19.180:34606.service - OpenSSH per-connection server daemon (36.138.19.180:34606). Dec 13 01:49:18.096578 sshd[5967]: Connection closed by authenticating user root 194.169.175.37 port 47304 [preauth] Dec 13 01:49:18.097676 systemd[1]: sshd@118-139.178.70.110:22-194.169.175.37:47304.service: Deactivated successfully. Dec 13 01:49:18.151275 sshd[5972]: Invalid user debian from 36.138.19.180 port 34606 Dec 13 01:49:18.365021 sshd[5972]: Connection closed by invalid user debian 36.138.19.180 port 34606 [preauth] Dec 13 01:49:18.366557 systemd[1]: sshd@119-139.178.70.110:22-36.138.19.180:34606.service: Deactivated successfully. Dec 13 01:49:18.408254 update_engine[1522]: I20241213 01:49:18.408202 1522 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 01:49:18.408254 update_engine[1522]: I20241213 01:49:18.408252 1522 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 01:49:18.412901 update_engine[1522]: I20241213 01:49:18.412777 1522 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 01:49:18.413698 update_engine[1522]: I20241213 01:49:18.413508 1522 omaha_request_params.cc:62] Current group set to stable Dec 13 01:49:18.414955 update_engine[1522]: I20241213 01:49:18.414394 1522 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 01:49:18.414955 update_engine[1522]: I20241213 01:49:18.414407 1522 update_attempter.cc:643] Scheduling an action processor start. 
Dec 13 01:49:18.414955 update_engine[1522]: I20241213 01:49:18.414423 1522 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 01:49:18.414955 update_engine[1522]: I20241213 01:49:18.414456 1522 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 01:49:18.414955 update_engine[1522]: I20241213 01:49:18.414502 1522 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 01:49:18.414955 update_engine[1522]: I20241213 01:49:18.414511 1522 omaha_request_action.cc:272] Request: Dec 13 01:49:18.414955 update_engine[1522]: Dec 13 01:49:18.414955 update_engine[1522]: Dec 13 01:49:18.414955 update_engine[1522]: Dec 13 01:49:18.414955 update_engine[1522]: Dec 13 01:49:18.414955 update_engine[1522]: Dec 13 01:49:18.414955 update_engine[1522]: Dec 13 01:49:18.414955 update_engine[1522]: Dec 13 01:49:18.414955 update_engine[1522]: Dec 13 01:49:18.414955 update_engine[1522]: I20241213 01:49:18.414517 1522 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:49:18.426790 update_engine[1522]: I20241213 01:49:18.426719 1522 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:49:18.427871 update_engine[1522]: I20241213 01:49:18.427835 1522 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 01:49:18.428232 locksmithd[1554]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 01:49:18.434487 update_engine[1522]: E20241213 01:49:18.434404 1522 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:49:18.434487 update_engine[1522]: I20241213 01:49:18.434466 1522 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 01:49:18.565763 systemd[1]: Started sshd@120-139.178.70.110:22-36.138.19.180:34614.service - OpenSSH per-connection server daemon (36.138.19.180:34614). 
Dec 13 01:49:19.415153 sshd[5981]: Invalid user debian from 36.138.19.180 port 34614 Dec 13 01:49:19.619907 sshd[5981]: Connection closed by invalid user debian 36.138.19.180 port 34614 [preauth] Dec 13 01:49:19.621669 systemd[1]: sshd@120-139.178.70.110:22-36.138.19.180:34614.service: Deactivated successfully. Dec 13 01:49:19.789315 systemd[1]: Started sshd@121-139.178.70.110:22-36.138.19.180:34622.service - OpenSSH per-connection server daemon (36.138.19.180:34622). Dec 13 01:49:20.489903 sshd[5986]: Invalid user debian from 36.138.19.180 port 34622 Dec 13 01:49:20.656790 sshd[5986]: Connection closed by invalid user debian 36.138.19.180 port 34622 [preauth] Dec 13 01:49:20.657436 systemd[1]: sshd@121-139.178.70.110:22-36.138.19.180:34622.service: Deactivated successfully. Dec 13 01:49:20.837990 systemd[1]: Started sshd@122-139.178.70.110:22-36.138.19.180:34626.service - OpenSSH per-connection server daemon (36.138.19.180:34626). Dec 13 01:49:21.532260 sshd[5991]: Invalid user debian from 36.138.19.180 port 34626 Dec 13 01:49:21.702422 sshd[5991]: Connection closed by invalid user debian 36.138.19.180 port 34626 [preauth] Dec 13 01:49:21.703766 systemd[1]: sshd@122-139.178.70.110:22-36.138.19.180:34626.service: Deactivated successfully. Dec 13 01:49:21.914052 systemd[1]: Started sshd@123-139.178.70.110:22-36.138.19.180:34628.service - OpenSSH per-connection server daemon (36.138.19.180:34628). Dec 13 01:49:22.729720 sshd[5996]: Invalid user debian from 36.138.19.180 port 34628 Dec 13 01:49:22.930363 sshd[5996]: Connection closed by invalid user debian 36.138.19.180 port 34628 [preauth] Dec 13 01:49:22.931029 systemd[1]: sshd@123-139.178.70.110:22-36.138.19.180:34628.service: Deactivated successfully. Dec 13 01:49:23.138082 systemd[1]: Started sshd@124-139.178.70.110:22-36.138.19.180:34630.service - OpenSSH per-connection server daemon (36.138.19.180:34630). 
Dec 13 01:49:23.952253 sshd[6001]: Invalid user debian from 36.138.19.180 port 34630 Dec 13 01:49:24.157206 sshd[6001]: Connection closed by invalid user debian 36.138.19.180 port 34630 [preauth] Dec 13 01:49:24.158434 systemd[1]: sshd@124-139.178.70.110:22-36.138.19.180:34630.service: Deactivated successfully. Dec 13 01:49:24.354384 systemd[1]: Started sshd@125-139.178.70.110:22-36.138.19.180:43188.service - OpenSSH per-connection server daemon (36.138.19.180:43188). Dec 13 01:49:25.147230 sshd[6009]: Invalid user debian from 36.138.19.180 port 43188 Dec 13 01:49:25.342519 sshd[6009]: Connection closed by invalid user debian 36.138.19.180 port 43188 [preauth] Dec 13 01:49:25.343575 systemd[1]: sshd@125-139.178.70.110:22-36.138.19.180:43188.service: Deactivated successfully. Dec 13 01:49:25.554550 systemd[1]: Started sshd@126-139.178.70.110:22-36.138.19.180:43202.service - OpenSSH per-connection server daemon (36.138.19.180:43202). Dec 13 01:49:26.380174 sshd[6014]: Invalid user debian from 36.138.19.180 port 43202 Dec 13 01:49:26.581957 sshd[6014]: Connection closed by invalid user debian 36.138.19.180 port 43202 [preauth] Dec 13 01:49:26.582385 systemd[1]: sshd@126-139.178.70.110:22-36.138.19.180:43202.service: Deactivated successfully. Dec 13 01:49:26.797768 systemd[1]: Started sshd@127-139.178.70.110:22-36.138.19.180:43210.service - OpenSSH per-connection server daemon (36.138.19.180:43210). Dec 13 01:49:26.861889 systemd[1]: Started sshd@128-139.178.70.110:22-139.178.89.65:46282.service - OpenSSH per-connection server daemon (139.178.89.65:46282). Dec 13 01:49:26.918940 sshd[6025]: Accepted publickey for core from 139.178.89.65 port 46282 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:49:26.936632 sshd[6025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:49:26.950996 systemd-logind[1521]: New session 10 of user core. 
Dec 13 01:49:26.956074 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:49:27.693846 sshd[6022]: Invalid user debian from 36.138.19.180 port 43210 Dec 13 01:49:27.734183 sshd[6025]: pam_unix(sshd:session): session closed for user core Dec 13 01:49:27.737466 systemd[1]: sshd@128-139.178.70.110:22-139.178.89.65:46282.service: Deactivated successfully. Dec 13 01:49:27.738980 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:49:27.739491 systemd-logind[1521]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:49:27.740494 systemd-logind[1521]: Removed session 10. Dec 13 01:49:27.898393 sshd[6022]: Connection closed by invalid user debian 36.138.19.180 port 43210 [preauth] Dec 13 01:49:27.899449 systemd[1]: sshd@127-139.178.70.110:22-36.138.19.180:43210.service: Deactivated successfully. Dec 13 01:49:28.072531 systemd[1]: Started sshd@129-139.178.70.110:22-36.138.19.180:43214.service - OpenSSH per-connection server daemon (36.138.19.180:43214). Dec 13 01:49:28.309253 update_engine[1522]: I20241213 01:49:28.308961 1522 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:49:28.309253 update_engine[1522]: I20241213 01:49:28.309142 1522 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:49:28.309574 update_engine[1522]: I20241213 01:49:28.309336 1522 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 01:49:28.314228 update_engine[1522]: E20241213 01:49:28.314133 1522 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:49:28.314228 update_engine[1522]: I20241213 01:49:28.314203 1522 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 01:49:28.776022 sshd[6045]: Invalid user debian from 36.138.19.180 port 43214 Dec 13 01:49:28.944801 sshd[6045]: Connection closed by invalid user debian 36.138.19.180 port 43214 [preauth] Dec 13 01:49:28.946634 systemd[1]: sshd@129-139.178.70.110:22-36.138.19.180:43214.service: Deactivated successfully. Dec 13 01:49:29.163179 systemd[1]: Started sshd@130-139.178.70.110:22-36.138.19.180:43222.service - OpenSSH per-connection server daemon (36.138.19.180:43222). Dec 13 01:49:29.985423 sshd[6050]: Invalid user debian from 36.138.19.180 port 43222 Dec 13 01:49:30.187687 sshd[6050]: Connection closed by invalid user debian 36.138.19.180 port 43222 [preauth] Dec 13 01:49:30.189021 systemd[1]: sshd@130-139.178.70.110:22-36.138.19.180:43222.service: Deactivated successfully. Dec 13 01:49:30.391713 systemd[1]: Started sshd@131-139.178.70.110:22-36.138.19.180:43230.service - OpenSSH per-connection server daemon (36.138.19.180:43230). Dec 13 01:49:31.226179 sshd[6061]: Invalid user debian from 36.138.19.180 port 43230 Dec 13 01:49:31.427392 sshd[6061]: Connection closed by invalid user debian 36.138.19.180 port 43230 [preauth] Dec 13 01:49:31.428749 systemd[1]: sshd@131-139.178.70.110:22-36.138.19.180:43230.service: Deactivated successfully. Dec 13 01:49:31.614550 systemd[1]: Started sshd@132-139.178.70.110:22-36.138.19.180:43244.service - OpenSSH per-connection server daemon (36.138.19.180:43244). 
Dec 13 01:49:32.348291 sshd[6066]: Invalid user debian from 36.138.19.180 port 43244 Dec 13 01:49:32.523280 sshd[6066]: Connection closed by invalid user debian 36.138.19.180 port 43244 [preauth] Dec 13 01:49:32.526007 systemd[1]: sshd@132-139.178.70.110:22-36.138.19.180:43244.service: Deactivated successfully. Dec 13 01:49:32.733469 systemd[1]: Started sshd@133-139.178.70.110:22-36.138.19.180:43246.service - OpenSSH per-connection server daemon (36.138.19.180:43246). Dec 13 01:49:32.739801 systemd[1]: Started sshd@134-139.178.70.110:22-139.178.89.65:37664.service - OpenSSH per-connection server daemon (139.178.89.65:37664). Dec 13 01:49:32.804588 sshd[6075]: Accepted publickey for core from 139.178.89.65 port 37664 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:49:32.807493 sshd[6075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:49:32.811691 systemd-logind[1521]: New session 11 of user core. Dec 13 01:49:32.819091 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:49:32.999810 sshd[6075]: pam_unix(sshd:session): session closed for user core Dec 13 01:49:33.003500 systemd[1]: sshd@134-139.178.70.110:22-139.178.89.65:37664.service: Deactivated successfully. Dec 13 01:49:33.005179 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:49:33.007853 systemd-logind[1521]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:49:33.009819 systemd-logind[1521]: Removed session 11. Dec 13 01:49:33.569952 sshd[6073]: Invalid user debian from 36.138.19.180 port 43246 Dec 13 01:49:33.775545 sshd[6073]: Connection closed by invalid user debian 36.138.19.180 port 43246 [preauth] Dec 13 01:49:33.776778 systemd[1]: sshd@133-139.178.70.110:22-36.138.19.180:43246.service: Deactivated successfully. Dec 13 01:49:33.985963 systemd[1]: Started sshd@135-139.178.70.110:22-36.138.19.180:49644.service - OpenSSH per-connection server daemon (36.138.19.180:49644). 
Dec 13 01:49:34.978332 sshd[6092]: Invalid user debian from 36.138.19.180 port 49644
Dec 13 01:49:35.182934 sshd[6092]: Connection closed by invalid user debian 36.138.19.180 port 49644 [preauth]
Dec 13 01:49:35.184295 systemd[1]: sshd@135-139.178.70.110:22-36.138.19.180:49644.service: Deactivated successfully.
Dec 13 01:49:35.355742 systemd[1]: Started sshd@136-139.178.70.110:22-36.138.19.180:49650.service - OpenSSH per-connection server daemon (36.138.19.180:49650).
Dec 13 01:49:36.047970 sshd[6097]: Invalid user debian from 36.138.19.180 port 49650
Dec 13 01:49:36.218119 sshd[6097]: Connection closed by invalid user debian 36.138.19.180 port 49650 [preauth]
Dec 13 01:49:36.218651 systemd[1]: sshd@136-139.178.70.110:22-36.138.19.180:49650.service: Deactivated successfully.
Dec 13 01:49:36.418760 systemd[1]: Started sshd@137-139.178.70.110:22-36.138.19.180:49658.service - OpenSSH per-connection server daemon (36.138.19.180:49658).
Dec 13 01:49:37.205696 sshd[6102]: Invalid user debian from 36.138.19.180 port 49658
Dec 13 01:49:37.399735 sshd[6102]: Connection closed by invalid user debian 36.138.19.180 port 49658 [preauth]
Dec 13 01:49:37.400884 systemd[1]: sshd@137-139.178.70.110:22-36.138.19.180:49658.service: Deactivated successfully.
Dec 13 01:49:37.600421 systemd[1]: Started sshd@138-139.178.70.110:22-36.138.19.180:49674.service - OpenSSH per-connection server daemon (36.138.19.180:49674).
Dec 13 01:49:38.015111 systemd[1]: Started sshd@139-139.178.70.110:22-139.178.89.65:43400.service - OpenSSH per-connection server daemon (139.178.89.65:43400).
Dec 13 01:49:38.043650 sshd[6112]: Accepted publickey for core from 139.178.89.65 port 43400 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY
Dec 13 01:49:38.044599 sshd[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:49:38.047606 systemd-logind[1521]: New session 12 of user core.
Dec 13 01:49:38.056086 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:49:38.148281 sshd[6112]: pam_unix(sshd:session): session closed for user core
Dec 13 01:49:38.152383 systemd[1]: sshd@139-139.178.70.110:22-139.178.89.65:43400.service: Deactivated successfully.
Dec 13 01:49:38.153431 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:49:38.154346 systemd-logind[1521]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:49:38.158096 systemd[1]: Started sshd@140-139.178.70.110:22-139.178.89.65:43406.service - OpenSSH per-connection server daemon (139.178.89.65:43406).
Dec 13 01:49:38.158731 systemd-logind[1521]: Removed session 12.
Dec 13 01:49:38.200084 sshd[6126]: Accepted publickey for core from 139.178.89.65 port 43406 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY
Dec 13 01:49:38.200838 sshd[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:49:38.203917 systemd-logind[1521]: New session 13 of user core.
Dec 13 01:49:38.210005 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:49:38.308211 update_engine[1522]: I20241213 01:49:38.308027  1522 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:49:38.308211 update_engine[1522]: I20241213 01:49:38.308166  1522 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:49:38.310500 update_engine[1522]: I20241213 01:49:38.310065  1522 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:49:38.318270 update_engine[1522]: E20241213 01:49:38.318243  1522 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:49:38.318344 update_engine[1522]: I20241213 01:49:38.318286  1522 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Dec 13 01:49:38.344158 sshd[6126]: pam_unix(sshd:session): session closed for user core
Dec 13 01:49:38.350583 systemd[1]: sshd@140-139.178.70.110:22-139.178.89.65:43406.service: Deactivated successfully.
Dec 13 01:49:38.352423 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:49:38.353601 systemd-logind[1521]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:49:38.360878 systemd[1]: Started sshd@141-139.178.70.110:22-139.178.89.65:43420.service - OpenSSH per-connection server daemon (139.178.89.65:43420).
Dec 13 01:49:38.364807 systemd-logind[1521]: Removed session 13.
Dec 13 01:49:38.386357 sshd[6109]: Invalid user debian from 36.138.19.180 port 49674
Dec 13 01:49:38.399550 sshd[6137]: Accepted publickey for core from 139.178.89.65 port 43420 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY
Dec 13 01:49:38.400347 sshd[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:49:38.403092 systemd-logind[1521]: New session 14 of user core.
Dec 13 01:49:38.411004 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:49:38.499775 sshd[6137]: pam_unix(sshd:session): session closed for user core
Dec 13 01:49:38.501746 systemd[1]: sshd@141-139.178.70.110:22-139.178.89.65:43420.service: Deactivated successfully.
Dec 13 01:49:38.502871 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:49:38.503418 systemd-logind[1521]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:49:38.504100 systemd-logind[1521]: Removed session 14.
Dec 13 01:49:38.580144 sshd[6109]: Connection closed by invalid user debian 36.138.19.180 port 49674 [preauth]
Dec 13 01:49:38.581219 systemd[1]: sshd@138-139.178.70.110:22-36.138.19.180:49674.service: Deactivated successfully.
Dec 13 01:49:38.759629 systemd[1]: Started sshd@142-139.178.70.110:22-36.138.19.180:49686.service - OpenSSH per-connection server daemon (36.138.19.180:49686).
Dec 13 01:49:39.457397 sshd[6152]: Invalid user debian from 36.138.19.180 port 49686
Dec 13 01:49:39.628560 sshd[6152]: Connection closed by invalid user debian 36.138.19.180 port 49686 [preauth]
Dec 13 01:49:39.629705 systemd[1]: sshd@142-139.178.70.110:22-36.138.19.180:49686.service: Deactivated successfully.
Dec 13 01:49:39.810069 systemd[1]: Started sshd@143-139.178.70.110:22-36.138.19.180:49696.service - OpenSSH per-connection server daemon (36.138.19.180:49696).
Dec 13 01:49:40.509686 sshd[6161]: Invalid user debian from 36.138.19.180 port 49696
Dec 13 01:49:40.681173 sshd[6161]: Connection closed by invalid user debian 36.138.19.180 port 49696 [preauth]
Dec 13 01:49:40.682311 systemd[1]: sshd@143-139.178.70.110:22-36.138.19.180:49696.service: Deactivated successfully.
Dec 13 01:49:40.861375 systemd[1]: Started sshd@144-139.178.70.110:22-36.138.19.180:49710.service - OpenSSH per-connection server daemon (36.138.19.180:49710).
Dec 13 01:49:41.563839 sshd[6166]: Invalid user debian from 36.138.19.180 port 49710
Dec 13 01:49:41.736675 sshd[6166]: Connection closed by invalid user debian 36.138.19.180 port 49710 [preauth]
Dec 13 01:49:41.737821 systemd[1]: sshd@144-139.178.70.110:22-36.138.19.180:49710.service: Deactivated successfully.
Dec 13 01:49:41.914608 systemd[1]: Started sshd@145-139.178.70.110:22-36.138.19.180:49716.service - OpenSSH per-connection server daemon (36.138.19.180:49716).
Dec 13 01:49:42.611386 sshd[6190]: Invalid user debian from 36.138.19.180 port 49716
Dec 13 01:49:42.782211 sshd[6190]: Connection closed by invalid user debian 36.138.19.180 port 49716 [preauth]
Dec 13 01:49:42.783387 systemd[1]: sshd@145-139.178.70.110:22-36.138.19.180:49716.service: Deactivated successfully.
Dec 13 01:49:42.970239 systemd[1]: Started sshd@146-139.178.70.110:22-36.138.19.180:49722.service - OpenSSH per-connection server daemon (36.138.19.180:49722).
Dec 13 01:49:43.511988 systemd[1]: Started sshd@147-139.178.70.110:22-139.178.89.65:43428.service - OpenSSH per-connection server daemon (139.178.89.65:43428).
Dec 13 01:49:43.540233 sshd[6202]: Accepted publickey for core from 139.178.89.65 port 43428 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY
Dec 13 01:49:43.541129 sshd[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:49:43.544145 systemd-logind[1521]: New session 15 of user core.
Dec 13 01:49:43.550081 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:49:43.665088 sshd[6202]: pam_unix(sshd:session): session closed for user core
Dec 13 01:49:43.669086 systemd[1]: sshd@147-139.178.70.110:22-139.178.89.65:43428.service: Deactivated successfully.
Dec 13 01:49:43.670134 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:49:43.671446 systemd-logind[1521]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:49:43.672328 systemd-logind[1521]: Removed session 15.
Dec 13 01:49:43.680874 sshd[6199]: Invalid user debian from 36.138.19.180 port 49722
Dec 13 01:49:43.854333 sshd[6199]: Connection closed by invalid user debian 36.138.19.180 port 49722 [preauth]
Dec 13 01:49:43.855022 systemd[1]: sshd@146-139.178.70.110:22-36.138.19.180:49722.service: Deactivated successfully.
Dec 13 01:49:44.057907 systemd[1]: Started sshd@148-139.178.70.110:22-36.138.19.180:42756.service - OpenSSH per-connection server daemon (36.138.19.180:42756).
Dec 13 01:49:45.013833 sshd[6217]: Invalid user debian from 36.138.19.180 port 42756
Dec 13 01:49:45.212869 sshd[6217]: Connection closed by invalid user debian 36.138.19.180 port 42756 [preauth]
Dec 13 01:49:45.214515 systemd[1]: sshd@148-139.178.70.110:22-36.138.19.180:42756.service: Deactivated successfully.
Dec 13 01:49:45.420569 systemd[1]: Started sshd@149-139.178.70.110:22-36.138.19.180:42760.service - OpenSSH per-connection server daemon (36.138.19.180:42760).
Dec 13 01:49:46.244025 sshd[6243]: Invalid user debian from 36.138.19.180 port 42760
Dec 13 01:49:46.448619 sshd[6243]: Connection closed by invalid user debian 36.138.19.180 port 42760 [preauth]
Dec 13 01:49:46.448995 systemd[1]: sshd@149-139.178.70.110:22-36.138.19.180:42760.service: Deactivated successfully.
Dec 13 01:49:46.657725 systemd[1]: Started sshd@150-139.178.70.110:22-36.138.19.180:42762.service - OpenSSH per-connection server daemon (36.138.19.180:42762).
Dec 13 01:49:47.480095 sshd[6248]: Invalid user debian from 36.138.19.180 port 42762
Dec 13 01:49:47.681459 sshd[6248]: Connection closed by invalid user debian 36.138.19.180 port 42762 [preauth]
Dec 13 01:49:47.682128 systemd[1]: sshd@150-139.178.70.110:22-36.138.19.180:42762.service: Deactivated successfully.
Dec 13 01:49:47.856138 systemd[1]: Started sshd@151-139.178.70.110:22-36.138.19.180:42776.service - OpenSSH per-connection server daemon (36.138.19.180:42776).
Dec 13 01:49:48.313141 update_engine[1522]: I20241213 01:49:48.313097  1522 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:49:48.313388 update_engine[1522]: I20241213 01:49:48.313251  1522 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:49:48.313388 update_engine[1522]: I20241213 01:49:48.313380  1522 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:49:48.318397 update_engine[1522]: E20241213 01:49:48.318378  1522 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:49:48.318438 update_engine[1522]: I20241213 01:49:48.318408  1522 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 01:49:48.318438 update_engine[1522]: I20241213 01:49:48.318418  1522 omaha_request_action.cc:617] Omaha request response:
Dec 13 01:49:48.318485 update_engine[1522]: E20241213 01:49:48.318470  1522 omaha_request_action.cc:636] Omaha request network transfer failed.
Dec 13 01:49:48.318508 update_engine[1522]: I20241213 01:49:48.318495  1522 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Dec 13 01:49:48.318508 update_engine[1522]: I20241213 01:49:48.318499  1522 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 01:49:48.318508 update_engine[1522]: I20241213 01:49:48.318502  1522 update_attempter.cc:306] Processing Done.
Dec 13 01:49:48.318567 update_engine[1522]: E20241213 01:49:48.318513  1522 update_attempter.cc:619] Update failed.
Dec 13 01:49:48.318567 update_engine[1522]: I20241213 01:49:48.318517  1522 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Dec 13 01:49:48.318567 update_engine[1522]: I20241213 01:49:48.318521  1522 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Dec 13 01:49:48.318567 update_engine[1522]: I20241213 01:49:48.318523  1522 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Dec 13 01:49:48.318656 update_engine[1522]: I20241213 01:49:48.318573  1522 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 01:49:48.318656 update_engine[1522]: I20241213 01:49:48.318590  1522 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 13 01:49:48.318656 update_engine[1522]: I20241213 01:49:48.318594  1522 omaha_request_action.cc:272] Request:
Dec 13 01:49:48.318656 update_engine[1522]:
Dec 13 01:49:48.318656 update_engine[1522]:
Dec 13 01:49:48.318656 update_engine[1522]:
Dec 13 01:49:48.318656 update_engine[1522]:
Dec 13 01:49:48.318656 update_engine[1522]:
Dec 13 01:49:48.318656 update_engine[1522]:
Dec 13 01:49:48.318656 update_engine[1522]: I20241213 01:49:48.318598  1522 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:49:48.318901 update_engine[1522]: I20241213 01:49:48.318672  1522 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:49:48.318901 update_engine[1522]: I20241213 01:49:48.318775  1522 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:49:48.321679 locksmithd[1554]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Dec 13 01:49:48.383609 update_engine[1522]: E20241213 01:49:48.383438  1522 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:49:48.383609 update_engine[1522]: I20241213 01:49:48.383498  1522 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 01:49:48.383609 update_engine[1522]: I20241213 01:49:48.383507  1522 omaha_request_action.cc:617] Omaha request response:
Dec 13 01:49:48.383609 update_engine[1522]: I20241213 01:49:48.383513  1522 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 01:49:48.383609 update_engine[1522]: I20241213 01:49:48.383517  1522 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 01:49:48.383609 update_engine[1522]: I20241213 01:49:48.383522  1522 update_attempter.cc:306] Processing Done.
Dec 13 01:49:48.383609 update_engine[1522]: I20241213 01:49:48.383527  1522 update_attempter.cc:310] Error event sent.
Dec 13 01:49:48.383609 update_engine[1522]: I20241213 01:49:48.383535  1522 update_check_scheduler.cc:74] Next update check in 49m38s
Dec 13 01:49:48.394646 locksmithd[1554]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Dec 13 01:49:48.545037 sshd[6253]: Invalid user debian from 36.138.19.180 port 42776
Dec 13 01:49:48.676985 systemd[1]: Started sshd@152-139.178.70.110:22-139.178.89.65:36606.service - OpenSSH per-connection server daemon (139.178.89.65:36606).
Dec 13 01:49:48.714261 sshd[6253]: Connection closed by invalid user debian 36.138.19.180 port 42776 [preauth]
Dec 13 01:49:48.715614 systemd[1]: sshd@151-139.178.70.110:22-36.138.19.180:42776.service: Deactivated successfully.
Dec 13 01:49:48.873208 sshd[6256]: Accepted publickey for core from 139.178.89.65 port 36606 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY
Dec 13 01:49:48.918232 sshd[6256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:49:48.929186 systemd[1]: Started sshd@153-139.178.70.110:22-36.138.19.180:42780.service - OpenSSH per-connection server daemon (36.138.19.180:42780).
Dec 13 01:49:48.932548 systemd-logind[1521]: New session 16 of user core.
Dec 13 01:49:48.933607 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:49:49.376997 sshd[6256]: pam_unix(sshd:session): session closed for user core
Dec 13 01:49:49.378577 systemd[1]: sshd@152-139.178.70.110:22-139.178.89.65:36606.service: Deactivated successfully.
Dec 13 01:49:49.379801 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:49:49.380827 systemd-logind[1521]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:49:49.381433 systemd-logind[1521]: Removed session 16.
Dec 13 01:49:49.755289 sshd[6261]: Invalid user debian from 36.138.19.180 port 42780
Dec 13 01:49:49.963410 sshd[6261]: Connection closed by invalid user debian 36.138.19.180 port 42780 [preauth]
Dec 13 01:49:49.964733 systemd[1]: sshd@153-139.178.70.110:22-36.138.19.180:42780.service: Deactivated successfully.
Dec 13 01:49:50.173962 systemd[1]: Started sshd@154-139.178.70.110:22-36.138.19.180:42794.service - OpenSSH per-connection server daemon (36.138.19.180:42794).
Dec 13 01:49:51.013998 sshd[6275]: Invalid user debian from 36.138.19.180 port 42794
Dec 13 01:49:51.214671 sshd[6275]: Connection closed by invalid user debian 36.138.19.180 port 42794 [preauth]
Dec 13 01:49:51.216260 systemd[1]: sshd@154-139.178.70.110:22-36.138.19.180:42794.service: Deactivated successfully.
Dec 13 01:49:51.429097 systemd[1]: Started sshd@155-139.178.70.110:22-36.138.19.180:42808.service - OpenSSH per-connection server daemon (36.138.19.180:42808).
Dec 13 01:49:52.250029 sshd[6280]: Invalid user debian from 36.138.19.180 port 42808
Dec 13 01:49:52.452070 sshd[6280]: Connection closed by invalid user debian 36.138.19.180 port 42808 [preauth]
Dec 13 01:49:52.453213 systemd[1]: sshd@155-139.178.70.110:22-36.138.19.180:42808.service: Deactivated successfully.
Dec 13 01:49:52.661543 systemd[1]: Started sshd@156-139.178.70.110:22-36.138.19.180:42818.service - OpenSSH per-connection server daemon (36.138.19.180:42818).
Dec 13 01:49:53.490070 sshd[6285]: Invalid user debian from 36.138.19.180 port 42818
Dec 13 01:49:53.692050 sshd[6285]: Connection closed by invalid user debian 36.138.19.180 port 42818 [preauth]
Dec 13 01:49:53.693442 systemd[1]: sshd@156-139.178.70.110:22-36.138.19.180:42818.service: Deactivated successfully.
Dec 13 01:49:53.901756 systemd[1]: Started sshd@157-139.178.70.110:22-36.138.19.180:33192.service - OpenSSH per-connection server daemon (36.138.19.180:33192).
Dec 13 01:49:54.387669 systemd[1]: Started sshd@158-139.178.70.110:22-139.178.89.65:36616.service - OpenSSH per-connection server daemon (139.178.89.65:36616).
Dec 13 01:49:54.434282 sshd[6295]: Accepted publickey for core from 139.178.89.65 port 36616 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY
Dec 13 01:49:54.436612 sshd[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:49:54.439910 systemd-logind[1521]: New session 17 of user core.
Dec 13 01:49:54.451029 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:49:54.600614 sshd[6295]: pam_unix(sshd:session): session closed for user core
Dec 13 01:49:54.603228 systemd[1]: sshd@158-139.178.70.110:22-139.178.89.65:36616.service: Deactivated successfully.
Dec 13 01:49:54.604447 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:49:54.604990 systemd-logind[1521]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:49:54.605585 systemd-logind[1521]: Removed session 17.
Dec 13 01:49:54.707671 sshd[6292]: Invalid user debian from 36.138.19.180 port 33192
Dec 13 01:49:54.906971 sshd[6292]: Connection closed by invalid user debian 36.138.19.180 port 33192 [preauth]
Dec 13 01:49:54.908723 systemd[1]: sshd@157-139.178.70.110:22-36.138.19.180:33192.service: Deactivated successfully.
Dec 13 01:49:55.120466 systemd[1]: Started sshd@159-139.178.70.110:22-36.138.19.180:33202.service - OpenSSH per-connection server daemon (36.138.19.180:33202).
Dec 13 01:49:55.963864 sshd[6311]: Invalid user debian from 36.138.19.180 port 33202
Dec 13 01:49:56.167813 sshd[6311]: Connection closed by invalid user debian 36.138.19.180 port 33202 [preauth]
Dec 13 01:49:56.169149 systemd[1]: sshd@159-139.178.70.110:22-36.138.19.180:33202.service: Deactivated successfully.
Dec 13 01:49:56.344083 systemd[1]: Started sshd@160-139.178.70.110:22-36.138.19.180:33218.service - OpenSSH per-connection server daemon (36.138.19.180:33218).
Dec 13 01:49:57.030304 sshd[6316]: Invalid user debian from 36.138.19.180 port 33218
Dec 13 01:49:57.199057 sshd[6316]: Connection closed by invalid user debian 36.138.19.180 port 33218 [preauth]
Dec 13 01:49:57.200344 systemd[1]: sshd@160-139.178.70.110:22-36.138.19.180:33218.service: Deactivated successfully.
Dec 13 01:49:57.414957 systemd[1]: Started sshd@161-139.178.70.110:22-36.138.19.180:33224.service - OpenSSH per-connection server daemon (36.138.19.180:33224).
Dec 13 01:49:58.233973 sshd[6321]: Invalid user debian from 36.138.19.180 port 33224
Dec 13 01:49:58.436623 sshd[6321]: Connection closed by invalid user debian 36.138.19.180 port 33224 [preauth]
Dec 13 01:49:58.437741 systemd[1]: sshd@161-139.178.70.110:22-36.138.19.180:33224.service: Deactivated successfully.
Dec 13 01:49:58.645546 systemd[1]: Started sshd@162-139.178.70.110:22-36.138.19.180:33236.service - OpenSSH per-connection server daemon (36.138.19.180:33236).
Dec 13 01:49:59.463958 sshd[6326]: Invalid user debian from 36.138.19.180 port 33236
Dec 13 01:49:59.609287 systemd[1]: Started sshd@163-139.178.70.110:22-139.178.89.65:37030.service - OpenSSH per-connection server daemon (139.178.89.65:37030).
Dec 13 01:49:59.638080 sshd[6329]: Accepted publickey for core from 139.178.89.65 port 37030 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY
Dec 13 01:49:59.638937 sshd[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:49:59.641390 systemd-logind[1521]: New session 18 of user core.
Dec 13 01:49:59.648051 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:49:59.665946 sshd[6326]: Connection closed by invalid user debian 36.138.19.180 port 33236 [preauth]
Dec 13 01:49:59.666988 systemd[1]: sshd@162-139.178.70.110:22-36.138.19.180:33236.service: Deactivated successfully.
Dec 13 01:49:59.736628 sshd[6329]: pam_unix(sshd:session): session closed for user core
Dec 13 01:49:59.742401 systemd[1]: sshd@163-139.178.70.110:22-139.178.89.65:37030.service: Deactivated successfully.
Dec 13 01:49:59.743860 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:49:59.745008 systemd-logind[1521]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:49:59.746750 systemd[1]: Started sshd@164-139.178.70.110:22-139.178.89.65:37034.service - OpenSSH per-connection server daemon (139.178.89.65:37034).
Dec 13 01:49:59.747633 systemd-logind[1521]: Removed session 18.
Dec 13 01:49:59.775205 sshd[6343]: Accepted publickey for core from 139.178.89.65 port 37034 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY
Dec 13 01:49:59.776015 sshd[6343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:49:59.778479 systemd-logind[1521]: New session 19 of user core.
Dec 13 01:49:59.781020 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:49:59.877108 systemd[1]: Started sshd@165-139.178.70.110:22-36.138.19.180:33246.service - OpenSSH per-connection server daemon (36.138.19.180:33246).
Dec 13 01:50:00.116972 sshd[6343]: pam_unix(sshd:session): session closed for user core
Dec 13 01:50:00.128177 systemd[1]: Started sshd@166-139.178.70.110:22-139.178.89.65:37042.service - OpenSSH per-connection server daemon (139.178.89.65:37042).
Dec 13 01:50:00.128449 systemd[1]: sshd@164-139.178.70.110:22-139.178.89.65:37034.service: Deactivated successfully.
Dec 13 01:50:00.132494 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:50:00.134321 systemd-logind[1521]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:50:00.135506 systemd-logind[1521]: Removed session 19.
Dec 13 01:50:00.176510 sshd[6355]: Accepted publickey for core from 139.178.89.65 port 37042 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY
Dec 13 01:50:00.177344 sshd[6355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:50:00.179886 systemd-logind[1521]: New session 20 of user core.
Dec 13 01:50:00.188019 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:50:00.718008 sshd[6351]: Invalid user debian from 36.138.19.180 port 33246
Dec 13 01:50:00.920760 sshd[6351]: Connection closed by invalid user debian 36.138.19.180 port 33246 [preauth]
Dec 13 01:50:00.919725 systemd[1]: sshd@165-139.178.70.110:22-36.138.19.180:33246.service: Deactivated successfully.
Dec 13 01:50:01.116104 systemd[1]: Started sshd@167-139.178.70.110:22-36.138.19.180:33256.service - OpenSSH per-connection server daemon (36.138.19.180:33256).
Dec 13 01:50:01.481286 sshd[6355]: pam_unix(sshd:session): session closed for user core
Dec 13 01:50:01.486568 systemd[1]: sshd@166-139.178.70.110:22-139.178.89.65:37042.service: Deactivated successfully.
Dec 13 01:50:01.487880 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:50:01.489322 systemd-logind[1521]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:50:01.490148 systemd[1]: Started sshd@168-139.178.70.110:22-139.178.89.65:37050.service - OpenSSH per-connection server daemon (139.178.89.65:37050).
Dec 13 01:50:01.491600 systemd-logind[1521]: Removed session 20.
Dec 13 01:50:01.559659 sshd[6378]: Accepted publickey for core from 139.178.89.65 port 37050 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY
Dec 13 01:50:01.560849 sshd[6378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:50:01.563939 systemd-logind[1521]: New session 21 of user core.
Dec 13 01:50:01.567051 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:50:01.918114 sshd[6370]: Invalid user debian from 36.138.19.180 port 33256
Dec 13 01:50:02.103037 sshd[6378]: pam_unix(sshd:session): session closed for user core
Dec 13 01:50:02.108772 systemd[1]: sshd@168-139.178.70.110:22-139.178.89.65:37050.service: Deactivated successfully.
Dec 13 01:50:02.110693 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:50:02.112488 systemd-logind[1521]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:50:02.114249 sshd[6370]: Connection closed by invalid user debian 36.138.19.180 port 33256 [preauth]
Dec 13 01:50:02.118990 systemd[1]: Started sshd@169-139.178.70.110:22-139.178.89.65:37058.service - OpenSSH per-connection server daemon (139.178.89.65:37058).
Dec 13 01:50:02.119999 systemd[1]: sshd@167-139.178.70.110:22-36.138.19.180:33256.service: Deactivated successfully.
Dec 13 01:50:02.123175 systemd-logind[1521]: Removed session 21.
Dec 13 01:50:02.168720 sshd[6392]: Accepted publickey for core from 139.178.89.65 port 37058 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY
Dec 13 01:50:02.169707 sshd[6392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:50:02.172844 systemd-logind[1521]: New session 22 of user core.
Dec 13 01:50:02.177055 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:50:02.276071 sshd[6392]: pam_unix(sshd:session): session closed for user core
Dec 13 01:50:02.277644 systemd-logind[1521]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:50:02.278676 systemd[1]: sshd@169-139.178.70.110:22-139.178.89.65:37058.service: Deactivated successfully.
Dec 13 01:50:02.279861 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:50:02.281646 systemd-logind[1521]: Removed session 22.
Dec 13 01:50:02.292541 systemd[1]: Started sshd@170-139.178.70.110:22-36.138.19.180:33268.service - OpenSSH per-connection server daemon (36.138.19.180:33268).
Dec 13 01:50:02.987443 sshd[6406]: Invalid user debian from 36.138.19.180 port 33268
Dec 13 01:50:03.157751 sshd[6406]: Connection closed by invalid user debian 36.138.19.180 port 33268 [preauth]
Dec 13 01:50:03.159168 systemd[1]: sshd@170-139.178.70.110:22-36.138.19.180:33268.service: Deactivated successfully.
Dec 13 01:50:03.370378 systemd[1]: Started sshd@171-139.178.70.110:22-36.138.19.180:33282.service - OpenSSH per-connection server daemon (36.138.19.180:33282).
Dec 13 01:50:04.190421 sshd[6416]: Invalid user debian from 36.138.19.180 port 33282
Dec 13 01:50:04.392343 sshd[6416]: Connection closed by invalid user debian 36.138.19.180 port 33282 [preauth]
Dec 13 01:50:04.393049 systemd[1]: sshd@171-139.178.70.110:22-36.138.19.180:33282.service: Deactivated successfully.
Dec 13 01:50:04.593697 systemd[1]: Started sshd@172-139.178.70.110:22-36.138.19.180:47306.service - OpenSSH per-connection server daemon (36.138.19.180:47306).
Dec 13 01:50:05.399323 sshd[6441]: Invalid user debian from 36.138.19.180 port 47306
Dec 13 01:50:05.597294 sshd[6441]: Connection closed by invalid user debian 36.138.19.180 port 47306 [preauth]
Dec 13 01:50:05.599372 systemd[1]: sshd@172-139.178.70.110:22-36.138.19.180:47306.service: Deactivated successfully.
Dec 13 01:50:05.814076 systemd[1]: Started sshd@173-139.178.70.110:22-36.138.19.180:47312.service - OpenSSH per-connection server daemon (36.138.19.180:47312).
Dec 13 01:50:06.641365 sshd[6449]: Invalid user debian from 36.138.19.180 port 47312
Dec 13 01:50:06.844980 sshd[6449]: Connection closed by invalid user debian 36.138.19.180 port 47312 [preauth]
Dec 13 01:50:06.846577 systemd[1]: sshd@173-139.178.70.110:22-36.138.19.180:47312.service: Deactivated successfully.
Dec 13 01:50:07.055774 systemd[1]: Started sshd@174-139.178.70.110:22-36.138.19.180:47326.service - OpenSSH per-connection server daemon (36.138.19.180:47326).
Dec 13 01:50:07.285615 systemd[1]: Started sshd@175-139.178.70.110:22-139.178.89.65:37062.service - OpenSSH per-connection server daemon (139.178.89.65:37062).
Dec 13 01:50:07.318334 sshd[6457]: Accepted publickey for core from 139.178.89.65 port 37062 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY
Dec 13 01:50:07.319309 sshd[6457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:50:07.323490 systemd-logind[1521]: New session 23 of user core.
Dec 13 01:50:07.333085 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:50:07.437253 sshd[6457]: pam_unix(sshd:session): session closed for user core
Dec 13 01:50:07.438829 systemd-logind[1521]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:50:07.439944 systemd[1]: sshd@175-139.178.70.110:22-139.178.89.65:37062.service: Deactivated successfully.
Dec 13 01:50:07.441363 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:50:07.442158 systemd-logind[1521]: Removed session 23.
Dec 13 01:50:07.877904 sshd[6454]: Invalid user debian from 36.138.19.180 port 47326
Dec 13 01:50:08.082466 sshd[6454]: Connection closed by invalid user debian 36.138.19.180 port 47326 [preauth]
Dec 13 01:50:08.083261 systemd[1]: sshd@174-139.178.70.110:22-36.138.19.180:47326.service: Deactivated successfully.
Dec 13 01:50:08.290401 systemd[1]: Started sshd@176-139.178.70.110:22-36.138.19.180:47338.service - OpenSSH per-connection server daemon (36.138.19.180:47338).
Dec 13 01:50:09.124910 sshd[6474]: Invalid user debian from 36.138.19.180 port 47338
Dec 13 01:50:09.324764 sshd[6474]: Connection closed by invalid user debian 36.138.19.180 port 47338 [preauth]
Dec 13 01:50:09.325492 systemd[1]: sshd@176-139.178.70.110:22-36.138.19.180:47338.service: Deactivated successfully.
Dec 13 01:50:09.504786 systemd[1]: Started sshd@177-139.178.70.110:22-36.138.19.180:47350.service - OpenSSH per-connection server daemon (36.138.19.180:47350).
Dec 13 01:50:10.205828 sshd[6479]: Invalid user debian from 36.138.19.180 port 47350
Dec 13 01:50:10.377250 sshd[6479]: Connection closed by invalid user debian 36.138.19.180 port 47350 [preauth]
Dec 13 01:50:10.378638 systemd[1]: sshd@177-139.178.70.110:22-36.138.19.180:47350.service: Deactivated successfully.
Dec 13 01:50:10.578736 systemd[1]: Started sshd@178-139.178.70.110:22-36.138.19.180:47362.service - OpenSSH per-connection server daemon (36.138.19.180:47362).
Dec 13 01:50:11.246723 systemd[1]: run-containerd-runc-k8s.io-5e541ef34abcf78988759c9c1cd67c34a4e25ccd466c99c12a821deef885db6a-runc.Y974vH.mount: Deactivated successfully.
Dec 13 01:50:11.379526 sshd[6490]: Invalid user debian from 36.138.19.180 port 47362
Dec 13 01:50:11.575995 sshd[6490]: Connection closed by invalid user debian 36.138.19.180 port 47362 [preauth]
Dec 13 01:50:11.575841 systemd[1]: sshd@178-139.178.70.110:22-36.138.19.180:47362.service: Deactivated successfully.
Dec 13 01:50:11.783998 systemd[1]: Started sshd@179-139.178.70.110:22-36.138.19.180:47368.service - OpenSSH per-connection server daemon (36.138.19.180:47368). Dec 13 01:50:12.445233 systemd[1]: Started sshd@180-139.178.70.110:22-139.178.89.65:36254.service - OpenSSH per-connection server daemon (139.178.89.65:36254). Dec 13 01:50:12.477683 sshd[6517]: Accepted publickey for core from 139.178.89.65 port 36254 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:50:12.478419 sshd[6517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:50:12.480761 systemd-logind[1521]: New session 24 of user core. Dec 13 01:50:12.488003 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:50:12.602438 sshd[6517]: pam_unix(sshd:session): session closed for user core Dec 13 01:50:12.603950 systemd[1]: sshd@180-139.178.70.110:22-139.178.89.65:36254.service: Deactivated successfully. Dec 13 01:50:12.605164 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:50:12.605964 systemd-logind[1521]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:50:12.606550 systemd-logind[1521]: Removed session 24. Dec 13 01:50:12.629104 sshd[6514]: Invalid user debian from 36.138.19.180 port 47368 Dec 13 01:50:12.830801 sshd[6514]: Connection closed by invalid user debian 36.138.19.180 port 47368 [preauth] Dec 13 01:50:12.832455 systemd[1]: sshd@179-139.178.70.110:22-36.138.19.180:47368.service: Deactivated successfully. Dec 13 01:50:13.046019 systemd[1]: Started sshd@181-139.178.70.110:22-36.138.19.180:47374.service - OpenSSH per-connection server daemon (36.138.19.180:47374). Dec 13 01:50:13.878584 sshd[6531]: Invalid user debian from 36.138.19.180 port 47374 Dec 13 01:50:14.085479 sshd[6531]: Connection closed by invalid user debian 36.138.19.180 port 47374 [preauth] Dec 13 01:50:14.086909 systemd[1]: sshd@181-139.178.70.110:22-36.138.19.180:47374.service: Deactivated successfully. 
Dec 13 01:50:14.294802 systemd[1]: Started sshd@182-139.178.70.110:22-36.138.19.180:38246.service - OpenSSH per-connection server daemon (36.138.19.180:38246). Dec 13 01:50:15.119752 sshd[6536]: Invalid user debian from 36.138.19.180 port 38246 Dec 13 01:50:15.323417 sshd[6536]: Connection closed by invalid user debian 36.138.19.180 port 38246 [preauth] Dec 13 01:50:15.324981 systemd[1]: sshd@182-139.178.70.110:22-36.138.19.180:38246.service: Deactivated successfully. Dec 13 01:50:15.532833 systemd[1]: Started sshd@183-139.178.70.110:22-36.138.19.180:38260.service - OpenSSH per-connection server daemon (36.138.19.180:38260). Dec 13 01:50:16.370435 sshd[6564]: Invalid user debian from 36.138.19.180 port 38260 Dec 13 01:50:16.571951 sshd[6564]: Connection closed by invalid user debian 36.138.19.180 port 38260 [preauth] Dec 13 01:50:16.572783 systemd[1]: sshd@183-139.178.70.110:22-36.138.19.180:38260.service: Deactivated successfully. Dec 13 01:50:16.784708 systemd[1]: Started sshd@184-139.178.70.110:22-36.138.19.180:38272.service - OpenSSH per-connection server daemon (36.138.19.180:38272). Dec 13 01:50:17.611972 systemd[1]: Started sshd@185-139.178.70.110:22-139.178.89.65:36268.service - OpenSSH per-connection server daemon (139.178.89.65:36268). Dec 13 01:50:17.618433 sshd[6569]: Invalid user debian from 36.138.19.180 port 38272 Dec 13 01:50:17.671240 sshd[6574]: Accepted publickey for core from 139.178.89.65 port 36268 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:50:17.672145 sshd[6574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:50:17.674848 systemd-logind[1521]: New session 25 of user core. Dec 13 01:50:17.683070 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:50:17.816208 sshd[6574]: pam_unix(sshd:session): session closed for user core Dec 13 01:50:17.817878 systemd-logind[1521]: Session 25 logged out. Waiting for processes to exit. 
Dec 13 01:50:17.818138 systemd[1]: sshd@185-139.178.70.110:22-139.178.89.65:36268.service: Deactivated successfully. Dec 13 01:50:17.819240 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:50:17.820258 systemd-logind[1521]: Removed session 25. Dec 13 01:50:17.824156 sshd[6569]: Connection closed by invalid user debian 36.138.19.180 port 38272 [preauth] Dec 13 01:50:17.822963 systemd[1]: sshd@184-139.178.70.110:22-36.138.19.180:38272.service: Deactivated successfully. Dec 13 01:50:18.016531 systemd[1]: Started sshd@186-139.178.70.110:22-36.138.19.180:38288.service - OpenSSH per-connection server daemon (36.138.19.180:38288). Dec 13 01:50:18.800410 sshd[6589]: Invalid user debian from 36.138.19.180 port 38288 Dec 13 01:50:18.993290 sshd[6589]: Connection closed by invalid user debian 36.138.19.180 port 38288 [preauth] Dec 13 01:50:18.994978 systemd[1]: sshd@186-139.178.70.110:22-36.138.19.180:38288.service: Deactivated successfully. Dec 13 01:50:19.206985 systemd[1]: Started sshd@187-139.178.70.110:22-36.138.19.180:38290.service - OpenSSH per-connection server daemon (36.138.19.180:38290). Dec 13 01:50:20.031754 sshd[6594]: Invalid user debian from 36.138.19.180 port 38290 Dec 13 01:50:20.234973 sshd[6594]: Connection closed by invalid user debian 36.138.19.180 port 38290 [preauth] Dec 13 01:50:20.235885 systemd[1]: sshd@187-139.178.70.110:22-36.138.19.180:38290.service: Deactivated successfully. Dec 13 01:50:20.420074 systemd[1]: Started sshd@188-139.178.70.110:22-36.138.19.180:38296.service - OpenSSH per-connection server daemon (36.138.19.180:38296). Dec 13 01:50:21.128389 sshd[6599]: Invalid user debian from 36.138.19.180 port 38296 Dec 13 01:50:21.303918 sshd[6599]: Connection closed by invalid user debian 36.138.19.180 port 38296 [preauth] Dec 13 01:50:21.303804 systemd[1]: sshd@188-139.178.70.110:22-36.138.19.180:38296.service: Deactivated successfully.