Sep 5 00:11:34.958956 kernel: microcode: updated early: 0xf4 -> 0xfc, date = 2023-07-27 Sep 5 00:11:34.958970 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:54:07 -00 2024 Sep 5 00:11:34.958977 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 5 00:11:34.958982 kernel: BIOS-provided physical RAM map: Sep 5 00:11:34.958986 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Sep 5 00:11:34.958990 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Sep 5 00:11:34.958995 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Sep 5 00:11:34.958999 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Sep 5 00:11:34.959003 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Sep 5 00:11:34.959007 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b2cfff] usable Sep 5 00:11:34.959011 kernel: BIOS-e820: [mem 0x0000000081b2d000-0x0000000081b2dfff] ACPI NVS Sep 5 00:11:34.959016 kernel: BIOS-e820: [mem 0x0000000081b2e000-0x0000000081b2efff] reserved Sep 5 00:11:34.959020 kernel: BIOS-e820: [mem 0x0000000081b2f000-0x000000008afccfff] usable Sep 5 00:11:34.959024 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Sep 5 00:11:34.959030 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Sep 5 00:11:34.959034 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Sep 5 00:11:34.959040 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Sep 5 00:11:34.959044 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Sep 5 00:11:34.959049 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Sep 5 00:11:34.959054 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 5 00:11:34.959058 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Sep 5 00:11:34.959063 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Sep 5 00:11:34.959067 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Sep 5 00:11:34.959072 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Sep 5 00:11:34.959076 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Sep 5 00:11:34.959081 kernel: NX (Execute Disable) protection: active Sep 5 00:11:34.959085 kernel: APIC: Static calls initialized Sep 5 00:11:34.959090 kernel: SMBIOS 3.2.1 present. 
Sep 5 00:11:34.959096 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Sep 5 00:11:34.959100 kernel: tsc: Detected 3400.000 MHz processor Sep 5 00:11:34.959105 kernel: tsc: Detected 3399.906 MHz TSC Sep 5 00:11:34.959109 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 5 00:11:34.959115 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 5 00:11:34.959119 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Sep 5 00:11:34.959124 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Sep 5 00:11:34.959129 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 5 00:11:34.959134 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Sep 5 00:11:34.959138 kernel: Using GB pages for direct mapping Sep 5 00:11:34.959144 kernel: ACPI: Early table checksum verification disabled Sep 5 00:11:34.959149 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Sep 5 00:11:34.959156 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Sep 5 00:11:34.959161 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Sep 5 00:11:34.959166 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Sep 5 00:11:34.959171 kernel: ACPI: FACS 0x000000008C66CF80 000040 Sep 5 00:11:34.959177 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Sep 5 00:11:34.959182 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Sep 5 00:11:34.959187 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Sep 5 00:11:34.959192 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Sep 5 00:11:34.959197 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Sep 5 00:11:34.959202 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Sep 5 00:11:34.959207 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Sep 5 00:11:34.959213 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Sep 5 00:11:34.959218 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 5 00:11:34.959223 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Sep 5 00:11:34.959228 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Sep 5 00:11:34.959233 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 5 00:11:34.959238 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 5 00:11:34.959243 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Sep 5 00:11:34.959248 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Sep 5 00:11:34.959253 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 5 00:11:34.959262 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Sep 5 00:11:34.959267 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Sep 5 00:11:34.959272 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Sep 5 00:11:34.959277 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Sep 5 00:11:34.959282 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Sep 5 00:11:34.959287 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Sep 5 00:11:34.959292 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Sep 5 00:11:34.959297 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Sep 5 00:11:34.959330 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Sep 5 00:11:34.959350 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Sep 5 00:11:34.959356 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Sep 5 00:11:34.959361 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Sep 5 00:11:34.959366 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Sep 5 00:11:34.959371 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Sep 5 00:11:34.959376 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Sep 5 00:11:34.959381 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Sep 5 00:11:34.959386 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Sep 5 00:11:34.959392 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Sep 5 00:11:34.959397 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Sep 5 00:11:34.959402 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Sep 5 00:11:34.959407 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Sep 5 00:11:34.959411 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Sep 5 00:11:34.959416 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Sep 5 00:11:34.959421 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Sep 5 00:11:34.959426 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Sep 5 00:11:34.959431 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Sep 5 00:11:34.959437 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Sep 5 00:11:34.959442 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Sep 5 00:11:34.959447 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Sep 5 00:11:34.959452 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Sep 5 00:11:34.959457 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Sep 5 00:11:34.959462 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Sep 5 00:11:34.959467 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Sep 5 00:11:34.959472 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Sep 5 00:11:34.959476 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Sep 5 00:11:34.959481 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Sep 5 00:11:34.959487 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Sep 5 00:11:34.959492 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Sep 5 00:11:34.959497 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Sep 5 00:11:34.959502 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Sep 5 00:11:34.959507 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Sep 5 00:11:34.959512 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Sep 5 00:11:34.959517 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Sep 5 00:11:34.959522 kernel: No NUMA configuration found Sep 5 00:11:34.959527 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Sep 5 00:11:34.959533 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Sep 5 00:11:34.959538 kernel: Zone ranges: Sep 5 00:11:34.959543 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 5 00:11:34.959548 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 5 00:11:34.959553 kernel: Normal [mem 
0x0000000100000000-0x000000086effffff] Sep 5 00:11:34.959558 kernel: Movable zone start for each node Sep 5 00:11:34.959563 kernel: Early memory node ranges Sep 5 00:11:34.959568 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Sep 5 00:11:34.959573 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Sep 5 00:11:34.959578 kernel: node 0: [mem 0x0000000040400000-0x0000000081b2cfff] Sep 5 00:11:34.959583 kernel: node 0: [mem 0x0000000081b2f000-0x000000008afccfff] Sep 5 00:11:34.959588 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Sep 5 00:11:34.959594 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Sep 5 00:11:34.959602 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Sep 5 00:11:34.959608 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Sep 5 00:11:34.959613 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 5 00:11:34.959619 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Sep 5 00:11:34.959625 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Sep 5 00:11:34.959630 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Sep 5 00:11:34.959636 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Sep 5 00:11:34.959641 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Sep 5 00:11:34.959646 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Sep 5 00:11:34.959652 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Sep 5 00:11:34.959657 kernel: ACPI: PM-Timer IO Port: 0x1808 Sep 5 00:11:34.959662 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Sep 5 00:11:34.959668 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Sep 5 00:11:34.959674 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Sep 5 00:11:34.959679 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Sep 5 00:11:34.959685 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Sep 5 00:11:34.959690 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Sep 5 00:11:34.959695 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Sep 5 00:11:34.959701 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Sep 5 00:11:34.959706 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Sep 5 00:11:34.959711 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Sep 5 00:11:34.959716 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Sep 5 00:11:34.959722 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Sep 5 00:11:34.959728 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Sep 5 00:11:34.959733 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Sep 5 00:11:34.959739 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Sep 5 00:11:34.959744 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Sep 5 00:11:34.959749 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Sep 5 00:11:34.959754 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 5 00:11:34.959760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 5 00:11:34.959765 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 5 00:11:34.959771 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 5 00:11:34.959777 kernel: TSC deadline timer available Sep 5 00:11:34.959782 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Sep 5 00:11:34.959788 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Sep 5 00:11:34.959793 
kernel: Booting paravirtualized kernel on bare hardware Sep 5 00:11:34.959799 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 5 00:11:34.959804 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Sep 5 00:11:34.959809 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u262144 Sep 5 00:11:34.959815 kernel: pcpu-alloc: s196904 r8192 d32472 u262144 alloc=1*2097152 Sep 5 00:11:34.959820 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Sep 5 00:11:34.959827 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 5 00:11:34.959832 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 5 00:11:34.959837 kernel: random: crng init done Sep 5 00:11:34.959843 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Sep 5 00:11:34.959848 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Sep 5 00:11:34.959854 kernel: Fallback order for Node 0: 0 Sep 5 00:11:34.959859 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Sep 5 00:11:34.959864 kernel: Policy zone: Normal Sep 5 00:11:34.959871 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 5 00:11:34.959876 kernel: software IO TLB: area num 16. Sep 5 00:11:34.959881 kernel: Memory: 32720312K/33452980K available (12288K kernel code, 2304K rwdata, 22708K rodata, 42704K init, 2488K bss, 732408K reserved, 0K cma-reserved) Sep 5 00:11:34.959887 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Sep 5 00:11:34.959892 kernel: ftrace: allocating 37748 entries in 148 pages Sep 5 00:11:34.959898 kernel: ftrace: allocated 148 pages with 3 groups Sep 5 00:11:34.959903 kernel: Dynamic Preempt: voluntary Sep 5 00:11:34.959908 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 5 00:11:34.959914 kernel: rcu: RCU event tracing is enabled. Sep 5 00:11:34.959921 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Sep 5 00:11:34.959926 kernel: Trampoline variant of Tasks RCU enabled. Sep 5 00:11:34.959931 kernel: Rude variant of Tasks RCU enabled. Sep 5 00:11:34.959937 kernel: Tracing variant of Tasks RCU enabled. Sep 5 00:11:34.959942 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 5 00:11:34.959948 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Sep 5 00:11:34.959953 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Sep 5 00:11:34.959958 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 5 00:11:34.959963 kernel: Console: colour dummy device 80x25 Sep 5 00:11:34.959970 kernel: printk: console [tty0] enabled Sep 5 00:11:34.959975 kernel: printk: console [ttyS1] enabled Sep 5 00:11:34.959980 kernel: ACPI: Core revision 20230628 Sep 5 00:11:34.959986 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
Sep 5 00:11:34.959991 kernel: APIC: Switch to symmetric I/O mode setup Sep 5 00:11:34.959996 kernel: DMAR: Host address width 39 Sep 5 00:11:34.960002 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Sep 5 00:11:34.960007 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Sep 5 00:11:34.960012 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Sep 5 00:11:34.960019 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Sep 5 00:11:34.960024 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Sep 5 00:11:34.960030 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Sep 5 00:11:34.960035 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Sep 5 00:11:34.960041 kernel: x2apic enabled Sep 5 00:11:34.960046 kernel: APIC: Switched APIC routing to: cluster x2apic Sep 5 00:11:34.960051 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Sep 5 00:11:34.960057 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Sep 5 00:11:34.960062 kernel: CPU0: Thermal monitoring enabled (TM1) Sep 5 00:11:34.960068 kernel: process: using mwait in idle threads Sep 5 00:11:34.960074 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 5 00:11:34.960079 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Sep 5 00:11:34.960085 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 5 00:11:34.960090 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Sep 5 00:11:34.960095 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Sep 5 00:11:34.960100 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Sep 5 00:11:34.960106 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 5 00:11:34.960111 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Sep 5 00:11:34.960116 kernel: RETBleed: Mitigation: Enhanced IBRS Sep 5 00:11:34.960122 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 5 00:11:34.960128 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 5 00:11:34.960133 kernel: TAA: Mitigation: TSX disabled Sep 5 00:11:34.960138 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Sep 5 00:11:34.960144 kernel: SRBDS: Mitigation: Microcode Sep 5 00:11:34.960149 kernel: GDS: Mitigation: Microcode Sep 5 00:11:34.960154 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 5 00:11:34.960160 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 5 00:11:34.960165 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 5 00:11:34.960170 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Sep 5 00:11:34.960176 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Sep 5 00:11:34.960181 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 5 00:11:34.960187 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Sep 5 00:11:34.960192 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Sep 5 00:11:34.960198 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
Sep 5 00:11:34.960203 kernel: Freeing SMP alternatives memory: 32K Sep 5 00:11:34.960208 kernel: pid_max: default: 32768 minimum: 301 Sep 5 00:11:34.960214 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 5 00:11:34.960219 kernel: landlock: Up and running. Sep 5 00:11:34.960224 kernel: SELinux: Initializing. Sep 5 00:11:34.960230 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 00:11:34.960235 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 00:11:34.960240 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Sep 5 00:11:34.960247 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. Sep 5 00:11:34.960252 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. Sep 5 00:11:34.960258 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. Sep 5 00:11:34.960273 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Sep 5 00:11:34.960278 kernel: ... version: 4 Sep 5 00:11:34.960283 kernel: ... bit width: 48 Sep 5 00:11:34.960289 kernel: ... generic registers: 4 Sep 5 00:11:34.960294 kernel: ... value mask: 0000ffffffffffff Sep 5 00:11:34.960326 kernel: ... max period: 00007fffffffffff Sep 5 00:11:34.960332 kernel: ... fixed-purpose events: 3 Sep 5 00:11:34.960353 kernel: ... event mask: 000000070000000f Sep 5 00:11:34.960358 kernel: signal: max sigframe size: 2032 Sep 5 00:11:34.960363 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Sep 5 00:11:34.960369 kernel: rcu: Hierarchical SRCU implementation. Sep 5 00:11:34.960374 kernel: rcu: Max phase no-delay instances is 400. Sep 5 00:11:34.960380 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Sep 5 00:11:34.960385 kernel: smp: Bringing up secondary CPUs ... Sep 5 00:11:34.960390 kernel: smpboot: x86: Booting SMP configuration: Sep 5 00:11:34.960396 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Sep 5 00:11:34.960402 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Sep 5 00:11:34.960408 kernel: smp: Brought up 1 node, 16 CPUs Sep 5 00:11:34.960413 kernel: smpboot: Max logical packages: 1 Sep 5 00:11:34.960419 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Sep 5 00:11:34.960424 kernel: devtmpfs: initialized Sep 5 00:11:34.960429 kernel: x86/mm: Memory block size: 128MB Sep 5 00:11:34.960435 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b2d000-0x81b2dfff] (4096 bytes) Sep 5 00:11:34.960440 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Sep 5 00:11:34.960447 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 5 00:11:34.960452 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Sep 5 00:11:34.960457 kernel: pinctrl core: initialized pinctrl subsystem Sep 5 00:11:34.960463 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 5 00:11:34.960468 kernel: audit: initializing netlink subsys (disabled) Sep 5 00:11:34.960473 kernel: audit: type=2000 audit(1725495089.039:1): state=initialized audit_enabled=0 res=1 Sep 5 00:11:34.960479 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 5 00:11:34.960484 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 5 00:11:34.960489 kernel: cpuidle: using governor menu Sep 5 00:11:34.960496 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 5 00:11:34.960501 kernel: dca service started, version 1.12.1 Sep 5 00:11:34.960506 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Sep 5 00:11:34.960512 kernel: PCI: Using configuration type 1 for base access Sep 5 00:11:34.960517 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Sep 5 00:11:34.960522 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 5 00:11:34.960528 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 5 00:11:34.960533 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 5 00:11:34.960539 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 5 00:11:34.960545 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 5 00:11:34.960550 kernel: ACPI: Added _OSI(Module Device) Sep 5 00:11:34.960556 kernel: ACPI: Added _OSI(Processor Device) Sep 5 00:11:34.960561 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 5 00:11:34.960566 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 5 00:11:34.960572 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Sep 5 00:11:34.960577 kernel: ACPI: Dynamic OEM Table Load: Sep 5 00:11:34.960582 kernel: ACPI: SSDT 0xFFFF91F981ECC400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Sep 5 00:11:34.960588 kernel: ACPI: Dynamic OEM Table Load: Sep 5 00:11:34.960594 kernel: ACPI: SSDT 0xFFFF91F981EC0800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Sep 5 00:11:34.960599 kernel: ACPI: Dynamic OEM Table Load: Sep 5 00:11:34.960605 kernel: ACPI: SSDT 0xFFFF91F981536D00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Sep 5 00:11:34.960610 kernel: ACPI: Dynamic OEM Table Load: Sep 5 00:11:34.960615 kernel: ACPI: SSDT 0xFFFF91F981EC3800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Sep 5 00:11:34.960620 kernel: ACPI: Dynamic OEM Table Load: Sep 5 00:11:34.960626 kernel: ACPI: SSDT 0xFFFF91F981ED4000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Sep 5 00:11:34.960631 kernel: ACPI: Dynamic OEM Table Load: Sep 5 00:11:34.960636 kernel: ACPI: SSDT 0xFFFF91F981EC9000 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Sep 5 00:11:34.960642 kernel: ACPI: _OSC evaluated successfully for all CPUs Sep 5 00:11:34.960648 kernel: ACPI: Interpreter enabled Sep 5 00:11:34.960653 kernel: ACPI: PM: (supports S0 S5) Sep 5 00:11:34.960659 kernel: ACPI: Using IOAPIC for interrupt routing Sep 5 00:11:34.960664 kernel: HEST: Enabling Firmware First mode for corrected errors. Sep 5 00:11:34.960670 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Sep 5 00:11:34.960675 kernel: HEST: Table parsing has been initialized. Sep 5 00:11:34.960680 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Sep 5 00:11:34.960686 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 5 00:11:34.960691 kernel: PCI: Using E820 reservations for host bridge windows Sep 5 00:11:34.960697 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Sep 5 00:11:34.960703 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Sep 5 00:11:34.960708 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Sep 5 00:11:34.960714 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Sep 5 00:11:34.960719 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Sep 5 00:11:34.960724 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Sep 5 00:11:34.960730 kernel: ACPI: \_TZ_.FN00: New power resource Sep 5 00:11:34.960735 kernel: ACPI: \_TZ_.FN01: New power resource Sep 5 00:11:34.960740 kernel: ACPI: \_TZ_.FN02: New power resource Sep 5 00:11:34.960747 kernel: ACPI: \_TZ_.FN03: New power resource Sep 5 00:11:34.960752 kernel: ACPI: \_TZ_.FN04: New power resource Sep 5 00:11:34.960757 kernel: ACPI: \PIN_: New power resource Sep 5 00:11:34.960763 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Sep 5 00:11:34.960864 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 5 00:11:34.960920 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Sep 5 00:11:34.960969 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Sep 5 00:11:34.960979 kernel: PCI host bridge to bus 0000:00 Sep 5 00:11:34.961029 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 5 00:11:34.961073 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 5 00:11:34.961115 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 5 00:11:34.961157 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Sep 5 00:11:34.961199 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Sep 5 00:11:34.961240 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Sep 5 00:11:34.961329 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Sep 5 00:11:34.961401 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Sep 5 00:11:34.961453 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Sep 5 00:11:34.961506 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Sep 5 00:11:34.961554 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Sep 5 00:11:34.961606 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Sep 5 00:11:34.961658 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Sep 5 00:11:34.961710 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Sep 5 00:11:34.961758 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Sep 5 00:11:34.961805 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Sep 5 00:11:34.961856 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Sep 5 00:11:34.961903 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Sep 5 00:11:34.961953 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Sep 5 00:11:34.962003 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Sep 5 00:11:34.962051 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Sep 5 00:11:34.962106 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Sep 5 00:11:34.962153 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] 
Sep 5 00:11:34.962206 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Sep 5 00:11:34.962256 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Sep 5 00:11:34.962351 kernel: pci 0000:00:16.0: PME# supported from D3hot Sep 5 00:11:34.962410 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Sep 5 00:11:34.962461 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Sep 5 00:11:34.962507 kernel: pci 0000:00:16.1: PME# supported from D3hot Sep 5 00:11:34.962558 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Sep 5 00:11:34.962606 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Sep 5 00:11:34.962655 kernel: pci 0000:00:16.4: PME# supported from D3hot Sep 5 00:11:34.962706 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Sep 5 00:11:34.962755 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Sep 5 00:11:34.962802 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Sep 5 00:11:34.962850 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Sep 5 00:11:34.962896 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Sep 5 00:11:34.962944 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Sep 5 00:11:34.962995 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Sep 5 00:11:34.963042 kernel: pci 0000:00:17.0: PME# supported from D3hot Sep 5 00:11:34.963095 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Sep 5 00:11:34.963143 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Sep 5 00:11:34.963201 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Sep 5 00:11:34.963250 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Sep 5 00:11:34.963331 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Sep 5 00:11:34.963393 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Sep 5 00:11:34.963448 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Sep 5 00:11:34.963496 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Sep 5 00:11:34.963550 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Sep 5 00:11:34.963598 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Sep 5 00:11:34.963649 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Sep 5 00:11:34.963697 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Sep 5 00:11:34.963749 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Sep 5 00:11:34.963801 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Sep 5 00:11:34.963851 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Sep 5 00:11:34.963899 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Sep 5 00:11:34.963952 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Sep 5 00:11:34.964001 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Sep 5 00:11:34.964055 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Sep 5 00:11:34.964106 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Sep 5 00:11:34.964157 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Sep 5 00:11:34.964207 kernel: pci 0000:01:00.0: PME# supported from D3cold Sep 5 00:11:34.964255 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Sep 5 00:11:34.964360 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Sep 5 00:11:34.964415 kernel: pci 0000:01:00.1: [15b3:1015] type 00 
class 0x020000 Sep 5 00:11:34.964464 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Sep 5 00:11:34.964514 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Sep 5 00:11:34.964565 kernel: pci 0000:01:00.1: PME# supported from D3cold Sep 5 00:11:34.964614 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Sep 5 00:11:34.964662 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Sep 5 00:11:34.964712 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 5 00:11:34.964759 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Sep 5 00:11:34.964808 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 5 00:11:34.964857 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Sep 5 00:11:34.964910 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Sep 5 00:11:34.964963 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Sep 5 00:11:34.965012 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Sep 5 00:11:34.965061 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Sep 5 00:11:34.965109 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Sep 5 00:11:34.965160 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 5 00:11:34.965208 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Sep 5 00:11:34.965257 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 5 00:11:34.965353 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Sep 5 00:11:34.965408 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Sep 5 00:11:34.965460 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Sep 5 00:11:34.965509 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Sep 5 00:11:34.965558 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Sep 5 00:11:34.965606 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Sep 5 00:11:34.965656 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Sep 5 00:11:34.965707 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Sep 5 00:11:34.965755 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 5 00:11:34.965802 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Sep 5 00:11:34.965851 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Sep 5 00:11:34.965905 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Sep 5 00:11:34.965954 kernel: pci 0000:06:00.0: enabling Extended Tags Sep 5 00:11:34.966004 kernel: pci 0000:06:00.0: supports D1 D2 Sep 5 00:11:34.966053 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 5 00:11:34.966105 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Sep 5 00:11:34.966153 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Sep 5 00:11:34.966202 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Sep 5 00:11:34.966256 kernel: pci_bus 0000:07: extended config space not accessible Sep 5 00:11:34.966366 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Sep 5 00:11:34.966418 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Sep 5 00:11:34.966469 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Sep 5 00:11:34.966523 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Sep 5 00:11:34.966574 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 5 00:11:34.966626 kernel: pci 0000:07:00.0: supports D1 D2 Sep 5 
00:11:34.966677 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 5 00:11:34.966727 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Sep 5 00:11:34.966776 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Sep 5 00:11:34.966825 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 5 00:11:34.966834 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Sep 5 00:11:34.966842 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Sep 5 00:11:34.966848 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Sep 5 00:11:34.966854 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Sep 5 00:11:34.966860 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Sep 5 00:11:34.966866 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Sep 5 00:11:34.966871 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Sep 5 00:11:34.966877 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Sep 5 00:11:34.966883 kernel: iommu: Default domain type: Translated Sep 5 00:11:34.966889 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 5 00:11:34.966895 kernel: PCI: Using ACPI for IRQ routing Sep 5 00:11:34.966901 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 5 00:11:34.966907 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Sep 5 00:11:34.966912 kernel: e820: reserve RAM buffer [mem 0x81b2d000-0x83ffffff] Sep 5 00:11:34.966918 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Sep 5 00:11:34.966923 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Sep 5 00:11:34.966929 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Sep 5 00:11:34.966934 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Sep 5 00:11:34.966986 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Sep 5 00:11:34.967037 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Sep 5 00:11:34.967089 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 5 00:11:34.967098 kernel: vgaarb: loaded Sep 5 00:11:34.967104 kernel: clocksource: Switched to clocksource tsc-early Sep 5 00:11:34.967109 kernel: VFS: Disk quotas dquot_6.6.0 Sep 5 00:11:34.967115 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 5 00:11:34.967121 kernel: pnp: PnP ACPI init Sep 5 00:11:34.967171 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Sep 5 00:11:34.967221 kernel: pnp 00:02: [dma 0 disabled] Sep 5 00:11:34.967272 kernel: pnp 00:03: [dma 0 disabled] Sep 5 00:11:34.967366 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Sep 5 00:11:34.967412 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Sep 5 00:11:34.967458 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Sep 5 00:11:34.967506 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Sep 5 00:11:34.967552 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Sep 5 00:11:34.967598 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Sep 5 00:11:34.967640 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Sep 5 00:11:34.967684 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Sep 5 00:11:34.967726 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Sep 5 00:11:34.967771 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Sep 5 00:11:34.967814 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Sep 5 
00:11:34.967865 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Sep 5 00:11:34.967910 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Sep 5 00:11:34.967956 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Sep 5 00:11:34.968001 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Sep 5 00:11:34.968044 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Sep 5 00:11:34.968088 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Sep 5 00:11:34.968131 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Sep 5 00:11:34.968182 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Sep 5 00:11:34.968191 kernel: pnp: PnP ACPI: found 10 devices Sep 5 00:11:34.968197 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 5 00:11:34.968203 kernel: NET: Registered PF_INET protocol family Sep 5 00:11:34.968208 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 5 00:11:34.968214 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 5 00:11:34.968220 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 5 00:11:34.968226 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 5 00:11:34.968233 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 5 00:11:34.968239 kernel: TCP: Hash tables configured (established 262144 bind 65536) Sep 5 00:11:34.968245 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 5 00:11:34.968250 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 5 00:11:34.968256 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 5 00:11:34.968264 kernel: NET: Registered PF_XDP protocol family Sep 5 00:11:34.968359 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Sep 5 00:11:34.968410 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Sep 5 00:11:34.968461 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Sep 5 00:11:34.968512 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 5 00:11:34.968562 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 5 00:11:34.968613 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 5 00:11:34.968664 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 5 00:11:34.968712 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 5 00:11:34.968764 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Sep 5 00:11:34.968812 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 5 00:11:34.968864 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Sep 5 00:11:34.968911 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Sep 5 00:11:34.968960 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 5 00:11:34.969008 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Sep 5 00:11:34.969057 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Sep 5 00:11:34.969108 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 5 00:11:34.969157 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Sep 5 00:11:34.969204 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Sep 5 00:11:34.969256 kernel: pci 0000:06:00.0: PCI 
bridge to [bus 07] Sep 5 00:11:34.969333 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Sep 5 00:11:34.969401 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 5 00:11:34.969450 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Sep 5 00:11:34.969497 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Sep 5 00:11:34.969546 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Sep 5 00:11:34.969593 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Sep 5 00:11:34.969638 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 5 00:11:34.969682 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 5 00:11:34.969725 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 5 00:11:34.969768 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Sep 5 00:11:34.969811 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Sep 5 00:11:34.969859 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Sep 5 00:11:34.969907 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Sep 5 00:11:34.969957 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Sep 5 00:11:34.970002 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Sep 5 00:11:34.970049 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Sep 5 00:11:34.970093 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Sep 5 00:11:34.970140 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Sep 5 00:11:34.970187 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Sep 5 00:11:34.970233 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Sep 5 00:11:34.970283 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Sep 5 00:11:34.970291 kernel: PCI: CLS 64 bytes, default 64 Sep 5 00:11:34.970326 kernel: DMAR: No ATSR found Sep 5 00:11:34.970332 kernel: DMAR: No SATC found Sep 5 00:11:34.970358 kernel: DMAR: dmar0: Using Queued invalidation Sep 5 00:11:34.970408 kernel: pci 0000:00:00.0: Adding to iommu group 0 Sep 5 00:11:34.970456 kernel: pci 0000:00:01.0: Adding to iommu group 1 Sep 5 00:11:34.970508 kernel: pci 0000:00:08.0: Adding to iommu group 2 Sep 5 00:11:34.970556 kernel: pci 0000:00:12.0: Adding to iommu group 3 Sep 5 00:11:34.970605 kernel: pci 0000:00:14.0: Adding to iommu group 4 Sep 5 00:11:34.970653 kernel: pci 0000:00:14.2: Adding to iommu group 4 Sep 5 00:11:34.970701 kernel: pci 0000:00:15.0: Adding to iommu group 5 Sep 5 00:11:34.970749 kernel: pci 0000:00:15.1: Adding to iommu group 5 Sep 5 00:11:34.970799 kernel: pci 0000:00:16.0: Adding to iommu group 6 Sep 5 00:11:34.970847 kernel: pci 0000:00:16.1: Adding to iommu group 6 Sep 5 00:11:34.970897 kernel: pci 0000:00:16.4: Adding to iommu group 6 Sep 5 00:11:34.970945 kernel: pci 0000:00:17.0: Adding to iommu group 7 Sep 5 00:11:34.970994 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Sep 5 00:11:34.971042 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Sep 5 00:11:34.971090 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Sep 5 00:11:34.971139 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Sep 5 00:11:34.971187 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Sep 5 00:11:34.971237 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Sep 5 00:11:34.971290 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Sep 5 00:11:34.971380 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Sep 5 00:11:34.971428 kernel: pci 
0000:00:1f.5: Adding to iommu group 14 Sep 5 00:11:34.971478 kernel: pci 0000:01:00.0: Adding to iommu group 1 Sep 5 00:11:34.971527 kernel: pci 0000:01:00.1: Adding to iommu group 1 Sep 5 00:11:34.971578 kernel: pci 0000:03:00.0: Adding to iommu group 15 Sep 5 00:11:34.971627 kernel: pci 0000:04:00.0: Adding to iommu group 16 Sep 5 00:11:34.971677 kernel: pci 0000:06:00.0: Adding to iommu group 17 Sep 5 00:11:34.971732 kernel: pci 0000:07:00.0: Adding to iommu group 17 Sep 5 00:11:34.971740 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Sep 5 00:11:34.971747 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 5 00:11:34.971752 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Sep 5 00:11:34.971758 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Sep 5 00:11:34.971764 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Sep 5 00:11:34.971770 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Sep 5 00:11:34.971775 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Sep 5 00:11:34.971825 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Sep 5 00:11:34.971836 kernel: Initialise system trusted keyrings Sep 5 00:11:34.971842 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Sep 5 00:11:34.971847 kernel: Key type asymmetric registered Sep 5 00:11:34.971853 kernel: Asymmetric key parser 'x509' registered Sep 5 00:11:34.971859 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 5 00:11:34.971864 kernel: io scheduler mq-deadline registered Sep 5 00:11:34.971870 kernel: io scheduler kyber registered Sep 5 00:11:34.971876 kernel: io scheduler bfq registered Sep 5 00:11:34.971925 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Sep 5 00:11:34.971974 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Sep 5 00:11:34.972024 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Sep 5 00:11:34.972073 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Sep 5 00:11:34.972121 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Sep 5 00:11:34.972169 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Sep 5 00:11:34.972222 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Sep 5 00:11:34.972233 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Sep 5 00:11:34.972239 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Sep 5 00:11:34.972244 kernel: pstore: Using crash dump compression: deflate Sep 5 00:11:34.972250 kernel: pstore: Registered erst as persistent store backend Sep 5 00:11:34.972256 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 5 00:11:34.972264 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 5 00:11:34.972270 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 5 00:11:34.972276 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 5 00:11:34.972282 kernel: hpet_acpi_add: no address or irqs in _CRS Sep 5 00:11:34.972382 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Sep 5 00:11:34.972390 kernel: i8042: PNP: No PS/2 controller found. 
Sep 5 00:11:34.972434 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Sep 5 00:11:34.972478 kernel: rtc_cmos rtc_cmos: registered as rtc0 Sep 5 00:11:34.972523 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-09-05T00:11:33 UTC (1725495093) Sep 5 00:11:34.972568 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Sep 5 00:11:34.972576 kernel: intel_pstate: Intel P-state driver initializing Sep 5 00:11:34.972582 kernel: intel_pstate: Disabling energy efficiency optimization Sep 5 00:11:34.972589 kernel: intel_pstate: HWP enabled Sep 5 00:11:34.972595 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Sep 5 00:11:34.972601 kernel: vesafb: scrolling: redraw Sep 5 00:11:34.972607 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Sep 5 00:11:34.972612 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000770abf58, using 768k, total 768k Sep 5 00:11:34.972618 kernel: Console: switching to colour frame buffer device 128x48 Sep 5 00:11:34.972624 kernel: fb0: VESA VGA frame buffer device Sep 5 00:11:34.972630 kernel: NET: Registered PF_INET6 protocol family Sep 5 00:11:34.972635 kernel: Segment Routing with IPv6 Sep 5 00:11:34.972642 kernel: In-situ OAM (IOAM) with IPv6 Sep 5 00:11:34.972648 kernel: NET: Registered PF_PACKET protocol family Sep 5 00:11:34.972653 kernel: Key type dns_resolver registered Sep 5 00:11:34.972659 kernel: microcode: Microcode Update Driver: v2.2. Sep 5 00:11:34.972665 kernel: IPI shorthand broadcast: enabled Sep 5 00:11:34.972670 kernel: sched_clock: Marking stable (2477000703, 1380667248)->(4395746327, -538078376) Sep 5 00:11:34.972676 kernel: registered taskstats version 1 Sep 5 00:11:34.972682 kernel: Loading compiled-in X.509 certificates Sep 5 00:11:34.972688 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 8669771ab5e11f458b79e6634fe685dacc266b18' Sep 5 00:11:34.972694 kernel: Key type .fscrypt registered Sep 5 00:11:34.972700 kernel: Key type fscrypt-provisioning registered Sep 5 00:11:34.972706 kernel: ima: Allocated hash algorithm: sha1 Sep 5 00:11:34.972711 kernel: ima: No architecture policies found Sep 5 00:11:34.972717 kernel: clk: Disabling unused clocks Sep 5 00:11:34.972723 kernel: Freeing unused kernel image (initmem) memory: 42704K Sep 5 00:11:34.972728 kernel: Write protecting the kernel read-only data: 36864k Sep 5 00:11:34.972734 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K Sep 5 00:11:34.972740 kernel: Run /init as init process Sep 5 00:11:34.972747 kernel: with arguments: Sep 5 00:11:34.972752 kernel: /init Sep 5 00:11:34.972758 kernel: with environment: Sep 5 00:11:34.972763 kernel: HOME=/ Sep 5 00:11:34.972769 kernel: TERM=linux Sep 5 00:11:34.972775 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 5 00:11:34.972782 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 00:11:34.972790 systemd[1]: Detected architecture x86-64. Sep 5 00:11:34.972796 systemd[1]: Running in initrd. Sep 5 00:11:34.972802 systemd[1]: No hostname configured, using default hostname. Sep 5 00:11:34.972808 systemd[1]: Hostname set to . Sep 5 00:11:34.972813 systemd[1]: Initializing machine ID from random generator. 
Sep 5 00:11:34.972820 systemd[1]: Queued start job for default target initrd.target. Sep 5 00:11:34.972826 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:11:34.972832 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:11:34.972839 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 00:11:34.972845 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 5 00:11:34.972851 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-ROOT.device - /dev/disk/by-partlabel/ROOT... Sep 5 00:11:34.972857 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 5 00:11:34.972863 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 5 00:11:34.972870 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 5 00:11:34.972875 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Sep 5 00:11:34.972882 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Sep 5 00:11:34.972888 kernel: clocksource: Switched to clocksource tsc Sep 5 00:11:34.972894 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:11:34.972900 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:11:34.972906 systemd[1]: Reached target ignition-diskful-subsequent.target - Ignition Subsequent Boot Disk Setup. Sep 5 00:11:34.972912 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:11:34.972918 systemd[1]: Reached target slices.target - Slice Units. Sep 5 00:11:34.972924 systemd[1]: Reached target swap.target - Swaps. Sep 5 00:11:34.972930 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:11:34.972937 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:11:34.972943 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:11:34.972949 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 00:11:34.972955 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 5 00:11:34.972961 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 00:11:34.972967 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:11:34.972973 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:11:34.972979 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 5 00:11:34.972986 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:11:34.972992 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 00:11:34.972998 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:11:34.973014 systemd-journald[260]: Collecting audit messages is disabled. Sep 5 00:11:34.973030 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:11:34.973037 systemd-journald[260]: Journal started Sep 5 00:11:34.973050 systemd-journald[260]: Runtime Journal (/run/log/journal/f38516e809c0444699b368789a21880d) is 8.0M, max 639.9M, 631.9M free. 
Sep 5 00:11:35.006636 systemd-modules-load[262]: Inserted module 'overlay' Sep 5 00:11:35.031350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:11:35.031362 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:11:35.037074 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 00:11:35.037164 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:11:35.037248 systemd[1]: Finished systemd-fsck-usr.service. Sep 5 00:11:35.038138 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 00:11:35.038540 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:11:35.080264 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 00:11:35.099177 systemd-modules-load[262]: Inserted module 'br_netfilter' Sep 5 00:11:35.155377 kernel: Bridge firewalling registered Sep 5 00:11:35.099578 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:11:35.165707 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:11:35.186920 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 00:11:35.208006 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:11:35.249579 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:11:35.260925 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:11:35.261341 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:11:35.267268 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:11:35.267615 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:11:35.276618 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:11:35.288314 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 5 00:11:35.337993 dracut-cmdline[299]: dracut-dracut-053 Sep 5 00:11:35.345378 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 5 00:11:35.415360 kernel: SCSI subsystem initialized Sep 5 00:11:35.438313 kernel: Loading iSCSI transport class v2.0-870. Sep 5 00:11:35.461292 kernel: iscsi: registered transport (tcp) Sep 5 00:11:35.492768 kernel: iscsi: registered transport (qla4xxx) Sep 5 00:11:35.492790 kernel: QLogic iSCSI HBA Driver Sep 5 00:11:35.526009 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 00:11:35.545426 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 00:11:35.630440 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 5 00:11:35.630464 kernel: device-mapper: uevent: version 1.0.3 Sep 5 00:11:35.650205 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 5 00:11:35.709323 kernel: raid6: avx2x4 gen() 52088 MB/s Sep 5 00:11:35.741334 kernel: raid6: avx2x2 gen() 52656 MB/s Sep 5 00:11:35.777939 kernel: raid6: avx2x1 gen() 45164 MB/s Sep 5 00:11:35.777957 kernel: raid6: using algorithm avx2x2 gen() 52656 MB/s Sep 5 00:11:35.825881 kernel: raid6: .... xor() 31721 MB/s, rmw enabled Sep 5 00:11:35.825899 kernel: raid6: using avx2x2 recovery algorithm Sep 5 00:11:35.867306 kernel: xor: automatically using best checksumming function avx Sep 5 00:11:35.984300 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 00:11:35.989963 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:11:36.012614 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:11:36.019636 systemd-udevd[486]: Using default interface naming scheme 'v255'. Sep 5 00:11:36.023362 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:11:36.059512 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 00:11:36.086879 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:11:36.109498 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation Sep 5 00:11:36.109564 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:11:36.169379 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:11:36.194494 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 00:11:36.229228 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 5 00:11:36.229250 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 5 00:11:36.229270 kernel: cryptd: max_cpu_qlen set to 1000 Sep 5 00:11:36.194548 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:11:36.244726 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:11:36.310476 kernel: PTP clock support registered Sep 5 00:11:36.310492 kernel: ACPI: bus type USB registered Sep 5 00:11:36.310502 kernel: usbcore: registered new interface driver usbfs Sep 5 00:11:36.310511 kernel: usbcore: registered new interface driver hub Sep 5 00:11:36.310520 kernel: usbcore: registered new device driver usb Sep 5 00:11:36.273342 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:11:36.347405 kernel: libata version 3.00 loaded. Sep 5 00:11:36.347423 kernel: AVX2 version of gcm_enc/dec engaged. Sep 5 00:11:36.347436 kernel: AES CTR mode by8 optimization enabled Sep 5 00:11:36.273375 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:11:36.393831 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Sep 5 00:11:36.393846 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Sep 5 00:11:36.393853 kernel: ahci 0000:00:17.0: version 3.0 Sep 5 00:11:36.393946 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 5 00:11:36.340439 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 5 00:11:37.023332 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Sep 5 00:11:37.023430 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Sep 5 00:11:37.023513 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Sep 5 00:11:37.023620 kernel: pps pps0: new PPS source ptp0 Sep 5 00:11:37.023731 kernel: igb 0000:03:00.0: added PHC on eth0 Sep 5 00:11:37.023846 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 5 00:11:37.023927 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e6:6a Sep 5 00:11:37.023993 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Sep 5 00:11:37.024058 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Sep 5 00:11:37.024120 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Sep 5 00:11:37.024181 kernel: scsi host0: ahci Sep 5 00:11:37.024244 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 5 00:11:37.024314 kernel: pps pps1: new PPS source ptp1 Sep 5 00:11:37.024376 kernel: igb 0000:04:00.0: added PHC on eth1 Sep 5 00:11:37.024439 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 5 00:11:37.024500 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e6:6b Sep 5 00:11:37.024562 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Sep 5 00:11:37.024635 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Sep 5 00:11:37.024700 kernel: scsi host1: ahci Sep 5 00:11:37.024762 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Sep 5 00:11:37.024824 kernel: scsi host2: ahci Sep 5 00:11:37.024883 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Sep 5 00:11:37.024942 kernel: scsi host3: ahci Sep 5 00:11:37.025003 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Sep 5 00:11:37.025065 kernel: hub 1-0:1.0: USB hub found Sep 5 00:11:37.025130 kernel: scsi host4: ahci Sep 5 00:11:37.025189 kernel: hub 1-0:1.0: 16 ports detected Sep 5 00:11:37.025248 kernel: scsi host5: ahci Sep 5 00:11:37.025321 kernel: hub 2-0:1.0: USB hub found Sep 5 00:11:37.025391 kernel: scsi host6: ahci Sep 5 00:11:37.025455 kernel: hub 2-0:1.0: 10 ports detected Sep 5 00:11:37.025522 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 132 Sep 5 00:11:37.025537 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Sep 5 00:11:37.025608 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 132 Sep 5 00:11:37.025617 kernel: hub 1-14:1.0: USB hub found Sep 5 00:11:37.025680 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 132 Sep 5 00:11:37.025689 kernel: hub 1-14:1.0: 4 ports detected Sep 5 00:11:37.025749 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 132 Sep 5 00:11:37.025757 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 132 Sep 5 00:11:37.025764 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 132 Sep 5 00:11:37.025773 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 132 Sep 5 00:11:36.533440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 5 00:11:37.074156 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Sep 5 00:11:37.074241 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Sep 5 00:11:37.074314 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 5 00:11:37.083670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:11:37.104435 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:11:37.141059 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:11:37.221294 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Sep 5 00:11:37.317274 kernel: ata7: SATA link down (SStatus 0 SControl 300) Sep 5 00:11:37.317334 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 5 00:11:37.332264 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 5 00:11:37.332405 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 5 00:11:37.348268 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Sep 5 00:11:37.348507 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 5 00:11:37.401335 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 5 00:11:37.416304 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 5 00:11:37.431266 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 5 00:11:37.447270 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 5 00:11:37.463299 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 5 00:11:37.499305 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 5 00:11:37.499347 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 5 00:11:37.534435 kernel: ata1.00: Features: NCQ-prio Sep 5 00:11:37.548291 kernel: ata2.00: Features: NCQ-prio Sep 5 00:11:37.548307 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 5 00:11:37.565355 kernel: ata1.00: configured for UDMA/133 Sep 5 00:11:37.570302 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Sep 5 00:11:37.570391 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 5 00:11:37.571302 kernel: ata2.00: configured for UDMA/133 Sep 5 00:11:37.583705 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 5 00:11:37.584295 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 5 00:11:37.676308 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 5 00:11:37.676347 kernel: ata2.00: Enabling discard_zeroes_data Sep 5 00:11:37.699765 kernel: ata1.00: Enabling discard_zeroes_data Sep 5 00:11:37.699781 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 5 00:11:37.704479 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 5 00:11:37.719453 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Sep 5 00:11:37.719532 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Sep 5 00:11:37.724678 kernel: sd 1:0:0:0: [sda] Write Protect is off Sep 5 00:11:37.729906 kernel: sd 0:0:0:0: [sdb] Write Protect is off Sep 5 00:11:37.734689 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Sep 5 00:11:37.734774 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 5 00:11:37.739521 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Sep 5 
00:11:37.748524 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Sep 5 00:11:37.757580 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 5 00:11:37.868713 kernel: ata2.00: Enabling discard_zeroes_data Sep 5 00:11:37.868730 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 5 00:11:37.868820 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Sep 5 00:11:37.869301 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Sep 5 00:11:37.884265 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Sep 5 00:11:37.884368 kernel: ata1.00: Enabling discard_zeroes_data Sep 5 00:11:37.969315 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 5 00:11:37.969333 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Sep 5 00:11:38.001653 kernel: usbcore: registered new interface driver usbhid Sep 5 00:11:38.001696 kernel: usbhid: USB HID core driver Sep 5 00:11:38.012325 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/sdb3 scanned by (udev-worker) (542) Sep 5 00:11:38.022336 systemd[1]: Found device dev-disk-by\x2dpartlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Sep 5 00:11:38.063581 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sdb6 scanned by (udev-worker) (566) Sep 5 00:11:38.062381 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Sep 5 00:11:38.118118 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Sep 5 00:11:38.118130 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 5 00:11:38.099558 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Sep 5 00:11:38.231305 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Sep 5 00:11:38.231394 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Sep 5 00:11:38.231463 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Sep 5 00:11:38.231541 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Sep 5 00:11:38.231550 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Sep 5 00:11:38.144480 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Sep 5 00:11:38.148282 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Sep 5 00:11:38.267985 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 5 00:11:38.298594 systemd[1]: Starting decrypt-root.service - Generate and execute a systemd-cryptsetup service to decrypt the ROOT partition... Sep 5 00:11:38.352516 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 00:11:38.352923 systemd[1]: Finished decrypt-root.service - Generate and execute a systemd-cryptsetup service to decrypt the ROOT partition. Sep 5 00:11:38.383087 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 5 00:11:38.383279 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 00:11:38.412679 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Sep 5 00:11:38.438420 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:11:38.438473 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:11:38.466450 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:11:38.503622 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 5 00:11:38.505131 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 5 00:11:38.540615 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 00:11:38.552082 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 00:11:38.594357 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 5 00:11:38.580407 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:11:38.605393 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:11:38.633387 sh[708]: Success Sep 5 00:11:38.643391 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 00:11:38.655111 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:11:38.688174 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 5 00:11:38.699489 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 5 00:11:38.714178 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 5 00:11:38.721111 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 5 00:11:38.858421 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772 Sep 5 00:11:38.858434 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:11:38.858441 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 5 00:11:38.858449 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 00:11:38.858456 kernel: BTRFS info (device dm-0): using free space tree Sep 5 00:11:38.858462 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 5 00:11:38.860595 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 00:11:38.896995 systemd-fsck[752]: ROOT: clean, 85/553520 files, 83083/553472 blocks Sep 5 00:11:38.906669 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 5 00:11:38.928486 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 5 00:11:39.028265 kernel: EXT4-fs (sdb9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none. Sep 5 00:11:39.028658 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 5 00:11:39.038793 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 5 00:11:39.080424 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 00:11:39.190224 kernel: BTRFS info (device sdb6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 5 00:11:39.190236 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:11:39.190243 kernel: BTRFS info (device sdb6): using free space tree Sep 5 00:11:39.190250 kernel: BTRFS info (device sdb6): enabling ssd optimizations Sep 5 00:11:39.190265 kernel: BTRFS info (device sdb6): auto enabling async discard Sep 5 00:11:39.089426 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 5 00:11:39.207578 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 5 00:11:39.217221 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 5 00:11:39.253629 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 5 00:11:39.318135 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory Sep 5 00:11:39.328350 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory Sep 5 00:11:39.339352 initrd-setup-root[799]: cut: /sysroot/etc/shadow: No such file or directory Sep 5 00:11:39.349381 initrd-setup-root[806]: cut: /sysroot/etc/gshadow: No such file or directory Sep 5 00:11:39.417019 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 5 00:11:39.438498 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 00:11:39.449603 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:11:39.460631 systemd[1]: Reached target ignition-subsequent.target - Subsequent (Not Ignition) boot complete. Sep 5 00:11:39.503462 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 00:11:39.530433 initrd-setup-root-after-ignition[951]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:11:39.530433 initrd-setup-root-after-ignition[951]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:11:39.544521 initrd-setup-root-after-ignition[955]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:11:39.578781 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 00:11:39.579015 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 00:11:39.600805 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 00:11:39.620527 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 00:11:39.640829 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 00:11:39.650515 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 00:11:39.704165 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:11:39.729576 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 00:11:39.744686 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:11:39.753582 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 00:11:39.781798 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 00:11:39.782031 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:11:39.823713 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 00:11:39.833842 systemd[1]: Stopped target basic.target - Basic System. Sep 5 00:11:39.851948 systemd[1]: Stopped target ignition-subsequent.target - Subsequent (Not Ignition) boot complete. Sep 5 00:11:39.872849 systemd[1]: Stopped target ignition-diskful-subsequent.target - Ignition Subsequent Boot Disk Setup. Sep 5 00:11:39.896949 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 00:11:39.920962 systemd[1]: Stopped target paths.target - Path Units. Sep 5 00:11:39.938943 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 00:11:39.957843 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 5 00:11:39.978840 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 00:11:39.997842 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 00:11:40.015860 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 00:11:40.035847 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 00:11:40.055827 systemd[1]: Stopped target swap.target - Swaps. Sep 5 00:11:40.073908 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 00:11:40.074177 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:11:40.090868 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 00:11:40.091136 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:11:40.108848 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 00:11:40.109193 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:11:40.136935 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:11:40.156721 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 00:11:40.157156 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:11:40.175855 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:11:40.196732 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 00:11:40.200511 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:11:40.218722 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 00:11:40.219080 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 00:11:40.251749 systemd[1]: decrypt-root.service: Deactivated successfully. Sep 5 00:11:40.252152 systemd[1]: Stopped decrypt-root.service - Generate and execute a systemd-cryptsetup service to decrypt the ROOT partition. Sep 5 00:11:40.274905 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 00:11:40.275249 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 00:11:40.293896 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 00:11:40.294252 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:11:40.317895 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 00:11:40.318234 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 00:11:40.335931 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 00:11:40.336297 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:11:40.355881 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 00:11:40.356228 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 00:11:40.373892 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 00:11:40.374235 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:11:40.398891 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 00:11:40.399231 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:11:40.419904 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 00:11:40.420242 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 5 00:11:40.451246 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:11:40.482506 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 00:11:40.482832 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:11:40.506820 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 00:11:40.507125 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 00:11:40.524645 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 00:11:40.524757 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:11:40.545598 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 00:11:40.545753 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:11:40.575763 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 00:11:40.575923 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 00:11:40.606735 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 00:11:40.606895 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:11:40.648556 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 00:11:40.907353 systemd-journald[260]: Received SIGTERM from PID 1 (systemd). Sep 5 00:11:40.667423 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 00:11:40.667460 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:11:40.690527 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 5 00:11:40.690596 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 00:11:40.710594 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 00:11:40.710710 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:11:40.733632 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:11:40.733784 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:11:40.755716 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 00:11:40.755948 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 00:11:40.776298 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 00:11:40.776528 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 00:11:40.800412 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 00:11:40.836645 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 00:11:40.856292 systemd[1]: Switching root. 
Sep 5 00:11:40.907703 systemd-journald[260]: Journal stopped Sep 5 00:11:34.958956 kernel: microcode: updated early: 0xf4 -> 0xfc, date = 2023-07-27 Sep 5 00:11:34.958970 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:54:07 -00 2024 Sep 5 00:11:34.958977 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 5 00:11:34.958982 kernel: BIOS-provided physical RAM map: Sep 5 00:11:34.958986 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Sep 5 00:11:34.958990 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Sep 5 00:11:34.958995 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Sep 5 00:11:34.958999 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Sep 5 00:11:34.959003 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Sep 5 00:11:34.959007 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b2cfff] usable Sep 5 00:11:34.959011 kernel: BIOS-e820: [mem 0x0000000081b2d000-0x0000000081b2dfff] ACPI NVS Sep 5 00:11:34.959016 kernel: BIOS-e820: [mem 0x0000000081b2e000-0x0000000081b2efff] reserved Sep 5 00:11:34.959020 kernel: BIOS-e820: [mem 0x0000000081b2f000-0x000000008afccfff] usable Sep 5 00:11:34.959024 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Sep 5 00:11:34.959030 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Sep 5 00:11:34.959034 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Sep 5 00:11:34.959040 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Sep 5 00:11:34.959044 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Sep 5 00:11:34.959049 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Sep 5 00:11:34.959054 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 5 00:11:34.959058 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Sep 5 00:11:34.959063 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Sep 5 00:11:34.959067 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Sep 5 00:11:34.959072 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Sep 5 00:11:34.959076 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Sep 5 00:11:34.959081 kernel: NX (Execute Disable) protection: active Sep 5 00:11:34.959085 kernel: APIC: Static calls initialized Sep 5 00:11:34.959090 kernel: SMBIOS 3.2.1 present. 
Sep 5 00:11:34.959096 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Sep 5 00:11:34.959100 kernel: tsc: Detected 3400.000 MHz processor Sep 5 00:11:34.959105 kernel: tsc: Detected 3399.906 MHz TSC Sep 5 00:11:34.959109 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 5 00:11:34.959115 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 5 00:11:34.959119 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Sep 5 00:11:34.959124 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Sep 5 00:11:34.959129 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 5 00:11:34.959134 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Sep 5 00:11:34.959138 kernel: Using GB pages for direct mapping Sep 5 00:11:34.959144 kernel: ACPI: Early table checksum verification disabled Sep 5 00:11:34.959149 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Sep 5 00:11:34.959156 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Sep 5 00:11:34.959161 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Sep 5 00:11:34.959166 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Sep 5 00:11:34.959171 kernel: ACPI: FACS 0x000000008C66CF80 000040 Sep 5 00:11:34.959177 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Sep 5 00:11:34.959182 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Sep 5 00:11:34.959187 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Sep 5 00:11:34.959192 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Sep 5 00:11:34.959197 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Sep 5 00:11:34.959202 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Sep 5 00:11:34.959207 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Sep 5 00:11:34.959213 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Sep 5 00:11:34.959218 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 5 00:11:34.959223 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Sep 5 00:11:34.959228 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Sep 5 00:11:34.959233 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 5 00:11:34.959238 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 5 00:11:34.959243 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Sep 5 00:11:34.959248 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Sep 5 00:11:34.959253 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 5 00:11:34.959262 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Sep 5 00:11:34.959267 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Sep 5 00:11:34.959272 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Sep 5 00:11:34.959277 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Sep 5 00:11:34.959282 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Sep 5 00:11:34.959287 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Sep 5 00:11:34.959292 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Sep 5 00:11:34.959297 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Sep 5 00:11:34.959330 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Sep 5 00:11:34.959350 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Sep 5 00:11:34.959356 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Sep 5 00:11:34.959361 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Sep 5 00:11:34.959366 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Sep 5 00:11:34.959371 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Sep 5 00:11:34.959376 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Sep 5 00:11:34.959381 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Sep 5 00:11:34.959386 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Sep 5 00:11:34.959392 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Sep 5 00:11:34.959397 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Sep 5 00:11:34.959402 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Sep 5 00:11:34.959407 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Sep 5 00:11:34.959411 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Sep 5 00:11:34.959416 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Sep 5 00:11:34.959421 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Sep 5 00:11:34.959426 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Sep 5 00:11:34.959431 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Sep 5 00:11:34.959437 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Sep 5 00:11:34.959442 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Sep 5 00:11:34.959447 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Sep 5 00:11:34.959452 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Sep 5 00:11:34.959457 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Sep 5 00:11:34.959462 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Sep 5 00:11:34.959467 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Sep 5 00:11:34.959472 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Sep 5 00:11:34.959476 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Sep 5 00:11:34.959481 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Sep 5 00:11:34.959487 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Sep 5 00:11:34.959492 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Sep 5 00:11:34.959497 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Sep 5 00:11:34.959502 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Sep 5 00:11:34.959507 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Sep 5 00:11:34.959512 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Sep 5 00:11:34.959517 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Sep 5 00:11:34.959522 kernel: No NUMA configuration found Sep 5 00:11:34.959527 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Sep 5 00:11:34.959533 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Sep 5 00:11:34.959538 kernel: Zone ranges: Sep 5 00:11:34.959543 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 5 00:11:34.959548 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 5 00:11:34.959553 kernel: Normal [mem 
0x0000000100000000-0x000000086effffff] Sep 5 00:11:34.959558 kernel: Movable zone start for each node Sep 5 00:11:34.959563 kernel: Early memory node ranges Sep 5 00:11:34.959568 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Sep 5 00:11:34.959573 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Sep 5 00:11:34.959578 kernel: node 0: [mem 0x0000000040400000-0x0000000081b2cfff] Sep 5 00:11:34.959583 kernel: node 0: [mem 0x0000000081b2f000-0x000000008afccfff] Sep 5 00:11:34.959588 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Sep 5 00:11:34.959594 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Sep 5 00:11:34.959602 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Sep 5 00:11:34.959608 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Sep 5 00:11:34.959613 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 5 00:11:34.959619 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Sep 5 00:11:34.959625 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Sep 5 00:11:34.959630 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Sep 5 00:11:34.959636 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Sep 5 00:11:34.959641 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Sep 5 00:11:34.959646 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Sep 5 00:11:34.959652 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Sep 5 00:11:34.959657 kernel: ACPI: PM-Timer IO Port: 0x1808 Sep 5 00:11:34.959662 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Sep 5 00:11:34.959668 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Sep 5 00:11:34.959674 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Sep 5 00:11:34.959679 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Sep 5 00:11:34.959685 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Sep 5 00:11:34.959690 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Sep 5 00:11:34.959695 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Sep 5 00:11:34.959701 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Sep 5 00:11:34.959706 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Sep 5 00:11:34.959711 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Sep 5 00:11:34.959716 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Sep 5 00:11:34.959722 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Sep 5 00:11:34.959728 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Sep 5 00:11:34.959733 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Sep 5 00:11:34.959739 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Sep 5 00:11:34.959744 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Sep 5 00:11:34.959749 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Sep 5 00:11:34.959754 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 5 00:11:34.959760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 5 00:11:34.959765 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 5 00:11:34.959771 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 5 00:11:34.959777 kernel: TSC deadline timer available Sep 5 00:11:34.959782 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Sep 5 00:11:34.959788 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Sep 5 00:11:34.959793 
kernel: Booting paravirtualized kernel on bare hardware Sep 5 00:11:34.959799 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 5 00:11:34.959804 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Sep 5 00:11:34.959809 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u262144 Sep 5 00:11:34.959815 kernel: pcpu-alloc: s196904 r8192 d32472 u262144 alloc=1*2097152 Sep 5 00:11:34.959820 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Sep 5 00:11:34.959827 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 5 00:11:34.959832 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 5 00:11:34.959837 kernel: random: crng init done Sep 5 00:11:34.959843 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Sep 5 00:11:34.959848 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Sep 5 00:11:34.959854 kernel: Fallback order for Node 0: 0 Sep 5 00:11:34.959859 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Sep 5 00:11:34.959864 kernel: Policy zone: Normal Sep 5 00:11:34.959871 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 5 00:11:34.959876 kernel: software IO TLB: area num 16. Sep 5 00:11:34.959881 kernel: Memory: 32720312K/33452980K available (12288K kernel code, 2304K rwdata, 22708K rodata, 42704K init, 2488K bss, 732408K reserved, 0K cma-reserved) Sep 5 00:11:34.959887 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Sep 5 00:11:34.959892 kernel: ftrace: allocating 37748 entries in 148 pages Sep 5 00:11:34.959898 kernel: ftrace: allocated 148 pages with 3 groups Sep 5 00:11:34.959903 kernel: Dynamic Preempt: voluntary Sep 5 00:11:34.959908 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 5 00:11:34.959914 kernel: rcu: RCU event tracing is enabled. Sep 5 00:11:34.959921 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Sep 5 00:11:34.959926 kernel: Trampoline variant of Tasks RCU enabled. Sep 5 00:11:34.959931 kernel: Rude variant of Tasks RCU enabled. Sep 5 00:11:34.959937 kernel: Tracing variant of Tasks RCU enabled. Sep 5 00:11:34.959942 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 5 00:11:34.959948 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Sep 5 00:11:34.959953 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Sep 5 00:11:34.959958 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 5 00:11:34.959963 kernel: Console: colour dummy device 80x25 Sep 5 00:11:34.959970 kernel: printk: console [tty0] enabled Sep 5 00:11:34.959975 kernel: printk: console [ttyS1] enabled Sep 5 00:11:34.959980 kernel: ACPI: Core revision 20230628 Sep 5 00:11:34.959986 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
Sep 5 00:11:34.959991 kernel: APIC: Switch to symmetric I/O mode setup Sep 5 00:11:34.959996 kernel: DMAR: Host address width 39 Sep 5 00:11:34.960002 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Sep 5 00:11:34.960007 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Sep 5 00:11:34.960012 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Sep 5 00:11:34.960019 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Sep 5 00:11:34.960024 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Sep 5 00:11:34.960030 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Sep 5 00:11:34.960035 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Sep 5 00:11:34.960041 kernel: x2apic enabled Sep 5 00:11:34.960046 kernel: APIC: Switched APIC routing to: cluster x2apic Sep 5 00:11:34.960051 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Sep 5 00:11:34.960057 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Sep 5 00:11:34.960062 kernel: CPU0: Thermal monitoring enabled (TM1) Sep 5 00:11:34.960068 kernel: process: using mwait in idle threads Sep 5 00:11:34.960074 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 5 00:11:34.960079 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Sep 5 00:11:34.960085 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 5 00:11:34.960090 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Sep 5 00:11:34.960095 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Sep 5 00:11:34.960100 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Sep 5 00:11:34.960106 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 5 00:11:34.960111 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Sep 5 00:11:34.960116 kernel: RETBleed: Mitigation: Enhanced IBRS Sep 5 00:11:34.960122 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 5 00:11:34.960128 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 5 00:11:34.960133 kernel: TAA: Mitigation: TSX disabled Sep 5 00:11:34.960138 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Sep 5 00:11:34.960144 kernel: SRBDS: Mitigation: Microcode Sep 5 00:11:34.960149 kernel: GDS: Mitigation: Microcode Sep 5 00:11:34.960154 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 5 00:11:34.960160 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 5 00:11:34.960165 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 5 00:11:34.960170 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Sep 5 00:11:34.960176 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Sep 5 00:11:34.960181 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 5 00:11:34.960187 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Sep 5 00:11:34.960192 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Sep 5 00:11:34.960198 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
Sep 5 00:11:34.960203 kernel: Freeing SMP alternatives memory: 32K Sep 5 00:11:34.960208 kernel: pid_max: default: 32768 minimum: 301 Sep 5 00:11:34.960214 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 5 00:11:34.960219 kernel: landlock: Up and running. Sep 5 00:11:34.960224 kernel: SELinux: Initializing. Sep 5 00:11:34.960230 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 00:11:34.960235 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 00:11:34.960240 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Sep 5 00:11:34.960247 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. Sep 5 00:11:34.960252 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. Sep 5 00:11:34.960258 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. Sep 5 00:11:34.960273 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Sep 5 00:11:34.960278 kernel: ... version: 4 Sep 5 00:11:34.960283 kernel: ... bit width: 48 Sep 5 00:11:34.960289 kernel: ... generic registers: 4 Sep 5 00:11:34.960294 kernel: ... value mask: 0000ffffffffffff Sep 5 00:11:34.960326 kernel: ... max period: 00007fffffffffff Sep 5 00:11:34.960332 kernel: ... fixed-purpose events: 3 Sep 5 00:11:34.960353 kernel: ... event mask: 000000070000000f Sep 5 00:11:34.960358 kernel: signal: max sigframe size: 2032 Sep 5 00:11:34.960363 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Sep 5 00:11:34.960369 kernel: rcu: Hierarchical SRCU implementation. Sep 5 00:11:34.960374 kernel: rcu: Max phase no-delay instances is 400. Sep 5 00:11:34.960380 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Sep 5 00:11:34.960385 kernel: smp: Bringing up secondary CPUs ... Sep 5 00:11:34.960390 kernel: smpboot: x86: Booting SMP configuration: Sep 5 00:11:34.960396 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Sep 5 00:11:34.960402 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Sep 5 00:11:34.960408 kernel: smp: Brought up 1 node, 16 CPUs Sep 5 00:11:34.960413 kernel: smpboot: Max logical packages: 1 Sep 5 00:11:34.960419 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Sep 5 00:11:34.960424 kernel: devtmpfs: initialized Sep 5 00:11:34.960429 kernel: x86/mm: Memory block size: 128MB Sep 5 00:11:34.960435 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b2d000-0x81b2dfff] (4096 bytes) Sep 5 00:11:34.960440 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Sep 5 00:11:34.960447 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 5 00:11:34.960452 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Sep 5 00:11:34.960457 kernel: pinctrl core: initialized pinctrl subsystem Sep 5 00:11:34.960463 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 5 00:11:34.960468 kernel: audit: initializing netlink subsys (disabled) Sep 5 00:11:34.960473 kernel: audit: type=2000 audit(1725495089.039:1): state=initialized audit_enabled=0 res=1 Sep 5 00:11:34.960479 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 5 00:11:34.960484 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 5 00:11:34.960489 kernel: cpuidle: using governor menu Sep 5 00:11:34.960496 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 5 00:11:34.960501 kernel: dca service started, version 1.12.1 Sep 5 00:11:34.960506 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Sep 5 00:11:34.960512 kernel: PCI: Using configuration type 1 for base access Sep 5 00:11:34.960517 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Sep 5 00:11:34.960522 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 5 00:11:34.960528 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 5 00:11:34.960533 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 5 00:11:34.960539 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 5 00:11:34.960545 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 5 00:11:34.960550 kernel: ACPI: Added _OSI(Module Device) Sep 5 00:11:34.960556 kernel: ACPI: Added _OSI(Processor Device) Sep 5 00:11:34.960561 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 5 00:11:34.960566 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 5 00:11:34.960572 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Sep 5 00:11:34.960577 kernel: ACPI: Dynamic OEM Table Load: Sep 5 00:11:34.960582 kernel: ACPI: SSDT 0xFFFF91F981ECC400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Sep 5 00:11:34.960588 kernel: ACPI: Dynamic OEM Table Load: Sep 5 00:11:34.960594 kernel: ACPI: SSDT 0xFFFF91F981EC0800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Sep 5 00:11:34.960599 kernel: ACPI: Dynamic OEM Table Load: Sep 5 00:11:34.960605 kernel: ACPI: SSDT 0xFFFF91F981536D00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Sep 5 00:11:34.960610 kernel: ACPI: Dynamic OEM Table Load: Sep 5 00:11:34.960615 kernel: ACPI: SSDT 0xFFFF91F981EC3800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Sep 5 00:11:34.960620 kernel: ACPI: Dynamic OEM Table Load: Sep 5 00:11:34.960626 kernel: ACPI: SSDT 0xFFFF91F981ED4000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Sep 5 00:11:34.960631 kernel: ACPI: Dynamic OEM Table Load: Sep 5 00:11:34.960636 kernel: ACPI: SSDT 0xFFFF91F981EC9000 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Sep 5 00:11:34.960642 kernel: ACPI: _OSC evaluated successfully for all CPUs Sep 5 00:11:34.960648 kernel: ACPI: Interpreter enabled Sep 5 00:11:34.960653 kernel: ACPI: PM: (supports S0 S5) Sep 5 00:11:34.960659 kernel: ACPI: Using IOAPIC for interrupt routing Sep 5 00:11:34.960664 kernel: HEST: Enabling Firmware First mode for corrected errors. Sep 5 00:11:34.960670 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Sep 5 00:11:34.960675 kernel: HEST: Table parsing has been initialized. Sep 5 00:11:34.960680 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Sep 5 00:11:34.960686 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 5 00:11:34.960691 kernel: PCI: Using E820 reservations for host bridge windows Sep 5 00:11:34.960697 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Sep 5 00:11:34.960703 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Sep 5 00:11:34.960708 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Sep 5 00:11:34.960714 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Sep 5 00:11:34.960719 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Sep 5 00:11:34.960724 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Sep 5 00:11:34.960730 kernel: ACPI: \_TZ_.FN00: New power resource Sep 5 00:11:34.960735 kernel: ACPI: \_TZ_.FN01: New power resource Sep 5 00:11:34.960740 kernel: ACPI: \_TZ_.FN02: New power resource Sep 5 00:11:34.960747 kernel: ACPI: \_TZ_.FN03: New power resource Sep 5 00:11:34.960752 kernel: ACPI: \_TZ_.FN04: New power resource Sep 5 00:11:34.960757 kernel: ACPI: \PIN_: New power resource Sep 5 00:11:34.960763 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Sep 5 00:11:34.960864 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 5 00:11:34.960920 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Sep 5 00:11:34.960969 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Sep 5 00:11:34.960979 kernel: PCI host bridge to bus 0000:00 Sep 5 00:11:34.961029 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 5 00:11:34.961073 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 5 00:11:34.961115 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 5 00:11:34.961157 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Sep 5 00:11:34.961199 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Sep 5 00:11:34.961240 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Sep 5 00:11:34.961329 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Sep 5 00:11:34.961401 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Sep 5 00:11:34.961453 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Sep 5 00:11:34.961506 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Sep 5 00:11:34.961554 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Sep 5 00:11:34.961606 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Sep 5 00:11:34.961658 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Sep 5 00:11:34.961710 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Sep 5 00:11:34.961758 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Sep 5 00:11:34.961805 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Sep 5 00:11:34.961856 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Sep 5 00:11:34.961903 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Sep 5 00:11:34.961953 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Sep 5 00:11:34.962003 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Sep 5 00:11:34.962051 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Sep 5 00:11:34.962106 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Sep 5 00:11:34.962153 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] 
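The PCI enumeration that starts above (and continues through the bridge and endpoint listings below) prints each function as "pci 0000:BB:DD.F: [vendor:device] type .. class 0x......". The same view can be reconstructed after boot from sysfs; a read-only sketch assuming the standard /sys/bus/pci layout:

    #!/usr/bin/env python3
    # Print each PCI function's vendor:device IDs and class code from sysfs,
    # matching the "[8086:a36d] ... class 0x0c0330" style lines in the log.
    import os

    BASE = "/sys/bus/pci/devices"

    def read_attr(bdf: str, name: str) -> str:
        with open(os.path.join(BASE, bdf, name)) as f:
            return f.read().strip()

    for bdf in sorted(os.listdir(BASE)):
        vendor = read_attr(bdf, "vendor")  # e.g. "0x8086"
        device = read_attr(bdf, "device")  # e.g. "0xa36d"
        pclass = read_attr(bdf, "class")   # e.g. "0x0c0330"
        print(f"{bdf}: [{vendor[2:]}:{device[2:]}] class {pclass}")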
Sep 5 00:11:34.962206 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Sep 5 00:11:34.962256 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Sep 5 00:11:34.962351 kernel: pci 0000:00:16.0: PME# supported from D3hot Sep 5 00:11:34.962410 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Sep 5 00:11:34.962461 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Sep 5 00:11:34.962507 kernel: pci 0000:00:16.1: PME# supported from D3hot Sep 5 00:11:34.962558 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Sep 5 00:11:34.962606 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Sep 5 00:11:34.962655 kernel: pci 0000:00:16.4: PME# supported from D3hot Sep 5 00:11:34.962706 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Sep 5 00:11:34.962755 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Sep 5 00:11:34.962802 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Sep 5 00:11:34.962850 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Sep 5 00:11:34.962896 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Sep 5 00:11:34.962944 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Sep 5 00:11:34.962995 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Sep 5 00:11:34.963042 kernel: pci 0000:00:17.0: PME# supported from D3hot Sep 5 00:11:34.963095 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Sep 5 00:11:34.963143 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Sep 5 00:11:34.963201 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Sep 5 00:11:34.963250 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Sep 5 00:11:34.963331 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Sep 5 00:11:34.963393 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Sep 5 00:11:34.963448 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Sep 5 00:11:34.963496 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Sep 5 00:11:34.963550 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Sep 5 00:11:34.963598 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Sep 5 00:11:34.963649 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Sep 5 00:11:34.963697 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Sep 5 00:11:34.963749 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Sep 5 00:11:34.963801 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Sep 5 00:11:34.963851 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Sep 5 00:11:34.963899 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Sep 5 00:11:34.963952 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Sep 5 00:11:34.964001 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Sep 5 00:11:34.964055 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Sep 5 00:11:34.964106 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Sep 5 00:11:34.964157 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Sep 5 00:11:34.964207 kernel: pci 0000:01:00.0: PME# supported from D3cold Sep 5 00:11:34.964255 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Sep 5 00:11:34.964360 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Sep 5 00:11:34.964415 kernel: pci 0000:01:00.1: [15b3:1015] type 00 
class 0x020000 Sep 5 00:11:34.964464 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Sep 5 00:11:34.964514 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Sep 5 00:11:34.964565 kernel: pci 0000:01:00.1: PME# supported from D3cold Sep 5 00:11:34.964614 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Sep 5 00:11:34.964662 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Sep 5 00:11:34.964712 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 5 00:11:34.964759 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Sep 5 00:11:34.964808 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 5 00:11:34.964857 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Sep 5 00:11:34.964910 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Sep 5 00:11:34.964963 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Sep 5 00:11:34.965012 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Sep 5 00:11:34.965061 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Sep 5 00:11:34.965109 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Sep 5 00:11:34.965160 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 5 00:11:34.965208 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Sep 5 00:11:34.965257 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 5 00:11:34.965353 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Sep 5 00:11:34.965408 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Sep 5 00:11:34.965460 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Sep 5 00:11:34.965509 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Sep 5 00:11:34.965558 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Sep 5 00:11:34.965606 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Sep 5 00:11:34.965656 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Sep 5 00:11:34.965707 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Sep 5 00:11:34.965755 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 5 00:11:34.965802 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Sep 5 00:11:34.965851 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Sep 5 00:11:34.965905 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Sep 5 00:11:34.965954 kernel: pci 0000:06:00.0: enabling Extended Tags Sep 5 00:11:34.966004 kernel: pci 0000:06:00.0: supports D1 D2 Sep 5 00:11:34.966053 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 5 00:11:34.966105 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Sep 5 00:11:34.966153 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Sep 5 00:11:34.966202 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Sep 5 00:11:34.966256 kernel: pci_bus 0000:07: extended config space not accessible Sep 5 00:11:34.966366 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Sep 5 00:11:34.966418 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Sep 5 00:11:34.966469 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Sep 5 00:11:34.966523 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Sep 5 00:11:34.966574 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 5 00:11:34.966626 kernel: pci 0000:07:00.0: supports D1 D2 Sep 5 
00:11:34.966677 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 5 00:11:34.966727 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Sep 5 00:11:34.966776 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Sep 5 00:11:34.966825 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 5 00:11:34.966834 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Sep 5 00:11:34.966842 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Sep 5 00:11:34.966848 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Sep 5 00:11:34.966854 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Sep 5 00:11:34.966860 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Sep 5 00:11:34.966866 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Sep 5 00:11:34.966871 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Sep 5 00:11:34.966877 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Sep 5 00:11:34.966883 kernel: iommu: Default domain type: Translated Sep 5 00:11:34.966889 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 5 00:11:34.966895 kernel: PCI: Using ACPI for IRQ routing Sep 5 00:11:34.966901 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 5 00:11:34.966907 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Sep 5 00:11:34.966912 kernel: e820: reserve RAM buffer [mem 0x81b2d000-0x83ffffff] Sep 5 00:11:34.966918 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Sep 5 00:11:34.966923 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Sep 5 00:11:34.966929 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Sep 5 00:11:34.966934 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Sep 5 00:11:34.966986 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Sep 5 00:11:34.967037 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Sep 5 00:11:34.967089 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 5 00:11:34.967098 kernel: vgaarb: loaded Sep 5 00:11:34.967104 kernel: clocksource: Switched to clocksource tsc-early Sep 5 00:11:34.967109 kernel: VFS: Disk quotas dquot_6.6.0 Sep 5 00:11:34.967115 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 5 00:11:34.967121 kernel: pnp: PnP ACPI init Sep 5 00:11:34.967171 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Sep 5 00:11:34.967221 kernel: pnp 00:02: [dma 0 disabled] Sep 5 00:11:34.967272 kernel: pnp 00:03: [dma 0 disabled] Sep 5 00:11:34.967366 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Sep 5 00:11:34.967412 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Sep 5 00:11:34.967458 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Sep 5 00:11:34.967506 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Sep 5 00:11:34.967552 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Sep 5 00:11:34.967598 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Sep 5 00:11:34.967640 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Sep 5 00:11:34.967684 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Sep 5 00:11:34.967726 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Sep 5 00:11:34.967771 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Sep 5 00:11:34.967814 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Sep 5 
00:11:34.967865 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Sep 5 00:11:34.967910 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Sep 5 00:11:34.967956 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Sep 5 00:11:34.968001 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Sep 5 00:11:34.968044 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Sep 5 00:11:34.968088 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Sep 5 00:11:34.968131 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Sep 5 00:11:34.968182 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Sep 5 00:11:34.968191 kernel: pnp: PnP ACPI: found 10 devices Sep 5 00:11:34.968197 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 5 00:11:34.968203 kernel: NET: Registered PF_INET protocol family Sep 5 00:11:34.968208 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 5 00:11:34.968214 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 5 00:11:34.968220 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 5 00:11:34.968226 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 5 00:11:34.968233 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 5 00:11:34.968239 kernel: TCP: Hash tables configured (established 262144 bind 65536) Sep 5 00:11:34.968245 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 5 00:11:34.968250 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 5 00:11:34.968256 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 5 00:11:34.968264 kernel: NET: Registered PF_XDP protocol family Sep 5 00:11:34.968359 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Sep 5 00:11:34.968410 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Sep 5 00:11:34.968461 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Sep 5 00:11:34.968512 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 5 00:11:34.968562 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 5 00:11:34.968613 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 5 00:11:34.968664 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 5 00:11:34.968712 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 5 00:11:34.968764 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Sep 5 00:11:34.968812 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 5 00:11:34.968864 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Sep 5 00:11:34.968911 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Sep 5 00:11:34.968960 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 5 00:11:34.969008 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Sep 5 00:11:34.969057 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Sep 5 00:11:34.969108 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 5 00:11:34.969157 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Sep 5 00:11:34.969204 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Sep 5 00:11:34.969256 kernel: pci 0000:06:00.0: PCI 
bridge to [bus 07] Sep 5 00:11:34.969333 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Sep 5 00:11:34.969401 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 5 00:11:34.969450 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Sep 5 00:11:34.969497 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Sep 5 00:11:34.969546 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Sep 5 00:11:34.969593 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Sep 5 00:11:34.969638 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 5 00:11:34.969682 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 5 00:11:34.969725 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 5 00:11:34.969768 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Sep 5 00:11:34.969811 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Sep 5 00:11:34.969859 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Sep 5 00:11:34.969907 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Sep 5 00:11:34.969957 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Sep 5 00:11:34.970002 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Sep 5 00:11:34.970049 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Sep 5 00:11:34.970093 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Sep 5 00:11:34.970140 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Sep 5 00:11:34.970187 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Sep 5 00:11:34.970233 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Sep 5 00:11:34.970283 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Sep 5 00:11:34.970291 kernel: PCI: CLS 64 bytes, default 64 Sep 5 00:11:34.970326 kernel: DMAR: No ATSR found Sep 5 00:11:34.970332 kernel: DMAR: No SATC found Sep 5 00:11:34.970358 kernel: DMAR: dmar0: Using Queued invalidation Sep 5 00:11:34.970408 kernel: pci 0000:00:00.0: Adding to iommu group 0 Sep 5 00:11:34.970456 kernel: pci 0000:00:01.0: Adding to iommu group 1 Sep 5 00:11:34.970508 kernel: pci 0000:00:08.0: Adding to iommu group 2 Sep 5 00:11:34.970556 kernel: pci 0000:00:12.0: Adding to iommu group 3 Sep 5 00:11:34.970605 kernel: pci 0000:00:14.0: Adding to iommu group 4 Sep 5 00:11:34.970653 kernel: pci 0000:00:14.2: Adding to iommu group 4 Sep 5 00:11:34.970701 kernel: pci 0000:00:15.0: Adding to iommu group 5 Sep 5 00:11:34.970749 kernel: pci 0000:00:15.1: Adding to iommu group 5 Sep 5 00:11:34.970799 kernel: pci 0000:00:16.0: Adding to iommu group 6 Sep 5 00:11:34.970847 kernel: pci 0000:00:16.1: Adding to iommu group 6 Sep 5 00:11:34.970897 kernel: pci 0000:00:16.4: Adding to iommu group 6 Sep 5 00:11:34.970945 kernel: pci 0000:00:17.0: Adding to iommu group 7 Sep 5 00:11:34.970994 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Sep 5 00:11:34.971042 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Sep 5 00:11:34.971090 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Sep 5 00:11:34.971139 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Sep 5 00:11:34.971187 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Sep 5 00:11:34.971237 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Sep 5 00:11:34.971290 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Sep 5 00:11:34.971380 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Sep 5 00:11:34.971428 kernel: pci 
0000:00:1f.5: Adding to iommu group 14 Sep 5 00:11:34.971478 kernel: pci 0000:01:00.0: Adding to iommu group 1 Sep 5 00:11:34.971527 kernel: pci 0000:01:00.1: Adding to iommu group 1 Sep 5 00:11:34.971578 kernel: pci 0000:03:00.0: Adding to iommu group 15 Sep 5 00:11:34.971627 kernel: pci 0000:04:00.0: Adding to iommu group 16 Sep 5 00:11:34.971677 kernel: pci 0000:06:00.0: Adding to iommu group 17 Sep 5 00:11:34.971732 kernel: pci 0000:07:00.0: Adding to iommu group 17 Sep 5 00:11:34.971740 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Sep 5 00:11:34.971747 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 5 00:11:34.971752 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Sep 5 00:11:34.971758 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Sep 5 00:11:34.971764 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Sep 5 00:11:34.971770 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Sep 5 00:11:34.971775 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Sep 5 00:11:34.971825 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Sep 5 00:11:34.971836 kernel: Initialise system trusted keyrings Sep 5 00:11:34.971842 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Sep 5 00:11:34.971847 kernel: Key type asymmetric registered Sep 5 00:11:34.971853 kernel: Asymmetric key parser 'x509' registered Sep 5 00:11:34.971859 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 5 00:11:34.971864 kernel: io scheduler mq-deadline registered Sep 5 00:11:34.971870 kernel: io scheduler kyber registered Sep 5 00:11:34.971876 kernel: io scheduler bfq registered Sep 5 00:11:34.971925 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Sep 5 00:11:34.971974 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Sep 5 00:11:34.972024 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Sep 5 00:11:34.972073 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Sep 5 00:11:34.972121 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Sep 5 00:11:34.972169 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Sep 5 00:11:34.972222 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Sep 5 00:11:34.972233 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Sep 5 00:11:34.972239 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Sep 5 00:11:34.972244 kernel: pstore: Using crash dump compression: deflate Sep 5 00:11:34.972250 kernel: pstore: Registered erst as persistent store backend Sep 5 00:11:34.972256 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 5 00:11:34.972264 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 5 00:11:34.972270 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 5 00:11:34.972276 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 5 00:11:34.972282 kernel: hpet_acpi_add: no address or irqs in _CRS Sep 5 00:11:34.972382 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Sep 5 00:11:34.972390 kernel: i8042: PNP: No PS/2 controller found. 
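Earlier in this block the kernel assigns every PCI function to an IOMMU group ("Adding to iommu group N"), which is what later determines VFIO/passthrough granularity. The resulting grouping can be read back from sysfs; a minimal read-only sketch over the standard /sys/kernel/iommu_groups layout:

    #!/usr/bin/env python3
    # List IOMMU group membership, matching the "Adding to iommu group N"
    # messages above. Read-only sketch over the standard sysfs layout.
    import os

    BASE = "/sys/kernel/iommu_groups"
    for group in sorted(os.listdir(BASE), key=int):
        devices = sorted(os.listdir(os.path.join(BASE, group, "devices")))
        print(f"group {group:>3}: {', '.join(devices)}")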
Sep 5 00:11:34.972434 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Sep 5 00:11:34.972478 kernel: rtc_cmos rtc_cmos: registered as rtc0 Sep 5 00:11:34.972523 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-09-05T00:11:33 UTC (1725495093) Sep 5 00:11:34.972568 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Sep 5 00:11:34.972576 kernel: intel_pstate: Intel P-state driver initializing Sep 5 00:11:34.972582 kernel: intel_pstate: Disabling energy efficiency optimization Sep 5 00:11:34.972589 kernel: intel_pstate: HWP enabled Sep 5 00:11:34.972595 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Sep 5 00:11:34.972601 kernel: vesafb: scrolling: redraw Sep 5 00:11:34.972607 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Sep 5 00:11:34.972612 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000770abf58, using 768k, total 768k Sep 5 00:11:34.972618 kernel: Console: switching to colour frame buffer device 128x48 Sep 5 00:11:34.972624 kernel: fb0: VESA VGA frame buffer device Sep 5 00:11:34.972630 kernel: NET: Registered PF_INET6 protocol family Sep 5 00:11:34.972635 kernel: Segment Routing with IPv6 Sep 5 00:11:34.972642 kernel: In-situ OAM (IOAM) with IPv6 Sep 5 00:11:34.972648 kernel: NET: Registered PF_PACKET protocol family Sep 5 00:11:34.972653 kernel: Key type dns_resolver registered Sep 5 00:11:34.972659 kernel: microcode: Microcode Update Driver: v2.2. Sep 5 00:11:34.972665 kernel: IPI shorthand broadcast: enabled Sep 5 00:11:34.972670 kernel: sched_clock: Marking stable (2477000703, 1380667248)->(4395746327, -538078376) Sep 5 00:11:34.972676 kernel: registered taskstats version 1 Sep 5 00:11:34.972682 kernel: Loading compiled-in X.509 certificates Sep 5 00:11:34.972688 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 8669771ab5e11f458b79e6634fe685dacc266b18' Sep 5 00:11:34.972694 kernel: Key type .fscrypt registered Sep 5 00:11:34.972700 kernel: Key type fscrypt-provisioning registered Sep 5 00:11:34.972706 kernel: ima: Allocated hash algorithm: sha1 Sep 5 00:11:34.972711 kernel: ima: No architecture policies found Sep 5 00:11:34.972717 kernel: clk: Disabling unused clocks Sep 5 00:11:34.972723 kernel: Freeing unused kernel image (initmem) memory: 42704K Sep 5 00:11:34.972728 kernel: Write protecting the kernel read-only data: 36864k Sep 5 00:11:34.972734 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K Sep 5 00:11:34.972740 kernel: Run /init as init process Sep 5 00:11:34.972747 kernel: with arguments: Sep 5 00:11:34.972752 kernel: /init Sep 5 00:11:34.972758 kernel: with environment: Sep 5 00:11:34.972763 kernel: HOME=/ Sep 5 00:11:34.972769 kernel: TERM=linux Sep 5 00:11:34.972775 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 5 00:11:34.972782 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 00:11:34.972790 systemd[1]: Detected architecture x86-64. Sep 5 00:11:34.972796 systemd[1]: Running in initrd. Sep 5 00:11:34.972802 systemd[1]: No hostname configured, using default hostname. Sep 5 00:11:34.972808 systemd[1]: Hostname set to . Sep 5 00:11:34.972813 systemd[1]: Initializing machine ID from random generator. 
Sep 5 00:11:34.972820 systemd[1]: Queued start job for default target initrd.target. Sep 5 00:11:34.972826 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:11:34.972832 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:11:34.972839 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 00:11:34.972845 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 5 00:11:34.972851 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-ROOT.device - /dev/disk/by-partlabel/ROOT... Sep 5 00:11:34.972857 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 5 00:11:34.972863 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 5 00:11:34.972870 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 5 00:11:34.972875 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Sep 5 00:11:34.972882 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Sep 5 00:11:34.972888 kernel: clocksource: Switched to clocksource tsc Sep 5 00:11:34.972894 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:11:34.972900 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:11:34.972906 systemd[1]: Reached target ignition-diskful-subsequent.target - Ignition Subsequent Boot Disk Setup. Sep 5 00:11:34.972912 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:11:34.972918 systemd[1]: Reached target slices.target - Slice Units. Sep 5 00:11:34.972924 systemd[1]: Reached target swap.target - Swaps. Sep 5 00:11:34.972930 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:11:34.972937 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:11:34.972943 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:11:34.972949 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 00:11:34.972955 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 5 00:11:34.972961 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 00:11:34.972967 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:11:34.972973 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:11:34.972979 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 5 00:11:34.972986 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:11:34.972992 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 00:11:34.972998 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:11:34.973014 systemd-journald[260]: Collecting audit messages is disabled. Sep 5 00:11:34.973030 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:11:34.973037 systemd-journald[260]: Journal started Sep 5 00:11:34.973050 systemd-journald[260]: Runtime Journal (/run/log/journal/f38516e809c0444699b368789a21880d) is 8.0M, max 639.9M, 631.9M free. 
Sep 5 00:11:35.006636 systemd-modules-load[262]: Inserted module 'overlay' Sep 5 00:11:35.031350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:11:35.031362 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:11:35.037074 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 00:11:35.037164 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:11:35.037248 systemd[1]: Finished systemd-fsck-usr.service. Sep 5 00:11:35.038138 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 00:11:35.038540 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:11:35.080264 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 00:11:35.099177 systemd-modules-load[262]: Inserted module 'br_netfilter' Sep 5 00:11:35.155377 kernel: Bridge firewalling registered Sep 5 00:11:35.099578 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:11:35.165707 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:11:35.186920 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 00:11:35.208006 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:11:35.249579 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:11:35.260925 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:11:35.261341 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:11:35.267268 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:11:35.267615 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:11:35.276618 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:11:35.288314 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 5 00:11:35.337993 dracut-cmdline[299]: dracut-dracut-053 Sep 5 00:11:35.345378 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 5 00:11:35.415360 kernel: SCSI subsystem initialized Sep 5 00:11:35.438313 kernel: Loading iSCSI transport class v2.0-870. Sep 5 00:11:35.461292 kernel: iscsi: registered transport (tcp) Sep 5 00:11:35.492768 kernel: iscsi: registered transport (qla4xxx) Sep 5 00:11:35.492790 kernel: QLogic iSCSI HBA Driver Sep 5 00:11:35.526009 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 00:11:35.545426 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 00:11:35.630440 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
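dracut echoes the effective kernel command line above (root=LABEL=ROOT, mount.usr=/dev/mapper/usr, verity.usrhash=..., and so on). A small sketch that parses /proc/cmdline into key/value pairs, which is roughly how such parameters are consumed; the particular keys printed at the end are just an illustrative selection:

    #!/usr/bin/env python3
    # Parse the kernel command line shown in the dracut-cmdline message above
    # into key/value pairs. Flags without '=' are kept with an empty value.

    with open("/proc/cmdline") as f:
        args = f.read().split()

    params = {}
    for arg in args:
        key, sep, value = arg.partition("=")
        params[key] = value if sep else ""

    for key in ("root", "mount.usr", "verity.usrhash", "flatcar.oem.id"):
        print(f"{key} = {params.get(key, '<not set>')}")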
Sep 5 00:11:35.630464 kernel: device-mapper: uevent: version 1.0.3 Sep 5 00:11:35.650205 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 5 00:11:35.709323 kernel: raid6: avx2x4 gen() 52088 MB/s Sep 5 00:11:35.741334 kernel: raid6: avx2x2 gen() 52656 MB/s Sep 5 00:11:35.777939 kernel: raid6: avx2x1 gen() 45164 MB/s Sep 5 00:11:35.777957 kernel: raid6: using algorithm avx2x2 gen() 52656 MB/s Sep 5 00:11:35.825881 kernel: raid6: .... xor() 31721 MB/s, rmw enabled Sep 5 00:11:35.825899 kernel: raid6: using avx2x2 recovery algorithm Sep 5 00:11:35.867306 kernel: xor: automatically using best checksumming function avx Sep 5 00:11:35.984300 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 00:11:35.989963 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:11:36.012614 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:11:36.019636 systemd-udevd[486]: Using default interface naming scheme 'v255'. Sep 5 00:11:36.023362 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:11:36.059512 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 00:11:36.086879 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:11:36.109498 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation Sep 5 00:11:36.109564 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:11:36.169379 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:11:36.194494 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 00:11:36.229228 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 5 00:11:36.229250 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 5 00:11:36.229270 kernel: cryptd: max_cpu_qlen set to 1000 Sep 5 00:11:36.194548 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:11:36.244726 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:11:36.310476 kernel: PTP clock support registered Sep 5 00:11:36.310492 kernel: ACPI: bus type USB registered Sep 5 00:11:36.310502 kernel: usbcore: registered new interface driver usbfs Sep 5 00:11:36.310511 kernel: usbcore: registered new interface driver hub Sep 5 00:11:36.310520 kernel: usbcore: registered new device driver usb Sep 5 00:11:36.273342 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:11:36.347405 kernel: libata version 3.00 loaded. Sep 5 00:11:36.347423 kernel: AVX2 version of gcm_enc/dec engaged. Sep 5 00:11:36.347436 kernel: AES CTR mode by8 optimization enabled Sep 5 00:11:36.273375 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:11:36.393831 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Sep 5 00:11:36.393846 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Sep 5 00:11:36.393853 kernel: ahci 0000:00:17.0: version 3.0 Sep 5 00:11:36.393946 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 5 00:11:36.340439 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
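The driver registrations around this point (libata, igb, ahci, xhci_hcd, and shortly mlx5_core and usbhid) correspond to code the initrd pulls in. A hedged sketch that checks /proc/modules for those names; on Flatcar some of them may be built into the kernel rather than loaded as modules, in which case they simply will not appear there:

    #!/usr/bin/env python3
    # Check whether the drivers probed around this point in the log are
    # present as loadable modules. Names are taken from the log; built-in
    # drivers will not show up in /proc/modules.

    wanted = {"igb", "ahci", "xhci_hcd", "mlx5_core", "usbhid", "btrfs"}

    with open("/proc/modules") as f:
        loaded = {line.split()[0] for line in f}

    for name in sorted(wanted):
        state = "loaded" if name in loaded else "not listed (possibly built in)"
        print(f"{name}: {state}")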
Sep 5 00:11:37.023332 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Sep 5 00:11:37.023430 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Sep 5 00:11:37.023513 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Sep 5 00:11:37.023620 kernel: pps pps0: new PPS source ptp0 Sep 5 00:11:37.023731 kernel: igb 0000:03:00.0: added PHC on eth0 Sep 5 00:11:37.023846 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 5 00:11:37.023927 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e6:6a Sep 5 00:11:37.023993 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Sep 5 00:11:37.024058 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Sep 5 00:11:37.024120 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Sep 5 00:11:37.024181 kernel: scsi host0: ahci Sep 5 00:11:37.024244 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 5 00:11:37.024314 kernel: pps pps1: new PPS source ptp1 Sep 5 00:11:37.024376 kernel: igb 0000:04:00.0: added PHC on eth1 Sep 5 00:11:37.024439 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 5 00:11:37.024500 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e6:6b Sep 5 00:11:37.024562 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Sep 5 00:11:37.024635 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Sep 5 00:11:37.024700 kernel: scsi host1: ahci Sep 5 00:11:37.024762 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Sep 5 00:11:37.024824 kernel: scsi host2: ahci Sep 5 00:11:37.024883 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Sep 5 00:11:37.024942 kernel: scsi host3: ahci Sep 5 00:11:37.025003 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Sep 5 00:11:37.025065 kernel: hub 1-0:1.0: USB hub found Sep 5 00:11:37.025130 kernel: scsi host4: ahci Sep 5 00:11:37.025189 kernel: hub 1-0:1.0: 16 ports detected Sep 5 00:11:37.025248 kernel: scsi host5: ahci Sep 5 00:11:37.025321 kernel: hub 2-0:1.0: USB hub found Sep 5 00:11:37.025391 kernel: scsi host6: ahci Sep 5 00:11:37.025455 kernel: hub 2-0:1.0: 10 ports detected Sep 5 00:11:37.025522 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 132 Sep 5 00:11:37.025537 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Sep 5 00:11:37.025608 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 132 Sep 5 00:11:37.025617 kernel: hub 1-14:1.0: USB hub found Sep 5 00:11:37.025680 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 132 Sep 5 00:11:37.025689 kernel: hub 1-14:1.0: 4 ports detected Sep 5 00:11:37.025749 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 132 Sep 5 00:11:37.025757 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 132 Sep 5 00:11:37.025764 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 132 Sep 5 00:11:37.025773 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 132 Sep 5 00:11:36.533440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
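Above, udev renames igb's eth0 to eno1 (eth1 becomes eno2, and the mlx5 ports are renamed to enp1s0f0np0/enp1s0f1np1 further down). The mapping from final interface names back to their PCI functions is visible in sysfs; a read-only sketch:

    #!/usr/bin/env python3
    # Map network interface names to their backing PCI functions, matching
    # the "renamed from eth0" messages. Read-only sketch over /sys/class/net.
    import os

    for ifname in sorted(os.listdir("/sys/class/net")):
        dev_link = os.path.join("/sys/class/net", ifname, "device")
        if os.path.islink(dev_link):
            backing = os.path.basename(os.readlink(dev_link))
            print(f"{ifname}: {backing}")
        else:
            print(f"{ifname}: virtual interface")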
Sep 5 00:11:37.074156 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Sep 5 00:11:37.074241 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Sep 5 00:11:37.074314 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 5 00:11:37.083670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:11:37.104435 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:11:37.141059 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:11:37.221294 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Sep 5 00:11:37.317274 kernel: ata7: SATA link down (SStatus 0 SControl 300) Sep 5 00:11:37.317334 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 5 00:11:37.332264 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 5 00:11:37.332405 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 5 00:11:37.348268 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Sep 5 00:11:37.348507 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 5 00:11:37.401335 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 5 00:11:37.416304 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 5 00:11:37.431266 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 5 00:11:37.447270 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 5 00:11:37.463299 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 5 00:11:37.499305 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 5 00:11:37.499347 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 5 00:11:37.534435 kernel: ata1.00: Features: NCQ-prio Sep 5 00:11:37.548291 kernel: ata2.00: Features: NCQ-prio Sep 5 00:11:37.548307 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 5 00:11:37.565355 kernel: ata1.00: configured for UDMA/133 Sep 5 00:11:37.570302 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Sep 5 00:11:37.570391 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 5 00:11:37.571302 kernel: ata2.00: configured for UDMA/133 Sep 5 00:11:37.583705 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 5 00:11:37.584295 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 5 00:11:37.676308 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 5 00:11:37.676347 kernel: ata2.00: Enabling discard_zeroes_data Sep 5 00:11:37.699765 kernel: ata1.00: Enabling discard_zeroes_data Sep 5 00:11:37.699781 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 5 00:11:37.704479 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 5 00:11:37.719453 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Sep 5 00:11:37.719532 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Sep 5 00:11:37.724678 kernel: sd 1:0:0:0: [sda] Write Protect is off Sep 5 00:11:37.729906 kernel: sd 0:0:0:0: [sdb] Write Protect is off Sep 5 00:11:37.734689 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Sep 5 00:11:37.734774 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 5 00:11:37.739521 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Sep 5 
00:11:37.748524 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Sep 5 00:11:37.757580 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 5 00:11:37.868713 kernel: ata2.00: Enabling discard_zeroes_data Sep 5 00:11:37.868730 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 5 00:11:37.868820 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Sep 5 00:11:37.869301 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Sep 5 00:11:37.884265 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Sep 5 00:11:37.884368 kernel: ata1.00: Enabling discard_zeroes_data Sep 5 00:11:37.969315 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 5 00:11:37.969333 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Sep 5 00:11:38.001653 kernel: usbcore: registered new interface driver usbhid Sep 5 00:11:38.001696 kernel: usbhid: USB HID core driver Sep 5 00:11:38.012325 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/sdb3 scanned by (udev-worker) (542) Sep 5 00:11:38.022336 systemd[1]: Found device dev-disk-by\x2dpartlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Sep 5 00:11:38.063581 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sdb6 scanned by (udev-worker) (566) Sep 5 00:11:38.062381 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Sep 5 00:11:38.118118 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Sep 5 00:11:38.118130 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 5 00:11:38.099558 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Sep 5 00:11:38.231305 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Sep 5 00:11:38.231394 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Sep 5 00:11:38.231463 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Sep 5 00:11:38.231541 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Sep 5 00:11:38.231550 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Sep 5 00:11:38.144480 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Sep 5 00:11:38.148282 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Sep 5 00:11:38.267985 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 5 00:11:38.298594 systemd[1]: Starting decrypt-root.service - Generate and execute a systemd-cryptsetup service to decrypt the ROOT partition... Sep 5 00:11:38.352516 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 00:11:38.352923 systemd[1]: Finished decrypt-root.service - Generate and execute a systemd-cryptsetup service to decrypt the ROOT partition. Sep 5 00:11:38.383087 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 5 00:11:38.383279 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 00:11:38.412679 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
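By this point both Micron SATA disks are attached (sda and sdb) and sdb's partition table (sdb1 ... sdb9) has been scanned. A read-only sketch that lists the same disks, their models, and their partitions from sysfs:

    #!/usr/bin/env python3
    # List block devices, their models and partitions, matching the
    # "Attached SCSI disk" and "sdb: sdb1 sdb2 ..." messages above.
    import os

    for disk in sorted(os.listdir("/sys/block")):
        model_file = os.path.join("/sys/block", disk, "device", "model")
        model = "unknown"
        if os.path.isfile(model_file):
            with open(model_file) as f:
                model = f.read().strip()
        parts = sorted(p for p in os.listdir(os.path.join("/sys/block", disk))
                       if p.startswith(disk))
        print(f"{disk} ({model}): {' '.join(parts) if parts else 'no partitions'}")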
Sep 5 00:11:38.438420 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:11:38.438473 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:11:38.466450 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:11:38.503622 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 5 00:11:38.505131 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 5 00:11:38.540615 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 00:11:38.552082 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 00:11:38.594357 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 5 00:11:38.580407 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:11:38.605393 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:11:38.633387 sh[708]: Success Sep 5 00:11:38.643391 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 00:11:38.655111 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:11:38.688174 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 5 00:11:38.699489 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 5 00:11:38.714178 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 5 00:11:38.721111 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 5 00:11:38.858421 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772 Sep 5 00:11:38.858434 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:11:38.858441 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 5 00:11:38.858449 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 00:11:38.858456 kernel: BTRFS info (device dm-0): using free space tree Sep 5 00:11:38.858462 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 5 00:11:38.860595 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 00:11:38.896995 systemd-fsck[752]: ROOT: clean, 85/553520 files, 83083/553472 blocks Sep 5 00:11:38.906669 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 5 00:11:38.928486 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 5 00:11:39.028265 kernel: EXT4-fs (sdb9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none. Sep 5 00:11:39.028658 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 5 00:11:39.038793 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 5 00:11:39.080424 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 00:11:39.190224 kernel: BTRFS info (device sdb6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 5 00:11:39.190236 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:11:39.190243 kernel: BTRFS info (device sdb6): using free space tree Sep 5 00:11:39.190250 kernel: BTRFS info (device sdb6): enabling ssd optimizations Sep 5 00:11:39.190265 kernel: BTRFS info (device sdb6): auto enabling async discard Sep 5 00:11:39.089426 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 5 00:11:39.207578 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
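The initrd has now mounted the root filesystem on /sysroot (ext4 on sdb9) and the OEM partition on /sysroot/oem (btrfs on sdb6), with /sysroot/usr following just below. While still inside the initrd, the same picture can be taken from the mount table; a read-only sketch:

    #!/usr/bin/env python3
    # Show everything mounted at or below /sysroot, matching the sysroot.mount
    # and sysroot-oem.mount messages above. Read-only sketch of the mount table.

    with open("/proc/self/mounts") as f:
        for line in f:
            source, target, fstype, options, _rest = line.split(" ", 4)
            if target == "/sysroot" or target.startswith("/sysroot/"):
                print(f"{target}: {source} type {fstype} ({options})")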
Sep 5 00:11:39.217221 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 5 00:11:39.253629 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 5 00:11:39.318135 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory Sep 5 00:11:39.328350 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory Sep 5 00:11:39.339352 initrd-setup-root[799]: cut: /sysroot/etc/shadow: No such file or directory Sep 5 00:11:39.349381 initrd-setup-root[806]: cut: /sysroot/etc/gshadow: No such file or directory Sep 5 00:11:39.417019 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 5 00:11:39.438498 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 00:11:39.449603 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:11:39.460631 systemd[1]: Reached target ignition-subsequent.target - Subsequent (Not Ignition) boot complete. Sep 5 00:11:39.503462 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 00:11:39.530433 initrd-setup-root-after-ignition[951]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:11:39.530433 initrd-setup-root-after-ignition[951]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:11:39.544521 initrd-setup-root-after-ignition[955]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:11:39.578781 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 00:11:39.579015 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 00:11:39.600805 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 00:11:39.620527 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 00:11:39.640829 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 00:11:39.650515 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 00:11:39.704165 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:11:39.729576 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 00:11:39.744686 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:11:39.753582 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 00:11:39.781798 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 00:11:39.782031 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:11:39.823713 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 00:11:39.833842 systemd[1]: Stopped target basic.target - Basic System. Sep 5 00:11:39.851948 systemd[1]: Stopped target ignition-subsequent.target - Subsequent (Not Ignition) boot complete. Sep 5 00:11:39.872849 systemd[1]: Stopped target ignition-diskful-subsequent.target - Ignition Subsequent Boot Disk Setup. Sep 5 00:11:39.896949 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 00:11:39.920962 systemd[1]: Stopped target paths.target - Path Units. Sep 5 00:11:39.938943 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 00:11:39.957843 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 5 00:11:39.978840 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 00:11:39.997842 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 00:11:40.015860 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 00:11:40.035847 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 00:11:40.055827 systemd[1]: Stopped target swap.target - Swaps. Sep 5 00:11:40.073908 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 00:11:40.074177 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:11:40.090868 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 00:11:40.091136 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:11:40.108848 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 00:11:40.109193 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:11:40.136935 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:11:40.156721 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 00:11:40.157156 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:11:40.175855 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:11:40.196732 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 00:11:40.200511 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:11:40.218722 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 00:11:40.219080 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 00:11:40.251749 systemd[1]: decrypt-root.service: Deactivated successfully. Sep 5 00:11:40.252152 systemd[1]: Stopped decrypt-root.service - Generate and execute a systemd-cryptsetup service to decrypt the ROOT partition. Sep 5 00:11:40.274905 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 00:11:40.275249 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 00:11:40.293896 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 00:11:40.294252 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:11:40.317895 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 00:11:40.318234 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 00:11:40.335931 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 00:11:40.336297 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:11:40.355881 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 00:11:40.356228 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 00:11:40.373892 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 00:11:40.374235 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:11:40.398891 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 00:11:40.399231 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:11:40.419904 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 00:11:40.420242 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 5 00:11:40.451246 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:11:40.482506 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 00:11:40.482832 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:11:40.506820 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 00:11:40.507125 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 00:11:40.524645 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 00:11:40.524757 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:11:40.545598 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 00:11:40.545753 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:11:40.575763 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 00:11:40.575923 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 00:11:40.606735 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 00:11:40.606895 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:11:40.648556 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 00:11:40.907353 systemd-journald[260]: Received SIGTERM from PID 1 (systemd). Sep 5 00:11:40.667423 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 00:11:40.667460 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:11:40.690527 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 5 00:11:40.690596 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 00:11:40.710594 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 00:11:40.710710 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:11:40.733632 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:11:40.733784 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:11:40.755716 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 00:11:40.755948 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 00:11:40.776298 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 00:11:40.776528 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 00:11:40.800412 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 00:11:40.836645 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 00:11:40.856292 systemd[1]: Switching root. 
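The block above traces the initrd teardown in full: the remaining targets and sockets are stopped, the udev database is cleaned up, initrd-cleanup.service and initrd-udevadm-cleanup-db.service finish, and systemd isolates to initrd-switch-root.target and pivots into the real root, at which point the initramfs journald instance receives SIGTERM from PID 1. A minimal Python sketch of how the same unit ordering can be inspected on a running system with systemctl (an editorial illustration, not part of the log; it assumes systemd's systemctl is on PATH):

    import subprocess

    # Units that initrd-switch-root.target pulls in, i.e. what must complete
    # before systemd pivots out of the initramfs.
    deps = subprocess.run(
        ["systemctl", "list-dependencies", "initrd-switch-root.target"],
        capture_output=True, text=True, check=False,
    )
    print(deps.stdout)

    # The pivot itself is performed by initrd-switch-root.service, whose
    # ExecStart (typically "systemctl --no-block switch-root /sysroot") can be
    # read back from the unit definition.
    show = subprocess.run(
        ["systemctl", "show", "-p", "ExecStart", "initrd-switch-root.service"],
        capture_output=True, text=True, check=False,
    )
    print(show.stdout)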
Sep 5 00:11:40.907703 systemd-journald[260]: Journal stopped Sep 5 00:11:43.535873 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 00:11:43.535888 kernel: SELinux: policy capability open_perms=1 Sep 5 00:11:43.535895 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 00:11:43.535902 kernel: SELinux: policy capability always_check_network=0 Sep 5 00:11:43.535907 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 00:11:43.535912 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 00:11:43.535918 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 00:11:43.535923 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 00:11:43.535928 kernel: audit: type=1403 audit(1725495101.119:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 00:11:43.535935 systemd[1]: Successfully loaded SELinux policy in 175.108ms. Sep 5 00:11:43.535943 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.550ms. Sep 5 00:11:43.535950 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 00:11:43.535956 systemd[1]: Detected architecture x86-64. Sep 5 00:11:43.535962 systemd[1]: Detected first boot. Sep 5 00:11:43.535968 systemd[1]: Hostname set to . Sep 5 00:11:43.535976 systemd[1]: Initializing machine ID from random generator. Sep 5 00:11:43.535982 zram_generator::config[996]: No configuration found. Sep 5 00:11:43.535989 systemd[1]: Populated /etc with preset unit settings. Sep 5 00:11:43.535995 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 5 00:11:43.536001 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 5 00:11:43.536007 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 5 00:11:43.536014 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 00:11:43.536021 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 00:11:43.536027 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 00:11:43.536034 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 00:11:43.536040 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 00:11:43.536047 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 00:11:43.536053 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 00:11:43.536059 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 00:11:43.536067 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:11:43.536074 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:11:43.536080 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 00:11:43.536086 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 00:11:43.536093 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
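The journal above picks up again inside the real root: the SELinux policy loads in roughly 175 ms, systemd 255 relabels /dev, /dev/shm, /run and /sys/fs/cgroup, detects a first boot, and initializes the machine ID from the random generator before populating /etc with preset unit settings. A small read-only Python sketch for checking the same state after boot (an illustration assuming an SELinux-capable kernel and the standard /etc/machine-id path):

    from pathlib import Path

    # SELinux enforcing state: "1" enforcing, "0" permissive. The file only
    # exists once a policy has been loaded, as logged above.
    enforce = Path("/sys/fs/selinux/enforce")
    if enforce.exists():
        print("SELinux enforcing:", enforce.read_text().strip())
    else:
        print("No SELinux policy loaded")

    # The identifier produced by "Initializing machine ID from random generator".
    print("machine-id:", Path("/etc/machine-id").read_text().strip())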
Sep 5 00:11:43.536099 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 00:11:43.536106 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Sep 5 00:11:43.536112 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:11:43.536119 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 5 00:11:43.536126 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 5 00:11:43.536134 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 5 00:11:43.536142 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 00:11:43.536149 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:11:43.536155 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:11:43.536162 systemd[1]: Reached target slices.target - Slice Units. Sep 5 00:11:43.536169 systemd[1]: Reached target swap.target - Swaps. Sep 5 00:11:43.536176 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 00:11:43.536183 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 00:11:43.536190 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:11:43.536196 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 00:11:43.536203 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:11:43.536211 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 00:11:43.536218 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 00:11:43.536225 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 00:11:43.536231 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 00:11:43.536238 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:11:43.536245 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 00:11:43.536252 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 00:11:43.536302 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 00:11:43.536311 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 00:11:43.536318 systemd[1]: Reached target machines.target - Containers. Sep 5 00:11:43.536325 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 00:11:43.536331 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:11:43.536338 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:11:43.536345 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 00:11:43.536352 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:11:43.536358 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:11:43.536367 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:11:43.536373 kernel: ACPI: bus type drm_connector registered Sep 5 00:11:43.536379 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Sep 5 00:11:43.536386 kernel: fuse: init (API version 7.39) Sep 5 00:11:43.536392 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:11:43.536399 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 00:11:43.536406 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 5 00:11:43.536412 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 5 00:11:43.536420 kernel: loop: module loaded Sep 5 00:11:43.536426 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 5 00:11:43.536433 systemd[1]: Stopped systemd-fsck-usr.service. Sep 5 00:11:43.536440 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:11:43.536455 systemd-journald[1096]: Collecting audit messages is disabled. Sep 5 00:11:43.536470 systemd-journald[1096]: Journal started Sep 5 00:11:43.536484 systemd-journald[1096]: Runtime Journal (/run/log/journal/b2d188c8f7ee431eb8153256043a69f1) is 8.0M, max 639.9M, 631.9M free. Sep 5 00:11:41.656505 systemd[1]: Queued start job for default target multi-user.target. Sep 5 00:11:41.675728 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6. Sep 5 00:11:41.676048 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 5 00:11:43.564314 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:11:43.597297 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 00:11:43.631320 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 00:11:43.664292 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:11:43.697649 systemd[1]: verity-setup.service: Deactivated successfully. Sep 5 00:11:43.697679 systemd[1]: Stopped verity-setup.service. Sep 5 00:11:43.760301 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:11:43.781453 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:11:43.791832 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 00:11:43.802529 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 00:11:43.812528 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 00:11:43.822523 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 00:11:43.832494 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 00:11:43.842494 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 00:11:43.852619 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 00:11:43.863669 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:11:43.874786 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 00:11:43.874987 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 00:11:43.887137 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:11:43.887511 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:11:43.899115 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:11:43.899481 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
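systemd-journald starts its runtime journal under /run/log/journal/<machine-id> at 8.0M with a 639.9M ceiling, and the queued modprobe@ units (configfs, dm_mod, drm, efi_pstore, fuse, loop) begin completing. A short Python sketch that measures the runtime journal's on-disk footprint and asks journald for its own accounting (illustrative only; journalctl --disk-usage is the authoritative figure):

    import os
    import subprocess

    def dir_size(path):
        """Total size in bytes of all regular files under path."""
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(root, name))
                except OSError:
                    pass
        return total

    runtime = "/run/log/journal"
    if os.path.isdir(runtime):
        print(f"{runtime}: {dir_size(runtime) / 1024 / 1024:.1f} MiB")

    # journald's own accounting, covering runtime and persistent journals.
    subprocess.run(["journalctl", "--disk-usage"], check=False)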
Sep 5 00:11:43.910116 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:11:43.910493 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:11:43.922103 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 00:11:43.922499 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 5 00:11:43.933117 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:11:43.933475 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:11:43.944133 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:11:43.954325 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 00:11:43.967153 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 00:11:43.979122 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:11:43.999471 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 00:11:44.020516 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 00:11:44.031170 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 00:11:44.040468 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 00:11:44.040496 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:11:44.052526 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 5 00:11:44.079480 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 00:11:44.091143 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 00:11:44.100496 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:11:44.101948 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 00:11:44.111898 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 00:11:44.122393 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:11:44.123042 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 00:11:44.126536 systemd-journald[1096]: Time spent on flushing to /var/log/journal/b2d188c8f7ee431eb8153256043a69f1 is 10.635ms for 1139 entries. Sep 5 00:11:44.126536 systemd-journald[1096]: System Journal (/var/log/journal/b2d188c8f7ee431eb8153256043a69f1) is 8.0M, max 195.6M, 187.6M free. Sep 5 00:11:44.167485 systemd-journald[1096]: Received client request to flush runtime journal. Sep 5 00:11:44.140411 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:11:44.141068 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:11:44.150038 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 00:11:44.159219 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 00:11:44.170394 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Sep 5 00:11:44.194319 kernel: loop0: detected capacity change from 0 to 89336 Sep 5 00:11:44.195191 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 00:11:44.206129 systemd-tmpfiles[1128]: ACLs are not supported, ignoring. Sep 5 00:11:44.206154 systemd-tmpfiles[1128]: ACLs are not supported, ignoring. Sep 5 00:11:44.224506 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 00:11:44.231324 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 00:11:44.243452 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 00:11:44.254438 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 00:11:44.265485 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 00:11:44.282434 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:11:44.292313 kernel: loop1: detected capacity change from 0 to 8 Sep 5 00:11:44.302476 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 00:11:44.316416 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 00:11:44.342269 kernel: loop2: detected capacity change from 0 to 140728 Sep 5 00:11:44.353450 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 5 00:11:44.364993 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 00:11:44.374836 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 00:11:44.375321 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 5 00:11:44.386968 udevadm[1132]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 5 00:11:44.395865 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 00:11:44.419309 kernel: loop3: detected capacity change from 0 to 211296 Sep 5 00:11:44.431426 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:11:44.439483 systemd-tmpfiles[1150]: ACLs are not supported, ignoring. Sep 5 00:11:44.439494 systemd-tmpfiles[1150]: ACLs are not supported, ignoring. Sep 5 00:11:44.443491 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:11:44.473838 ldconfig[1122]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 00:11:44.475024 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 00:11:44.488272 kernel: loop4: detected capacity change from 0 to 89336 Sep 5 00:11:44.514269 kernel: loop5: detected capacity change from 0 to 8 Sep 5 00:11:44.533266 kernel: loop6: detected capacity change from 0 to 140728 Sep 5 00:11:44.550104 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 00:11:44.563265 kernel: loop7: detected capacity change from 0 to 211296 Sep 5 00:11:44.576643 (sd-merge)[1154]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Sep 5 00:11:44.576907 (sd-merge)[1154]: Merged extensions into '/usr'. Sep 5 00:11:44.585522 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:11:44.597780 systemd-udevd[1157]: Using default interface naming scheme 'v255'. 
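The (sd-merge) lines show systemd-sysext merging the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-packet' extension images into /usr, after which udevd is brought up against the merged tree. A minimal Python sketch for inspecting extension state on a running Flatcar host (assumptions: systemd-sysext is on PATH and extension images live in the usual /etc/extensions and /var/lib/extensions locations):

    import os
    import subprocess

    # Hierarchies currently overlaid by systemd-sysext and the images backing them.
    subprocess.run(["systemd-sysext", "status"], check=False)

    # Raw extension images/directories that systemd-sysext would pick up.
    for directory in ("/etc/extensions", "/var/lib/extensions"):
        if os.path.isdir(directory):
            print(directory, "->", sorted(os.listdir(directory)))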
Sep 5 00:11:44.598006 systemd[1]: Reloading requested from client PID 1127 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 00:11:44.598012 systemd[1]: Reloading... Sep 5 00:11:44.630277 zram_generator::config[1180]: No configuration found. Sep 5 00:11:44.644380 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Sep 5 00:11:44.644469 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (1255) Sep 5 00:11:44.654277 kernel: ACPI: button: Sleep Button [SLPB] Sep 5 00:11:44.658289 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1195) Sep 5 00:11:44.669271 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 5 00:11:44.676270 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1195) Sep 5 00:11:44.698319 kernel: ACPI: button: Power Button [PWRF] Sep 5 00:11:44.782273 kernel: IPMI message handler: version 39.2 Sep 5 00:11:44.782331 kernel: mousedev: PS/2 mouse device common for all mice Sep 5 00:11:44.858003 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:11:44.879909 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Sep 5 00:11:44.880054 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Sep 5 00:11:44.892267 kernel: ipmi device interface Sep 5 00:11:44.892294 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Sep 5 00:11:44.892424 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Sep 5 00:11:44.893335 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Sep 5 00:11:44.911722 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Sep 5 00:11:44.911862 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Sep 5 00:11:44.976675 systemd[1]: Reloading finished in 378 ms. Sep 5 00:11:44.981266 kernel: iTCO_vendor_support: vendor-support=0 Sep 5 00:11:44.981312 kernel: ipmi_si: IPMI System Interface driver Sep 5 00:11:45.003169 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Sep 5 00:11:45.021464 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Sep 5 00:11:45.033262 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Sep 5 00:11:45.055074 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Sep 5 00:11:45.073657 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Sep 5 00:11:45.093198 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Sep 5 00:11:45.109545 kernel: ipmi_si: Adding ACPI-specified kcs state machine Sep 5 00:11:45.129940 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Sep 5 00:11:45.195627 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Sep 5 00:11:45.195742 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Sep 5 00:11:45.195817 kernel: iTCO_wdt iTCO_wdt: initialized. 
heartbeat=30 sec (nowayout=0) Sep 5 00:11:45.224265 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Sep 5 00:11:45.288109 kernel: intel_rapl_common: Found RAPL domain package Sep 5 00:11:45.288146 kernel: intel_rapl_common: Found RAPL domain core Sep 5 00:11:45.304105 kernel: intel_rapl_common: Found RAPL domain dram Sep 5 00:11:45.319446 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:11:45.331471 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 00:11:45.381266 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Sep 5 00:11:45.382454 systemd[1]: Starting ensure-sysext.service... Sep 5 00:11:45.394927 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 00:11:45.399265 kernel: ipmi_ssif: IPMI SSIF Interface driver Sep 5 00:11:45.399955 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:11:45.411017 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:11:45.442684 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:11:45.450495 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 00:11:45.450698 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 00:11:45.451182 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 00:11:45.451355 systemd-tmpfiles[1330]: ACLs are not supported, ignoring. Sep 5 00:11:45.451389 systemd-tmpfiles[1330]: ACLs are not supported, ignoring. Sep 5 00:11:45.453002 systemd-tmpfiles[1330]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:11:45.453006 systemd-tmpfiles[1330]: Skipping /boot Sep 5 00:11:45.453631 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 5 00:11:45.457051 systemd-tmpfiles[1330]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:11:45.457055 systemd-tmpfiles[1330]: Skipping /boot Sep 5 00:11:45.464538 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 00:11:45.469508 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:11:45.470214 systemd[1]: Reloading requested from client PID 1327 ('systemctl') (unit ensure-sysext.service)... Sep 5 00:11:45.470220 systemd[1]: Reloading... Sep 5 00:11:45.506314 zram_generator::config[1362]: No configuration found. Sep 5 00:11:45.555666 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:11:45.609646 systemd[1]: Reloading finished in 139 ms. Sep 5 00:11:45.637580 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:11:45.652741 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 00:11:45.674830 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 00:11:45.686035 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
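These kernel messages cover platform management bring-up: ipmi_si probes the BMC via SMBIOS, then via ACPI, keeps the ACPI-specified KCS state machine at I/O port 0xca2, compensates for the BMC's missing recv-irq-clear support, and reports the BMC identity (man_id 0x002a7c, prod_id 0x1b0f, dev_id 0x20) before the KCS interface initializes. If ipmitool happens to be installed, the same identity can be read back from userspace; a hedged Python sketch (ipmitool and a /dev/ipmi0 character device are assumptions, neither appears in this log):

    import subprocess

    # Query the BMC over the in-band KCS interface set up by ipmi_si. Requires
    # root and the ipmi_devintf/ipmi_si modules providing /dev/ipmi0.
    result = subprocess.run(
        ["ipmitool", "mc", "info"],
        capture_output=True, text=True, check=False,
    )
    if result.returncode == 0:
        print(result.stdout)
    else:
        print("ipmitool not available or BMC not reachable:", result.stderr.strip())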
Sep 5 00:11:45.693104 augenrules[1435]: No rules Sep 5 00:11:45.698266 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 00:11:45.705381 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 00:11:45.711541 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:11:45.723038 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 00:11:45.736228 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 00:11:45.746943 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 00:11:45.756524 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 00:11:45.766729 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 00:11:45.777600 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 5 00:11:45.788565 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 00:11:45.795286 systemd-networkd[1329]: lo: Link UP Sep 5 00:11:45.795289 systemd-networkd[1329]: lo: Gained carrier Sep 5 00:11:45.797690 systemd-networkd[1329]: bond0: netdev ready Sep 5 00:11:45.798064 systemd-networkd[1329]: Enumeration completed Sep 5 00:11:45.799090 systemd-networkd[1329]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:5c:3e:c4.network. Sep 5 00:11:45.800733 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:11:45.814338 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:11:45.824420 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:11:45.824534 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:11:45.826974 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 5 00:11:45.838020 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:11:45.839912 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 00:11:45.848033 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:11:45.856456 systemd-resolved[1441]: Positive Trust Anchors: Sep 5 00:11:45.856461 systemd-resolved[1441]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:11:45.856487 systemd-resolved[1441]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:11:45.859203 systemd-resolved[1441]: Using system hostname 'ci-4054.1.0-a-b942d58550'. Sep 5 00:11:45.861057 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:11:45.870520 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
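systemd-resolved comes up with the built-in root DNSSEC trust anchor, negative trust anchors for the private and special-use zones, and the system hostname 'ci-4054.1.0-a-b942d58550', while systemd-networkd completes enumeration, prepares bond0, and matches enp1s0f0np0 against its MAC-addressed 10-1c:34:da:5c:3e:c4.network file. A small Python sketch for reviewing the resulting resolver and link state (resolvectl and networkctl ship with systemd; exact output formats vary by version):

    import subprocess

    # Global resolver configuration, including the DNSSEC trust anchors in effect.
    subprocess.run(["resolvectl", "status"], check=False)

    # Link enumeration as systemd-networkd sees it (bond0 and its ports here).
    subprocess.run(["networkctl", "list"], check=False)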
Sep 5 00:11:45.871269 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 00:11:45.883156 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 00:11:45.892459 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 00:11:45.892527 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:11:45.893520 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 00:11:45.904765 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 00:11:45.914610 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:11:45.914729 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:11:45.925869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:11:45.926011 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:11:45.939220 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:11:45.939493 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:11:45.950585 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 00:11:45.978284 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Sep 5 00:11:45.980931 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:11:45.981353 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:11:46.006313 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Sep 5 00:11:46.006843 systemd-networkd[1329]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:5c:3e:c5.network. Sep 5 00:11:46.011572 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:11:46.024107 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:11:46.038066 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:11:46.047619 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:11:46.048052 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 00:11:46.048422 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:11:46.051831 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:11:46.052181 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:11:46.064904 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:11:46.065248 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:11:46.077745 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:11:46.078079 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 5 00:11:46.100055 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:11:46.100713 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:11:46.116916 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:11:46.129085 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:11:46.140036 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:11:46.164605 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:11:46.174527 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:11:46.174691 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 00:11:46.174801 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:11:46.176058 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:11:46.176205 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:11:46.189088 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:11:46.189354 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:11:46.205534 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:11:46.205874 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:11:46.222345 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Sep 5 00:11:46.241974 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:11:46.242156 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:11:46.249055 systemd-networkd[1329]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Sep 5 00:11:46.249269 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Sep 5 00:11:46.249622 systemd-networkd[1329]: enp1s0f0np0: Link UP Sep 5 00:11:46.251797 systemd-networkd[1329]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:5c:3e:c4.network. Sep 5 00:11:46.252047 systemd-networkd[1329]: enp1s0f1np1: Link UP Sep 5 00:11:46.252309 systemd-networkd[1329]: bond0: Link UP Sep 5 00:11:46.268570 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:11:46.270292 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Sep 5 00:11:46.271731 systemd-networkd[1329]: enp1s0f0np0: Gained carrier Sep 5 00:11:46.281705 systemd[1]: Finished ensure-sysext.service. Sep 5 00:11:46.283402 systemd-networkd[1329]: enp1s0f1np1: Gained carrier Sep 5 00:11:46.291410 systemd[1]: Reached target network.target - Network. Sep 5 00:11:46.296470 systemd-networkd[1329]: bond0: Gained carrier Sep 5 00:11:46.300404 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:11:46.311380 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
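Here the mlx5 ports report link, networkd applies 05-bond0.network, reconfigures and enslaves enp1s0f0np0 and enp1s0f1np1, and bond0 gains carrier; the kernel also warns that no 802.3ad response has yet arrived from the link partner. A short Python sketch for checking bond health from the bonding driver's /proc interface (assumes the bond is named bond0, as in this log):

    from pathlib import Path

    bond = Path("/proc/net/bonding/bond0")
    if bond.exists():
        for line in bond.read_text().splitlines():
            # Keep only the lines that answer "is the bond actually up".
            if line.startswith(("Bonding Mode", "MII Status", "Slave Interface",
                                "Speed", "Duplex")):
                print(line.strip())
    else:
        print("bonding driver not loaded or bond0 not configured")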
Sep 5 00:11:46.311410 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:11:46.322377 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 00:11:46.356950 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 00:11:46.372308 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Sep 5 00:11:46.372330 kernel: bond0: active interface up! Sep 5 00:11:46.397396 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:11:46.407371 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 00:11:46.418347 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 00:11:46.429335 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 00:11:46.440327 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 00:11:46.440343 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:11:46.448328 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 00:11:46.458401 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 00:11:46.468378 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 00:11:46.479337 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:11:46.496843 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 00:11:46.502304 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Sep 5 00:11:46.512064 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 00:11:46.521876 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 00:11:46.531632 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 00:11:46.541382 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:11:46.551333 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:11:46.559351 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:11:46.559368 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:11:46.569361 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 00:11:46.579992 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 5 00:11:46.589911 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 00:11:46.599086 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 00:11:46.609186 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 00:11:46.611460 jq[1492]: false Sep 5 00:11:46.616114 dbus-daemon[1491]: [system] SELinux support is enabled Sep 5 00:11:46.618387 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 00:11:46.619045 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
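With the network up, systemd-timesyncd starts and time-set.target is reached, followed by the usual path, timer and socket units (motdgen, logrotate, mdadm, docker.socket, sshd.socket) and the start of dbus, coreos-metadata and containerd. A minimal Python sketch for confirming clock synchronization afterwards (timedatectl is part of systemd; the timesync-status verb only works when systemd-timesyncd is the active NTP client):

    import subprocess

    # High-level clock state: NTP enabled, clock synchronized, RTC, time zone.
    subprocess.run(["timedatectl", "show"], check=False)

    # Per-server detail from systemd-timesyncd (server, stratum, poll interval).
    subprocess.run(["timedatectl", "timesync-status"], check=False)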
Sep 5 00:11:46.620297 coreos-metadata[1490]: Sep 05 00:11:46.620 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 5 00:11:46.626582 extend-filesystems[1495]: Found loop4 Sep 5 00:11:46.626582 extend-filesystems[1495]: Found loop5 Sep 5 00:11:46.635422 extend-filesystems[1495]: Found loop6 Sep 5 00:11:46.635422 extend-filesystems[1495]: Found loop7 Sep 5 00:11:46.635422 extend-filesystems[1495]: Found sda Sep 5 00:11:46.635422 extend-filesystems[1495]: Found sdb Sep 5 00:11:46.635422 extend-filesystems[1495]: Found sdb1 Sep 5 00:11:46.635422 extend-filesystems[1495]: Found sdb2 Sep 5 00:11:46.635422 extend-filesystems[1495]: Found sdb3 Sep 5 00:11:46.635422 extend-filesystems[1495]: Found usr Sep 5 00:11:46.635422 extend-filesystems[1495]: Found sdb4 Sep 5 00:11:46.635422 extend-filesystems[1495]: Found sdb6 Sep 5 00:11:46.635422 extend-filesystems[1495]: Found sdb7 Sep 5 00:11:46.635422 extend-filesystems[1495]: Found sdb9 Sep 5 00:11:46.635422 extend-filesystems[1495]: Checking size of /dev/sdb9 Sep 5 00:11:46.635422 extend-filesystems[1495]: Resized partition /dev/sdb9 Sep 5 00:11:46.811370 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Sep 5 00:11:46.811389 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (1255) Sep 5 00:11:46.629168 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 00:11:46.811461 extend-filesystems[1504]: resize2fs 1.47.1 (20-May-2024) Sep 5 00:11:46.667642 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 00:11:46.685075 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 00:11:46.705773 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 00:11:46.727170 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Sep 5 00:11:46.740780 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 00:11:46.835651 sshd_keygen[1519]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 00:11:46.741198 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 00:11:46.835779 update_engine[1521]: I0905 00:11:46.798934 1521 main.cc:92] Flatcar Update Engine starting Sep 5 00:11:46.835779 update_engine[1521]: I0905 00:11:46.799734 1521 update_check_scheduler.cc:74] Next update check in 2m48s Sep 5 00:11:46.748423 systemd-logind[1516]: Watching system buttons on /dev/input/event3 (Power Button) Sep 5 00:11:46.836016 jq[1522]: true Sep 5 00:11:46.748434 systemd-logind[1516]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 5 00:11:46.748443 systemd-logind[1516]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Sep 5 00:11:46.748737 systemd-logind[1516]: New seat seat0. Sep 5 00:11:46.790829 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 00:11:46.795627 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 00:11:46.827700 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 00:11:46.858452 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 00:11:46.858556 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 00:11:46.858724 systemd[1]: motdgen.service: Deactivated successfully. 
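coreos-metadata begins fetching instance metadata from https://metadata.packet.net/metadata, extend-filesystems walks the block devices and flags /dev/sdb9 for resizing, sshd host keys are generated, and update_engine schedules its first check. A hedged Python sketch of the metadata fetch itself (the URL is the one in the log, but it only answers from inside the Equinix Metal/Packet network, and the 'id'/'hostname' field names are assumptions about the response shape):

    import json
    import urllib.request

    URL = "https://metadata.packet.net/metadata"  # endpoint shown in the log
    try:
        with urllib.request.urlopen(URL, timeout=5) as response:
            metadata = json.load(response)
        # Field names below are assumptions; inspect the full document as needed.
        print("instance id:", metadata.get("id"))
        print("hostname:", metadata.get("hostname"))
    except Exception as exc:  # unreachable off-platform, or transient failure
        print("metadata fetch failed:", exc)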
Sep 5 00:11:46.858809 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 00:11:46.869712 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 00:11:46.869801 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 00:11:46.881435 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 00:11:46.895177 (ntainerd)[1534]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 00:11:46.896680 jq[1533]: false Sep 5 00:11:46.897780 systemd[1]: update-ssh-keys-after-ignition.service: Skipped due to 'exec-condition'. Sep 5 00:11:46.897876 systemd[1]: Condition check resulted in update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition being skipped. Sep 5 00:11:46.898558 dbus-daemon[1491]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 5 00:11:46.899571 tar[1531]: linux-amd64/helm Sep 5 00:11:46.905850 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Sep 5 00:11:46.905943 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Sep 5 00:11:46.907960 systemd[1]: Started update-engine.service - Update Engine. Sep 5 00:11:46.944525 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 00:11:46.954265 systemd[1]: Starting sshkeys.service... Sep 5 00:11:46.961332 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 00:11:46.961440 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 00:11:46.972405 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 00:11:46.972484 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 00:11:46.998430 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 00:11:47.010576 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 00:11:47.010697 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 00:11:47.022125 locksmithd[1560]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 00:11:47.023035 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 00:11:47.034253 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 5 00:11:47.046074 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 5 00:11:47.058670 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 00:11:47.060565 containerd[1534]: time="2024-09-05T00:11:47.060525048Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20 Sep 5 00:11:47.070108 coreos-metadata[1567]: Sep 05 00:11:47.070 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 5 00:11:47.071109 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 00:11:47.073459 containerd[1534]: time="2024-09-05T00:11:47.073435795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Sep 5 00:11:47.074182 containerd[1534]: time="2024-09-05T00:11:47.074163491Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:11:47.074220 containerd[1534]: time="2024-09-05T00:11:47.074180886Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 5 00:11:47.074220 containerd[1534]: time="2024-09-05T00:11:47.074194441Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 5 00:11:47.074321 containerd[1534]: time="2024-09-05T00:11:47.074310688Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 5 00:11:47.074354 containerd[1534]: time="2024-09-05T00:11:47.074323644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 5 00:11:47.074388 containerd[1534]: time="2024-09-05T00:11:47.074372637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:11:47.074416 containerd[1534]: time="2024-09-05T00:11:47.074387127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:11:47.074506 containerd[1534]: time="2024-09-05T00:11:47.074494214Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:11:47.074544 containerd[1534]: time="2024-09-05T00:11:47.074505012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 5 00:11:47.074544 containerd[1534]: time="2024-09-05T00:11:47.074517428Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:11:47.074544 containerd[1534]: time="2024-09-05T00:11:47.074527802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 5 00:11:47.074616 containerd[1534]: time="2024-09-05T00:11:47.074587357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:11:47.074729 containerd[1534]: time="2024-09-05T00:11:47.074718400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:11:47.074801 containerd[1534]: time="2024-09-05T00:11:47.074790194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:11:47.074833 containerd[1534]: time="2024-09-05T00:11:47.074801285Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 00:11:47.074876 containerd[1534]: time="2024-09-05T00:11:47.074866355Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 5 00:11:47.074914 containerd[1534]: time="2024-09-05T00:11:47.074904607Z" level=info msg="metadata content store policy set" policy=shared Sep 5 00:11:47.080232 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Sep 5 00:11:47.089507 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 00:11:47.100687 containerd[1534]: time="2024-09-05T00:11:47.100666785Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 00:11:47.100727 containerd[1534]: time="2024-09-05T00:11:47.100704563Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 5 00:11:47.100727 containerd[1534]: time="2024-09-05T00:11:47.100721261Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 5 00:11:47.100785 containerd[1534]: time="2024-09-05T00:11:47.100735021Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 00:11:47.100785 containerd[1534]: time="2024-09-05T00:11:47.100750470Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 5 00:11:47.100872 containerd[1534]: time="2024-09-05T00:11:47.100860871Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 5 00:11:47.101000 containerd[1534]: time="2024-09-05T00:11:47.100989353Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 00:11:47.101069 containerd[1534]: time="2024-09-05T00:11:47.101058267Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 5 00:11:47.101101 containerd[1534]: time="2024-09-05T00:11:47.101070654Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 00:11:47.101101 containerd[1534]: time="2024-09-05T00:11:47.101082828Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 5 00:11:47.101101 containerd[1534]: time="2024-09-05T00:11:47.101096495Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 5 00:11:47.101180 containerd[1534]: time="2024-09-05T00:11:47.101108369Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 5 00:11:47.101180 containerd[1534]: time="2024-09-05T00:11:47.101121572Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 5 00:11:47.101180 containerd[1534]: time="2024-09-05T00:11:47.101134062Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 00:11:47.101180 containerd[1534]: time="2024-09-05T00:11:47.101148245Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 5 00:11:47.101180 containerd[1534]: time="2024-09-05T00:11:47.101160900Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 00:11:47.101180 containerd[1534]: time="2024-09-05T00:11:47.101174991Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Sep 5 00:11:47.101329 containerd[1534]: time="2024-09-05T00:11:47.101185965Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 5 00:11:47.101329 containerd[1534]: time="2024-09-05T00:11:47.101204470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 00:11:47.101329 containerd[1534]: time="2024-09-05T00:11:47.101221182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 00:11:47.101329 containerd[1534]: time="2024-09-05T00:11:47.101234540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 5 00:11:47.101329 containerd[1534]: time="2024-09-05T00:11:47.101247967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 5 00:11:47.101329 containerd[1534]: time="2024-09-05T00:11:47.101264884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 00:11:47.101329 containerd[1534]: time="2024-09-05T00:11:47.101279437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 5 00:11:47.101329 containerd[1534]: time="2024-09-05T00:11:47.101293149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 5 00:11:47.101329 containerd[1534]: time="2024-09-05T00:11:47.101305789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 00:11:47.101329 containerd[1534]: time="2024-09-05T00:11:47.101319090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 5 00:11:47.101578 containerd[1534]: time="2024-09-05T00:11:47.101332781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 00:11:47.101578 containerd[1534]: time="2024-09-05T00:11:47.101344465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 5 00:11:47.101578 containerd[1534]: time="2024-09-05T00:11:47.101356214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 5 00:11:47.101578 containerd[1534]: time="2024-09-05T00:11:47.101367617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 5 00:11:47.101578 containerd[1534]: time="2024-09-05T00:11:47.101382691Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 5 00:11:47.101578 containerd[1534]: time="2024-09-05T00:11:47.101400700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 5 00:11:47.101578 containerd[1534]: time="2024-09-05T00:11:47.101413244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 5 00:11:47.101578 containerd[1534]: time="2024-09-05T00:11:47.101424239Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 5 00:11:47.101578 containerd[1534]: time="2024-09-05T00:11:47.101460161Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Sep 5 00:11:47.101578 containerd[1534]: time="2024-09-05T00:11:47.101476412Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 5 00:11:47.101578 containerd[1534]: time="2024-09-05T00:11:47.101487558Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 5 00:11:47.101578 containerd[1534]: time="2024-09-05T00:11:47.101499765Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 5 00:11:47.101578 containerd[1534]: time="2024-09-05T00:11:47.101510301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 5 00:11:47.101884 containerd[1534]: time="2024-09-05T00:11:47.101523460Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 5 00:11:47.101884 containerd[1534]: time="2024-09-05T00:11:47.101533768Z" level=info msg="NRI interface is disabled by configuration." Sep 5 00:11:47.101884 containerd[1534]: time="2024-09-05T00:11:47.101543859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 5 00:11:47.101958 containerd[1534]: time="2024-09-05T00:11:47.101801163Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 00:11:47.101958 containerd[1534]: time="2024-09-05T00:11:47.101858327Z" level=info msg="Connect containerd service" Sep 5 00:11:47.101958 containerd[1534]: time="2024-09-05T00:11:47.101884208Z" level=info msg="using legacy CRI server" Sep 5 00:11:47.101958 containerd[1534]: time="2024-09-05T00:11:47.101891341Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 00:11:47.102139 containerd[1534]: time="2024-09-05T00:11:47.101963481Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 00:11:47.102343 containerd[1534]: time="2024-09-05T00:11:47.102329349Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:11:47.102483 containerd[1534]: time="2024-09-05T00:11:47.102456860Z" level=info msg="Start subscribing containerd event" Sep 5 00:11:47.102513 containerd[1534]: time="2024-09-05T00:11:47.102494864Z" level=info msg="Start recovering state" Sep 5 00:11:47.102539 containerd[1534]: time="2024-09-05T00:11:47.102523204Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 00:11:47.102539 containerd[1534]: time="2024-09-05T00:11:47.102529409Z" level=info msg="Start event monitor" Sep 5 00:11:47.102595 containerd[1534]: time="2024-09-05T00:11:47.102549411Z" level=info msg="Start snapshots syncer" Sep 5 00:11:47.102595 containerd[1534]: time="2024-09-05T00:11:47.102559853Z" level=info msg="Start cni network conf syncer for default" Sep 5 00:11:47.102595 containerd[1534]: time="2024-09-05T00:11:47.102562767Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 00:11:47.102715 containerd[1534]: time="2024-09-05T00:11:47.102566970Z" level=info msg="Start streaming server" Sep 5 00:11:47.102769 containerd[1534]: time="2024-09-05T00:11:47.102751042Z" level=info msg="containerd successfully booted in 0.043012s" Sep 5 00:11:47.102799 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 00:11:47.210050 tar[1531]: linux-amd64/LICENSE Sep 5 00:11:47.210134 tar[1531]: linux-amd64/README.md Sep 5 00:11:47.222032 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 00:11:47.239266 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Sep 5 00:11:47.264385 extend-filesystems[1504]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Sep 5 00:11:47.264385 extend-filesystems[1504]: old_desc_blocks = 1, new_desc_blocks = 56 Sep 5 00:11:47.264385 extend-filesystems[1504]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Sep 5 00:11:47.306462 extend-filesystems[1495]: Resized filesystem in /dev/sdb9 Sep 5 00:11:47.264832 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 00:11:47.264919 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 5 00:11:48.215348 systemd-networkd[1329]: bond0: Gained IPv6LL Sep 5 00:11:48.217115 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 00:11:48.228985 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 00:11:48.252447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:11:48.262999 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 00:11:48.280810 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 00:11:48.902144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:11:48.913964 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:11:48.964509 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Sep 5 00:11:48.964894 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Sep 5 00:11:49.428998 kubelet[1608]: E0905 00:11:49.428930 1608 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:11:49.430328 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:11:49.430406 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:11:49.626775 coreos-metadata[1567]: Sep 05 00:11:49.626 INFO Fetch successful Sep 5 00:11:49.685898 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 00:11:49.700529 systemd[1]: Started sshd@0-147.28.180.221:22-139.178.89.65:34240.service - OpenSSH per-connection server daemon (139.178.89.65:34240). Sep 5 00:11:49.707982 unknown[1567]: wrote ssh authorized keys file for user: core Sep 5 00:11:49.730421 update-ssh-keys[1632]: Updated "/home/core/.ssh/authorized_keys" Sep 5 00:11:49.730727 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 5 00:11:49.733060 systemd-timesyncd[1485]: Contacted time server 171.66.97.126:123 (0.flatcar.pool.ntp.org). Sep 5 00:11:49.733083 systemd-timesyncd[1485]: Initial clock synchronization to Thu 2024-09-05 00:11:49.486129 UTC. Sep 5 00:11:49.743387 systemd[1]: Finished sshkeys.service. Sep 5 00:11:49.799781 sshd[1630]: Accepted publickey for core from 139.178.89.65 port 34240 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:11:49.801007 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:49.806692 systemd-logind[1516]: New session 1 of user core. Sep 5 00:11:49.807480 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 00:11:49.829557 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 00:11:49.843126 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 00:11:49.873545 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 00:11:49.884471 (systemd)[1638]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 00:11:49.958352 systemd[1638]: Queued start job for default target default.target. Sep 5 00:11:49.967920 systemd[1638]: Created slice app.slice - User Application Slice. 
Sep 5 00:11:49.967933 systemd[1638]: Reached target paths.target - Paths. Sep 5 00:11:49.967942 systemd[1638]: Reached target timers.target - Timers. Sep 5 00:11:49.968569 systemd[1638]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 00:11:49.973958 systemd[1638]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 00:11:49.973986 systemd[1638]: Reached target sockets.target - Sockets. Sep 5 00:11:49.973995 systemd[1638]: Reached target basic.target - Basic System. Sep 5 00:11:49.974016 systemd[1638]: Reached target default.target - Main User Target. Sep 5 00:11:49.974031 systemd[1638]: Startup finished in 85ms. Sep 5 00:11:49.974168 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 00:11:49.991543 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 00:11:50.056104 systemd[1]: Started sshd@1-147.28.180.221:22-139.178.89.65:59194.service - OpenSSH per-connection server daemon (139.178.89.65:59194). Sep 5 00:11:50.092639 sshd[1649]: Accepted publickey for core from 139.178.89.65 port 59194 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:11:50.093347 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:50.095584 systemd-logind[1516]: New session 2 of user core. Sep 5 00:11:50.109509 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 00:11:50.168858 sshd[1649]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:50.181957 systemd[1]: sshd@1-147.28.180.221:22-139.178.89.65:59194.service: Deactivated successfully. Sep 5 00:11:50.182744 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 00:11:50.183472 systemd-logind[1516]: Session 2 logged out. Waiting for processes to exit. Sep 5 00:11:50.184288 systemd[1]: Started sshd@2-147.28.180.221:22-139.178.89.65:59202.service - OpenSSH per-connection server daemon (139.178.89.65:59202). Sep 5 00:11:50.196273 systemd-logind[1516]: Removed session 2. Sep 5 00:11:50.228431 sshd[1656]: Accepted publickey for core from 139.178.89.65 port 59202 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:11:50.229035 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:50.231445 systemd-logind[1516]: New session 3 of user core. Sep 5 00:11:50.240477 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 00:11:50.307121 sshd[1656]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:50.314635 systemd[1]: sshd@2-147.28.180.221:22-139.178.89.65:59202.service: Deactivated successfully. Sep 5 00:11:50.317500 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 00:11:50.317863 systemd-logind[1516]: Session 3 logged out. Waiting for processes to exit. Sep 5 00:11:50.318449 systemd-logind[1516]: Removed session 3. Sep 5 00:11:50.366831 coreos-metadata[1490]: Sep 05 00:11:50.366 INFO Fetch successful Sep 5 00:11:50.463450 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 5 00:11:50.485551 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Sep 5 00:11:51.324335 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Sep 5 00:11:51.336526 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 00:11:51.346727 systemd[1]: Startup finished in 2.662s (kernel) + 7.103s (initrd) + 10.401s (userspace) = 20.166s. 
Sep 5 00:11:51.366834 login[1585]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 5 00:11:51.369989 systemd-logind[1516]: New session 4 of user core. Sep 5 00:11:51.370822 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 00:11:51.381289 login[1581]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 5 00:11:51.384001 systemd-logind[1516]: New session 5 of user core. Sep 5 00:11:51.399551 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 00:11:52.008182 systemd[1]: Started sshd@3-147.28.180.221:22-123.58.218.88:37038.service - OpenSSH per-connection server daemon (123.58.218.88:37038). Sep 5 00:11:53.332246 sshd[1694]: Invalid user matheus from 123.58.218.88 port 37038 Sep 5 00:11:53.491772 sshd[1694]: Received disconnect from 123.58.218.88 port 37038:11: Bye Bye [preauth] Sep 5 00:11:53.491772 sshd[1694]: Disconnected from invalid user matheus 123.58.218.88 port 37038 [preauth] Sep 5 00:11:53.494935 systemd[1]: sshd@3-147.28.180.221:22-123.58.218.88:37038.service: Deactivated successfully. Sep 5 00:11:59.682324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 00:11:59.697603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:11:59.897656 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:11:59.902216 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:11:59.939856 kubelet[1707]: E0905 00:11:59.939708 1707 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:11:59.941993 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:11:59.942066 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:12:00.147154 systemd[1]: Started sshd@4-147.28.180.221:22-139.178.89.65:44338.service - OpenSSH per-connection server daemon (139.178.89.65:44338). Sep 5 00:12:00.176564 sshd[1727]: Accepted publickey for core from 139.178.89.65 port 44338 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:12:00.177225 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:12:00.179708 systemd-logind[1516]: New session 6 of user core. Sep 5 00:12:00.180418 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 00:12:00.230057 sshd[1727]: pam_unix(sshd:session): session closed for user core Sep 5 00:12:00.245829 systemd[1]: sshd@4-147.28.180.221:22-139.178.89.65:44338.service: Deactivated successfully. Sep 5 00:12:00.246573 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 00:12:00.247248 systemd-logind[1516]: Session 6 logged out. Waiting for processes to exit. Sep 5 00:12:00.247955 systemd[1]: Started sshd@5-147.28.180.221:22-139.178.89.65:44346.service - OpenSSH per-connection server daemon (139.178.89.65:44346). Sep 5 00:12:00.248449 systemd-logind[1516]: Removed session 6. 
Sep 5 00:12:00.287896 sshd[1734]: Accepted publickey for core from 139.178.89.65 port 44346 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:12:00.288825 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:12:00.292215 systemd-logind[1516]: New session 7 of user core. Sep 5 00:12:00.310578 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 00:12:00.363999 sshd[1734]: pam_unix(sshd:session): session closed for user core Sep 5 00:12:00.382109 systemd[1]: sshd@5-147.28.180.221:22-139.178.89.65:44346.service: Deactivated successfully. Sep 5 00:12:00.382826 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 00:12:00.383492 systemd-logind[1516]: Session 7 logged out. Waiting for processes to exit. Sep 5 00:12:00.384119 systemd[1]: Started sshd@6-147.28.180.221:22-139.178.89.65:44354.service - OpenSSH per-connection server daemon (139.178.89.65:44354). Sep 5 00:12:00.384745 systemd-logind[1516]: Removed session 7. Sep 5 00:12:00.414579 sshd[1741]: Accepted publickey for core from 139.178.89.65 port 44354 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:12:00.415377 sshd[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:12:00.418198 systemd-logind[1516]: New session 8 of user core. Sep 5 00:12:00.430506 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 00:12:00.482999 sshd[1741]: pam_unix(sshd:session): session closed for user core Sep 5 00:12:00.492906 systemd[1]: sshd@6-147.28.180.221:22-139.178.89.65:44354.service: Deactivated successfully. Sep 5 00:12:00.493650 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 00:12:00.494350 systemd-logind[1516]: Session 8 logged out. Waiting for processes to exit. Sep 5 00:12:00.494966 systemd[1]: Started sshd@7-147.28.180.221:22-139.178.89.65:44362.service - OpenSSH per-connection server daemon (139.178.89.65:44362). Sep 5 00:12:00.495442 systemd-logind[1516]: Removed session 8. Sep 5 00:12:00.526698 sshd[1748]: Accepted publickey for core from 139.178.89.65 port 44362 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:12:00.527437 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:12:00.530109 systemd-logind[1516]: New session 9 of user core. Sep 5 00:12:00.546464 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 00:12:00.609138 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 00:12:00.609296 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:12:00.623852 sudo[1751]: pam_unix(sudo:session): session closed for user root Sep 5 00:12:00.624818 sshd[1748]: pam_unix(sshd:session): session closed for user core Sep 5 00:12:00.634421 systemd[1]: sshd@7-147.28.180.221:22-139.178.89.65:44362.service: Deactivated successfully. Sep 5 00:12:00.635430 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 00:12:00.636325 systemd-logind[1516]: Session 9 logged out. Waiting for processes to exit. Sep 5 00:12:00.637268 systemd[1]: Started sshd@8-147.28.180.221:22-139.178.89.65:44368.service - OpenSSH per-connection server daemon (139.178.89.65:44368). Sep 5 00:12:00.638102 systemd-logind[1516]: Removed session 9. 
Sep 5 00:12:00.668019 sshd[1756]: Accepted publickey for core from 139.178.89.65 port 44368 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:12:00.668667 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:12:00.671052 systemd-logind[1516]: New session 10 of user core. Sep 5 00:12:00.685808 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 00:12:00.755386 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 00:12:00.756197 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:12:00.764397 sudo[1760]: pam_unix(sudo:session): session closed for user root Sep 5 00:12:00.778214 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 00:12:00.779102 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:12:00.811725 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 5 00:12:00.812916 auditctl[1763]: No rules Sep 5 00:12:00.813122 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 00:12:00.813232 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 00:12:00.814633 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 00:12:00.831761 augenrules[1781]: No rules Sep 5 00:12:00.832217 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 00:12:00.832958 sudo[1759]: pam_unix(sudo:session): session closed for user root Sep 5 00:12:00.834183 sshd[1756]: pam_unix(sshd:session): session closed for user core Sep 5 00:12:00.849529 systemd[1]: sshd@8-147.28.180.221:22-139.178.89.65:44368.service: Deactivated successfully. Sep 5 00:12:00.851109 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 00:12:00.852502 systemd-logind[1516]: Session 10 logged out. Waiting for processes to exit. Sep 5 00:12:00.854117 systemd[1]: Started sshd@9-147.28.180.221:22-139.178.89.65:44382.service - OpenSSH per-connection server daemon (139.178.89.65:44382). Sep 5 00:12:00.855520 systemd-logind[1516]: Removed session 10. Sep 5 00:12:00.946325 sshd[1790]: Accepted publickey for core from 139.178.89.65 port 44382 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:12:00.947906 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:12:00.953065 systemd-logind[1516]: New session 11 of user core. Sep 5 00:12:00.971780 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 00:12:01.037623 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 00:12:01.038596 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:12:01.292564 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 00:12:01.292627 (dockerd)[1806]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 00:12:01.558848 dockerd[1806]: time="2024-09-05T00:12:01.558756680Z" level=info msg="Starting up" Sep 5 00:12:01.657265 dockerd[1806]: time="2024-09-05T00:12:01.657208833Z" level=info msg="Loading containers: start." 
Sep 5 00:12:01.738321 kernel: Initializing XFRM netlink socket Sep 5 00:12:01.801423 systemd-networkd[1329]: docker0: Link UP Sep 5 00:12:01.827184 dockerd[1806]: time="2024-09-05T00:12:01.827144945Z" level=info msg="Loading containers: done." Sep 5 00:12:01.834171 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3060070942-merged.mount: Deactivated successfully. Sep 5 00:12:01.834440 dockerd[1806]: time="2024-09-05T00:12:01.834395067Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 00:12:01.834475 dockerd[1806]: time="2024-09-05T00:12:01.834444953Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 00:12:01.834505 dockerd[1806]: time="2024-09-05T00:12:01.834494388Z" level=info msg="Daemon has completed initialization" Sep 5 00:12:01.849331 dockerd[1806]: time="2024-09-05T00:12:01.849252538Z" level=info msg="API listen on /run/docker.sock" Sep 5 00:12:01.849379 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 00:12:02.786163 containerd[1534]: time="2024-09-05T00:12:02.786116367Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\"" Sep 5 00:12:03.554361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1381148489.mount: Deactivated successfully. Sep 5 00:12:04.696669 containerd[1534]: time="2024-09-05T00:12:04.696616723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:04.696872 containerd[1534]: time="2024-09-05T00:12:04.696756562Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.8: active requests=0, bytes read=35232949" Sep 5 00:12:04.697283 containerd[1534]: time="2024-09-05T00:12:04.697238221Z" level=info msg="ImageCreate event name:\"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:04.698805 containerd[1534]: time="2024-09-05T00:12:04.698764189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:04.699426 containerd[1534]: time="2024-09-05T00:12:04.699385743Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.8\" with image id \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\", size \"35229749\" in 1.913246863s" Sep 5 00:12:04.699426 containerd[1534]: time="2024-09-05T00:12:04.699402160Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\" returns image reference \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\"" Sep 5 00:12:04.710171 containerd[1534]: time="2024-09-05T00:12:04.710150859Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\"" Sep 5 00:12:06.287619 containerd[1534]: time="2024-09-05T00:12:06.287522165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:06.287825 containerd[1534]: 
time="2024-09-05T00:12:06.287697026Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.8: active requests=0, bytes read=32206206" Sep 5 00:12:06.288088 containerd[1534]: time="2024-09-05T00:12:06.288068500Z" level=info msg="ImageCreate event name:\"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:06.290729 containerd[1534]: time="2024-09-05T00:12:06.290686868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:06.291301 containerd[1534]: time="2024-09-05T00:12:06.291237025Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.8\" with image id \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\", size \"33756152\" in 1.581063707s" Sep 5 00:12:06.291301 containerd[1534]: time="2024-09-05T00:12:06.291254343Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\" returns image reference \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\"" Sep 5 00:12:06.303243 containerd[1534]: time="2024-09-05T00:12:06.303193799Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\"" Sep 5 00:12:07.430598 containerd[1534]: time="2024-09-05T00:12:07.430573732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:07.430902 containerd[1534]: time="2024-09-05T00:12:07.430811488Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.8: active requests=0, bytes read=17321507" Sep 5 00:12:07.431263 containerd[1534]: time="2024-09-05T00:12:07.431245130Z" level=info msg="ImageCreate event name:\"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:07.432746 containerd[1534]: time="2024-09-05T00:12:07.432733226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:07.433378 containerd[1534]: time="2024-09-05T00:12:07.433341505Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.8\" with image id \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\", size \"18871471\" in 1.130126908s" Sep 5 00:12:07.433378 containerd[1534]: time="2024-09-05T00:12:07.433357537Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\" returns image reference \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\"" Sep 5 00:12:07.443965 containerd[1534]: time="2024-09-05T00:12:07.443946065Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\"" Sep 5 00:12:08.227197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3723624920.mount: Deactivated successfully. 
Sep 5 00:12:08.388655 containerd[1534]: time="2024-09-05T00:12:08.388600203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:08.388873 containerd[1534]: time="2024-09-05T00:12:08.388826199Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.8: active requests=0, bytes read=28600380" Sep 5 00:12:08.389182 containerd[1534]: time="2024-09-05T00:12:08.389141771Z" level=info msg="ImageCreate event name:\"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:08.390164 containerd[1534]: time="2024-09-05T00:12:08.390124267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:08.390443 containerd[1534]: time="2024-09-05T00:12:08.390403012Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.8\" with image id \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\", repo tag \"registry.k8s.io/kube-proxy:v1.29.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\", size \"28599399\" in 946.436087ms" Sep 5 00:12:08.390443 containerd[1534]: time="2024-09-05T00:12:08.390417483Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\" returns image reference \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\"" Sep 5 00:12:08.402000 containerd[1534]: time="2024-09-05T00:12:08.401977618Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Sep 5 00:12:08.908728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2286744834.mount: Deactivated successfully. 
Sep 5 00:12:09.413637 containerd[1534]: time="2024-09-05T00:12:09.413616509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:09.413846 containerd[1534]: time="2024-09-05T00:12:09.413825523Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Sep 5 00:12:09.414198 containerd[1534]: time="2024-09-05T00:12:09.414187447Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:09.415774 containerd[1534]: time="2024-09-05T00:12:09.415732785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:09.416445 containerd[1534]: time="2024-09-05T00:12:09.416401579Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.014400619s" Sep 5 00:12:09.416445 containerd[1534]: time="2024-09-05T00:12:09.416420714Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Sep 5 00:12:09.427495 containerd[1534]: time="2024-09-05T00:12:09.427470312Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 5 00:12:09.896901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1853500000.mount: Deactivated successfully. 
Sep 5 00:12:09.898621 containerd[1534]: time="2024-09-05T00:12:09.898574984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:09.898810 containerd[1534]: time="2024-09-05T00:12:09.898767911Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Sep 5 00:12:09.899211 containerd[1534]: time="2024-09-05T00:12:09.899173097Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:09.900238 containerd[1534]: time="2024-09-05T00:12:09.900187547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:09.900743 containerd[1534]: time="2024-09-05T00:12:09.900703092Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 473.207718ms" Sep 5 00:12:09.900743 containerd[1534]: time="2024-09-05T00:12:09.900718795Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 5 00:12:09.912392 containerd[1534]: time="2024-09-05T00:12:09.912340378Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 5 00:12:10.192483 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 5 00:12:10.206509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:12:10.414250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:12:10.416547 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:12:10.447602 kubelet[2181]: E0905 00:12:10.447493 2181 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:12:10.448968 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:12:10.449061 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:12:10.576292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4062704173.mount: Deactivated successfully. 
Sep 5 00:12:11.872536 containerd[1534]: time="2024-09-05T00:12:11.872506311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:11.872798 containerd[1534]: time="2024-09-05T00:12:11.872722762Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Sep 5 00:12:11.873113 containerd[1534]: time="2024-09-05T00:12:11.873101487Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:11.874727 containerd[1534]: time="2024-09-05T00:12:11.874707015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:11.875885 containerd[1534]: time="2024-09-05T00:12:11.875870828Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 1.963496015s" Sep 5 00:12:11.875919 containerd[1534]: time="2024-09-05T00:12:11.875887327Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Sep 5 00:12:13.441698 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:12:13.455623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:12:13.464367 systemd[1]: Reloading requested from client PID 2402 ('systemctl') (unit session-11.scope)... Sep 5 00:12:13.464373 systemd[1]: Reloading... Sep 5 00:12:13.507321 zram_generator::config[2439]: No configuration found. Sep 5 00:12:13.574650 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:12:13.643651 systemd[1]: Reloading finished in 179 ms. Sep 5 00:12:13.676447 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:12:13.678191 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:12:13.678716 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:12:13.678819 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:12:13.679719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:12:13.871239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:12:13.876196 (kubelet)[2506]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:12:13.899457 kubelet[2506]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:12:13.899457 kubelet[2506]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 5 00:12:13.899457 kubelet[2506]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:12:13.899670 kubelet[2506]: I0905 00:12:13.899460 2506 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:12:14.170781 kubelet[2506]: I0905 00:12:14.170692 2506 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 5 00:12:14.170781 kubelet[2506]: I0905 00:12:14.170709 2506 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:12:14.170870 kubelet[2506]: I0905 00:12:14.170857 2506 server.go:919] "Client rotation is on, will bootstrap in background" Sep 5 00:12:14.183103 kubelet[2506]: E0905 00:12:14.183074 2506 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.28.180.221:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.28.180.221:6443: connect: connection refused Sep 5 00:12:14.183670 kubelet[2506]: I0905 00:12:14.183639 2506 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:12:14.198121 kubelet[2506]: I0905 00:12:14.198078 2506 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 5 00:12:14.199340 kubelet[2506]: I0905 00:12:14.199262 2506 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:12:14.199376 kubelet[2506]: I0905 00:12:14.199371 2506 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 5 00:12:14.199431 kubelet[2506]: I0905 00:12:14.199385 2506 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:12:14.199431 kubelet[2506]: I0905 00:12:14.199391 2506 container_manager_linux.go:301] "Creating device plugin manager" Sep 5 
00:12:14.199468 kubelet[2506]: I0905 00:12:14.199452 2506 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:12:14.199544 kubelet[2506]: I0905 00:12:14.199504 2506 kubelet.go:396] "Attempting to sync node with API server" Sep 5 00:12:14.199544 kubelet[2506]: I0905 00:12:14.199512 2506 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:12:14.199544 kubelet[2506]: I0905 00:12:14.199527 2506 kubelet.go:312] "Adding apiserver pod source" Sep 5 00:12:14.199544 kubelet[2506]: I0905 00:12:14.199535 2506 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:12:14.200333 kubelet[2506]: I0905 00:12:14.200297 2506 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 5 00:12:14.200633 kubelet[2506]: W0905 00:12:14.200585 2506 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://147.28.180.221:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused Sep 5 00:12:14.200633 kubelet[2506]: E0905 00:12:14.200611 2506 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.28.180.221:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused Sep 5 00:12:14.202065 kubelet[2506]: W0905 00:12:14.202007 2506 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://147.28.180.221:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-b942d58550&limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused Sep 5 00:12:14.202065 kubelet[2506]: E0905 00:12:14.202036 2506 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.28.180.221:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-b942d58550&limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused Sep 5 00:12:14.202829 kubelet[2506]: I0905 00:12:14.202792 2506 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:12:14.203656 kubelet[2506]: W0905 00:12:14.203619 2506 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 5 00:12:14.203915 kubelet[2506]: I0905 00:12:14.203870 2506 server.go:1256] "Started kubelet" Sep 5 00:12:14.203915 kubelet[2506]: I0905 00:12:14.203905 2506 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:12:14.203915 kubelet[2506]: I0905 00:12:14.203910 2506 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:12:14.204165 kubelet[2506]: I0905 00:12:14.204129 2506 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:12:14.204752 kubelet[2506]: I0905 00:12:14.204701 2506 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:12:14.204793 kubelet[2506]: I0905 00:12:14.204772 2506 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 5 00:12:14.204826 kubelet[2506]: I0905 00:12:14.204818 2506 reconciler_new.go:29] "Reconciler: start to sync state" Sep 5 00:12:14.205012 kubelet[2506]: W0905 00:12:14.204975 2506 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://147.28.180.221:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused Sep 5 00:12:14.205064 kubelet[2506]: E0905 00:12:14.205018 2506 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.28.180.221:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused Sep 5 00:12:14.205112 kubelet[2506]: I0905 00:12:14.205100 2506 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 5 00:12:14.205236 kubelet[2506]: E0905 00:12:14.205227 2506 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.221:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-b942d58550?timeout=10s\": dial tcp 147.28.180.221:6443: connect: connection refused" interval="200ms" Sep 5 00:12:14.205314 kubelet[2506]: I0905 00:12:14.205305 2506 server.go:461] "Adding debug handlers to kubelet server" Sep 5 00:12:14.208511 kubelet[2506]: I0905 00:12:14.208475 2506 factory.go:221] Registration of the systemd container factory successfully Sep 5 00:12:14.208695 kubelet[2506]: I0905 00:12:14.208679 2506 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:12:14.209397 kubelet[2506]: I0905 00:12:14.209347 2506 factory.go:221] Registration of the containerd container factory successfully Sep 5 00:12:14.209567 kubelet[2506]: E0905 00:12:14.209464 2506 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:12:14.210809 kubelet[2506]: E0905 00:12:14.210798 2506 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.180.221:6443/api/v1/namespaces/default/events\": dial tcp 147.28.180.221:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4054.1.0-a-b942d58550.17f230aeb7e2d598 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4054.1.0-a-b942d58550,UID:ci-4054.1.0-a-b942d58550,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4054.1.0-a-b942d58550,},FirstTimestamp:2024-09-05 00:12:14.203860376 +0000 UTC m=+0.325132656,LastTimestamp:2024-09-05 00:12:14.203860376 +0000 UTC m=+0.325132656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4054.1.0-a-b942d58550,}" Sep 5 00:12:14.215920 kubelet[2506]: I0905 00:12:14.215908 2506 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 00:12:14.216453 kubelet[2506]: I0905 00:12:14.216446 2506 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 5 00:12:14.216493 kubelet[2506]: I0905 00:12:14.216475 2506 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 00:12:14.216493 kubelet[2506]: I0905 00:12:14.216485 2506 kubelet.go:2329] "Starting kubelet main sync loop" Sep 5 00:12:14.216528 kubelet[2506]: E0905 00:12:14.216509 2506 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:12:14.216768 kubelet[2506]: W0905 00:12:14.216745 2506 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://147.28.180.221:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused Sep 5 00:12:14.216792 kubelet[2506]: E0905 00:12:14.216776 2506 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.28.180.221:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused Sep 5 00:12:14.317724 kubelet[2506]: E0905 00:12:14.317606 2506 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:12:14.374212 kubelet[2506]: I0905 00:12:14.374114 2506 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054.1.0-a-b942d58550" Sep 5 00:12:14.375008 kubelet[2506]: E0905 00:12:14.374929 2506 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.221:6443/api/v1/nodes\": dial tcp 147.28.180.221:6443: connect: connection refused" node="ci-4054.1.0-a-b942d58550" Sep 5 00:12:14.375433 kubelet[2506]: I0905 00:12:14.375353 2506 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 00:12:14.375433 kubelet[2506]: I0905 00:12:14.375395 2506 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 00:12:14.375433 kubelet[2506]: I0905 00:12:14.375434 2506 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:12:14.377684 kubelet[2506]: I0905 00:12:14.377643 2506 policy_none.go:49] "None policy: 
Start" Sep 5 00:12:14.378064 kubelet[2506]: I0905 00:12:14.378024 2506 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 00:12:14.378064 kubelet[2506]: I0905 00:12:14.378039 2506 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:12:14.380456 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 00:12:14.390825 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 00:12:14.392653 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 00:12:14.404880 kubelet[2506]: I0905 00:12:14.404836 2506 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 00:12:14.405051 kubelet[2506]: I0905 00:12:14.405004 2506 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:12:14.405435 kubelet[2506]: E0905 00:12:14.405420 2506 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.221:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-b942d58550?timeout=10s\": dial tcp 147.28.180.221:6443: connect: connection refused" interval="400ms" Sep 5 00:12:14.405665 kubelet[2506]: E0905 00:12:14.405623 2506 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:14.518830 kubelet[2506]: I0905 00:12:14.518587 2506 topology_manager.go:215] "Topology Admit Handler" podUID="2ba2f6691a180c7e98ca09026ac3ea2d" podNamespace="kube-system" podName="kube-apiserver-ci-4054.1.0-a-b942d58550" Sep 5 00:12:14.522758 kubelet[2506]: I0905 00:12:14.522694 2506 topology_manager.go:215] "Topology Admit Handler" podUID="19c651e6df37a31f50fbaf49be3d8ea2" podNamespace="kube-system" podName="kube-controller-manager-ci-4054.1.0-a-b942d58550" Sep 5 00:12:14.526593 kubelet[2506]: I0905 00:12:14.526512 2506 topology_manager.go:215] "Topology Admit Handler" podUID="43fe3eade92be0eb5718c8d50800f51a" podNamespace="kube-system" podName="kube-scheduler-ci-4054.1.0-a-b942d58550" Sep 5 00:12:14.540350 systemd[1]: Created slice kubepods-burstable-pod2ba2f6691a180c7e98ca09026ac3ea2d.slice - libcontainer container kubepods-burstable-pod2ba2f6691a180c7e98ca09026ac3ea2d.slice. Sep 5 00:12:14.560064 systemd[1]: Created slice kubepods-burstable-pod19c651e6df37a31f50fbaf49be3d8ea2.slice - libcontainer container kubepods-burstable-pod19c651e6df37a31f50fbaf49be3d8ea2.slice. Sep 5 00:12:14.579999 kubelet[2506]: I0905 00:12:14.579939 2506 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054.1.0-a-b942d58550" Sep 5 00:12:14.580712 kubelet[2506]: E0905 00:12:14.580661 2506 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.221:6443/api/v1/nodes\": dial tcp 147.28.180.221:6443: connect: connection refused" node="ci-4054.1.0-a-b942d58550" Sep 5 00:12:14.585790 systemd[1]: Created slice kubepods-burstable-pod43fe3eade92be0eb5718c8d50800f51a.slice - libcontainer container kubepods-burstable-pod43fe3eade92be0eb5718c8d50800f51a.slice. 
Sep 5 00:12:14.706671 kubelet[2506]: I0905 00:12:14.706561 2506 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ba2f6691a180c7e98ca09026ac3ea2d-k8s-certs\") pod \"kube-apiserver-ci-4054.1.0-a-b942d58550\" (UID: \"2ba2f6691a180c7e98ca09026ac3ea2d\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-b942d58550" Sep 5 00:12:14.706671 kubelet[2506]: I0905 00:12:14.706674 2506 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19c651e6df37a31f50fbaf49be3d8ea2-ca-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-b942d58550\" (UID: \"19c651e6df37a31f50fbaf49be3d8ea2\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-b942d58550" Sep 5 00:12:14.707020 kubelet[2506]: I0905 00:12:14.706831 2506 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19c651e6df37a31f50fbaf49be3d8ea2-k8s-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-b942d58550\" (UID: \"19c651e6df37a31f50fbaf49be3d8ea2\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-b942d58550" Sep 5 00:12:14.707020 kubelet[2506]: I0905 00:12:14.706987 2506 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19c651e6df37a31f50fbaf49be3d8ea2-kubeconfig\") pod \"kube-controller-manager-ci-4054.1.0-a-b942d58550\" (UID: \"19c651e6df37a31f50fbaf49be3d8ea2\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-b942d58550" Sep 5 00:12:14.707225 kubelet[2506]: I0905 00:12:14.707124 2506 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ba2f6691a180c7e98ca09026ac3ea2d-ca-certs\") pod \"kube-apiserver-ci-4054.1.0-a-b942d58550\" (UID: \"2ba2f6691a180c7e98ca09026ac3ea2d\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-b942d58550" Sep 5 00:12:14.707225 kubelet[2506]: I0905 00:12:14.707200 2506 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ba2f6691a180c7e98ca09026ac3ea2d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4054.1.0-a-b942d58550\" (UID: \"2ba2f6691a180c7e98ca09026ac3ea2d\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-b942d58550" Sep 5 00:12:14.707473 kubelet[2506]: I0905 00:12:14.707295 2506 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19c651e6df37a31f50fbaf49be3d8ea2-flexvolume-dir\") pod \"kube-controller-manager-ci-4054.1.0-a-b942d58550\" (UID: \"19c651e6df37a31f50fbaf49be3d8ea2\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-b942d58550" Sep 5 00:12:14.707473 kubelet[2506]: I0905 00:12:14.707365 2506 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19c651e6df37a31f50fbaf49be3d8ea2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4054.1.0-a-b942d58550\" (UID: \"19c651e6df37a31f50fbaf49be3d8ea2\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-b942d58550" Sep 5 00:12:14.707666 kubelet[2506]: I0905 00:12:14.707500 2506 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43fe3eade92be0eb5718c8d50800f51a-kubeconfig\") pod \"kube-scheduler-ci-4054.1.0-a-b942d58550\" (UID: \"43fe3eade92be0eb5718c8d50800f51a\") " pod="kube-system/kube-scheduler-ci-4054.1.0-a-b942d58550" Sep 5 00:12:14.807238 kubelet[2506]: E0905 00:12:14.807014 2506 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.221:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-b942d58550?timeout=10s\": dial tcp 147.28.180.221:6443: connect: connection refused" interval="800ms" Sep 5 00:12:14.857168 containerd[1534]: time="2024-09-05T00:12:14.857039766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4054.1.0-a-b942d58550,Uid:2ba2f6691a180c7e98ca09026ac3ea2d,Namespace:kube-system,Attempt:0,}" Sep 5 00:12:14.880304 containerd[1534]: time="2024-09-05T00:12:14.880171784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4054.1.0-a-b942d58550,Uid:19c651e6df37a31f50fbaf49be3d8ea2,Namespace:kube-system,Attempt:0,}" Sep 5 00:12:14.890635 containerd[1534]: time="2024-09-05T00:12:14.890587435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4054.1.0-a-b942d58550,Uid:43fe3eade92be0eb5718c8d50800f51a,Namespace:kube-system,Attempt:0,}" Sep 5 00:12:14.983122 kubelet[2506]: I0905 00:12:14.983102 2506 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054.1.0-a-b942d58550" Sep 5 00:12:14.983373 kubelet[2506]: E0905 00:12:14.983361 2506 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.221:6443/api/v1/nodes\": dial tcp 147.28.180.221:6443: connect: connection refused" node="ci-4054.1.0-a-b942d58550" Sep 5 00:12:15.371381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount825764039.mount: Deactivated successfully. 
Sep 5 00:12:15.372589 containerd[1534]: time="2024-09-05T00:12:15.372515308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:12:15.372719 containerd[1534]: time="2024-09-05T00:12:15.372697348Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 5 00:12:15.373742 containerd[1534]: time="2024-09-05T00:12:15.373713128Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:12:15.374724 containerd[1534]: time="2024-09-05T00:12:15.374702541Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:12:15.374724 containerd[1534]: time="2024-09-05T00:12:15.374714418Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:12:15.374960 containerd[1534]: time="2024-09-05T00:12:15.374944066Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:12:15.375188 containerd[1534]: time="2024-09-05T00:12:15.375177165Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:12:15.377200 containerd[1534]: time="2024-09-05T00:12:15.377184697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:12:15.377696 containerd[1534]: time="2024-09-05T00:12:15.377651015Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 520.461101ms" Sep 5 00:12:15.378044 containerd[1534]: time="2024-09-05T00:12:15.378031126Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 497.689496ms" Sep 5 00:12:15.379388 containerd[1534]: time="2024-09-05T00:12:15.379346132Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 488.733206ms" Sep 5 00:12:15.473523 containerd[1534]: time="2024-09-05T00:12:15.473432293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:12:15.473523 containerd[1534]: time="2024-09-05T00:12:15.473467181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:12:15.473523 containerd[1534]: time="2024-09-05T00:12:15.473475267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:15.473693 containerd[1534]: time="2024-09-05T00:12:15.473525693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:15.473693 containerd[1534]: time="2024-09-05T00:12:15.473515050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:12:15.473693 containerd[1534]: time="2024-09-05T00:12:15.473311135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:12:15.473693 containerd[1534]: time="2024-09-05T00:12:15.473546126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:12:15.473693 containerd[1534]: time="2024-09-05T00:12:15.473558808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:15.473693 containerd[1534]: time="2024-09-05T00:12:15.473565756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:12:15.473693 containerd[1534]: time="2024-09-05T00:12:15.473576789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:15.473693 containerd[1534]: time="2024-09-05T00:12:15.473607598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:15.473693 containerd[1534]: time="2024-09-05T00:12:15.473620831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:15.494581 systemd[1]: Started cri-containerd-372f00fc6b5865aa35c3a8c29602ba917d5f939b67767b211b3b32aa12d8edc4.scope - libcontainer container 372f00fc6b5865aa35c3a8c29602ba917d5f939b67767b211b3b32aa12d8edc4. Sep 5 00:12:15.495486 systemd[1]: Started cri-containerd-42b60bee9f7f23b0e84990ade76a62cd3f92bd02c283989a3423fa338df76175.scope - libcontainer container 42b60bee9f7f23b0e84990ade76a62cd3f92bd02c283989a3423fa338df76175. Sep 5 00:12:15.496229 systemd[1]: Started cri-containerd-f6b552e2a47ef8b30112aeb9040b1efd3f32882420d188e3432fb96b8aac0e30.scope - libcontainer container f6b552e2a47ef8b30112aeb9040b1efd3f32882420d188e3432fb96b8aac0e30. 
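
For each sandbox above, containerd reports the registry.k8s.io/pause:3.8 pull as one structured line carrying the image id, repo tag, repo digest, size, and elapsed time. A short sketch that extracts those fields from such a line; the sample is the message portion of one of the lines above (journald escapes the inner quotes, so they are unescaped first).

import re

sample = (
    'msg="Pulled image \\"registry.k8s.io/pause:3.8\\" with image id '
    '\\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\\", '
    'repo tag \\"registry.k8s.io/pause:3.8\\", repo digest '
    '\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\", '
    'size \\"311286\\" in 520.461101ms"'
)

line = sample.replace('\\"', '"')   # undo the journald escaping of the inner quotes
m = re.search(
    r'Pulled image "([^"]+)" with image id "([^"]+)", repo tag "[^"]+", '
    r'repo digest "([^"]+)", size "(\d+)" in ([\d.]+(?:ms|s))',
    line,
)
if m:
    ref, image_id, digest, size, elapsed = m.groups()
    print(f"{ref}: {int(size)} bytes pulled in {elapsed}, digest {digest}")
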
Sep 5 00:12:15.521103 containerd[1534]: time="2024-09-05T00:12:15.521067368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4054.1.0-a-b942d58550,Uid:19c651e6df37a31f50fbaf49be3d8ea2,Namespace:kube-system,Attempt:0,} returns sandbox id \"42b60bee9f7f23b0e84990ade76a62cd3f92bd02c283989a3423fa338df76175\"" Sep 5 00:12:15.521355 containerd[1534]: time="2024-09-05T00:12:15.521339452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4054.1.0-a-b942d58550,Uid:43fe3eade92be0eb5718c8d50800f51a,Namespace:kube-system,Attempt:0,} returns sandbox id \"372f00fc6b5865aa35c3a8c29602ba917d5f939b67767b211b3b32aa12d8edc4\"" Sep 5 00:12:15.523214 containerd[1534]: time="2024-09-05T00:12:15.523199837Z" level=info msg="CreateContainer within sandbox \"42b60bee9f7f23b0e84990ade76a62cd3f92bd02c283989a3423fa338df76175\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 00:12:15.523262 containerd[1534]: time="2024-09-05T00:12:15.523198466Z" level=info msg="CreateContainer within sandbox \"372f00fc6b5865aa35c3a8c29602ba917d5f939b67767b211b3b32aa12d8edc4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 00:12:15.524150 containerd[1534]: time="2024-09-05T00:12:15.524120426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4054.1.0-a-b942d58550,Uid:2ba2f6691a180c7e98ca09026ac3ea2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6b552e2a47ef8b30112aeb9040b1efd3f32882420d188e3432fb96b8aac0e30\"" Sep 5 00:12:15.525640 containerd[1534]: time="2024-09-05T00:12:15.525627790Z" level=info msg="CreateContainer within sandbox \"f6b552e2a47ef8b30112aeb9040b1efd3f32882420d188e3432fb96b8aac0e30\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 00:12:15.529988 containerd[1534]: time="2024-09-05T00:12:15.529949973Z" level=info msg="CreateContainer within sandbox \"372f00fc6b5865aa35c3a8c29602ba917d5f939b67767b211b3b32aa12d8edc4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"530f50b7657ac375bdc16b101f7ded083224219e7184f77da190ce14bf2585cc\"" Sep 5 00:12:15.530194 containerd[1534]: time="2024-09-05T00:12:15.530182928Z" level=info msg="StartContainer for \"530f50b7657ac375bdc16b101f7ded083224219e7184f77da190ce14bf2585cc\"" Sep 5 00:12:15.530919 containerd[1534]: time="2024-09-05T00:12:15.530877767Z" level=info msg="CreateContainer within sandbox \"42b60bee9f7f23b0e84990ade76a62cd3f92bd02c283989a3423fa338df76175\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"828f3a23570230472ac57164cb2cba1f32358b9d73d47ed0205c1759fd2b9bc3\"" Sep 5 00:12:15.531080 containerd[1534]: time="2024-09-05T00:12:15.531047307Z" level=info msg="StartContainer for \"828f3a23570230472ac57164cb2cba1f32358b9d73d47ed0205c1759fd2b9bc3\"" Sep 5 00:12:15.532088 containerd[1534]: time="2024-09-05T00:12:15.532074474Z" level=info msg="CreateContainer within sandbox \"f6b552e2a47ef8b30112aeb9040b1efd3f32882420d188e3432fb96b8aac0e30\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2bd16c27980bfd2630e2e1f61ca685989c11620a200797a66986c1f1e85d95c1\"" Sep 5 00:12:15.532280 containerd[1534]: time="2024-09-05T00:12:15.532267194Z" level=info msg="StartContainer for \"2bd16c27980bfd2630e2e1f61ca685989c11620a200797a66986c1f1e85d95c1\"" Sep 5 00:12:15.558582 systemd[1]: Started cri-containerd-2bd16c27980bfd2630e2e1f61ca685989c11620a200797a66986c1f1e85d95c1.scope - libcontainer container 
2bd16c27980bfd2630e2e1f61ca685989c11620a200797a66986c1f1e85d95c1. Sep 5 00:12:15.559134 systemd[1]: Started cri-containerd-530f50b7657ac375bdc16b101f7ded083224219e7184f77da190ce14bf2585cc.scope - libcontainer container 530f50b7657ac375bdc16b101f7ded083224219e7184f77da190ce14bf2585cc. Sep 5 00:12:15.559666 systemd[1]: Started cri-containerd-828f3a23570230472ac57164cb2cba1f32358b9d73d47ed0205c1759fd2b9bc3.scope - libcontainer container 828f3a23570230472ac57164cb2cba1f32358b9d73d47ed0205c1759fd2b9bc3. Sep 5 00:12:15.568402 kubelet[2506]: W0905 00:12:15.568342 2506 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://147.28.180.221:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused Sep 5 00:12:15.568402 kubelet[2506]: E0905 00:12:15.568378 2506 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.28.180.221:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused Sep 5 00:12:15.582911 containerd[1534]: time="2024-09-05T00:12:15.582887507Z" level=info msg="StartContainer for \"828f3a23570230472ac57164cb2cba1f32358b9d73d47ed0205c1759fd2b9bc3\" returns successfully" Sep 5 00:12:15.582988 containerd[1534]: time="2024-09-05T00:12:15.582911933Z" level=info msg="StartContainer for \"530f50b7657ac375bdc16b101f7ded083224219e7184f77da190ce14bf2585cc\" returns successfully" Sep 5 00:12:15.582988 containerd[1534]: time="2024-09-05T00:12:15.582940461Z" level=info msg="StartContainer for \"2bd16c27980bfd2630e2e1f61ca685989c11620a200797a66986c1f1e85d95c1\" returns successfully" Sep 5 00:12:15.603517 kubelet[2506]: W0905 00:12:15.603449 2506 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://147.28.180.221:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-b942d58550&limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused Sep 5 00:12:15.603517 kubelet[2506]: E0905 00:12:15.603493 2506 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.28.180.221:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-b942d58550&limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused Sep 5 00:12:15.608015 kubelet[2506]: E0905 00:12:15.607999 2506 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.221:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-b942d58550?timeout=10s\": dial tcp 147.28.180.221:6443: connect: connection refused" interval="1.6s" Sep 5 00:12:15.784614 kubelet[2506]: I0905 00:12:15.784560 2506 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054.1.0-a-b942d58550" Sep 5 00:12:16.250812 kubelet[2506]: I0905 00:12:16.250741 2506 kubelet_node_status.go:76] "Successfully registered node" node="ci-4054.1.0-a-b942d58550" Sep 5 00:12:16.254749 kubelet[2506]: E0905 00:12:16.254736 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:16.355773 kubelet[2506]: E0905 00:12:16.355726 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:16.456862 
kubelet[2506]: E0905 00:12:16.456734 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:16.558195 kubelet[2506]: E0905 00:12:16.557952 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:16.659160 kubelet[2506]: E0905 00:12:16.659078 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:16.759409 kubelet[2506]: E0905 00:12:16.759329 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:16.860778 kubelet[2506]: E0905 00:12:16.860550 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:16.960870 kubelet[2506]: E0905 00:12:16.960766 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:17.061301 kubelet[2506]: E0905 00:12:17.061152 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:17.161763 kubelet[2506]: E0905 00:12:17.161586 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:17.262885 kubelet[2506]: E0905 00:12:17.262798 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:17.363577 kubelet[2506]: E0905 00:12:17.363532 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:17.464179 kubelet[2506]: E0905 00:12:17.464085 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:17.565432 kubelet[2506]: E0905 00:12:17.565319 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:17.666571 kubelet[2506]: E0905 00:12:17.666469 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:17.767607 kubelet[2506]: E0905 00:12:17.767399 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:17.868475 kubelet[2506]: E0905 00:12:17.868389 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:17.969165 kubelet[2506]: E0905 00:12:17.969044 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:18.069813 kubelet[2506]: E0905 00:12:18.069580 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:18.169920 kubelet[2506]: E0905 00:12:18.169809 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:18.270609 kubelet[2506]: E0905 00:12:18.270499 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:18.663764 
kubelet[2506]: W0905 00:12:18.663693 2506 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 00:12:19.025643 systemd[1]: Reloading requested from client PID 2821 ('systemctl') (unit session-11.scope)... Sep 5 00:12:19.025676 systemd[1]: Reloading... Sep 5 00:12:19.083321 zram_generator::config[2858]: No configuration found. Sep 5 00:12:19.144780 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:12:19.202653 kubelet[2506]: I0905 00:12:19.202633 2506 apiserver.go:52] "Watching apiserver" Sep 5 00:12:19.205968 kubelet[2506]: I0905 00:12:19.205929 2506 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 5 00:12:19.213404 systemd[1]: Reloading finished in 186 ms. Sep 5 00:12:19.269791 kubelet[2506]: I0905 00:12:19.269742 2506 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:12:19.269802 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:12:19.279910 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:12:19.280036 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:12:19.291068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:12:19.550889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:12:19.557069 (kubelet)[2919]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:12:19.581275 kubelet[2919]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:12:19.581275 kubelet[2919]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 00:12:19.581275 kubelet[2919]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:12:19.581489 kubelet[2919]: I0905 00:12:19.581282 2919 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:12:19.583703 kubelet[2919]: I0905 00:12:19.583691 2919 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 5 00:12:19.583734 kubelet[2919]: I0905 00:12:19.583705 2919 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:12:19.583823 kubelet[2919]: I0905 00:12:19.583818 2919 server.go:919] "Client rotation is on, will bootstrap in background" Sep 5 00:12:19.584697 kubelet[2919]: I0905 00:12:19.584659 2919 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
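
At this point the restarted kubelet reports that client rotation is on and reloads its credential from /var/lib/kubelet/pki/kubelet-client-current.pem, which bundles the rotated client certificate and key in one PEM file. A quick way to see what that certificate currently says (subject and validity window), sketched here with the third-party `cryptography` package, which is an assumption; `openssl x509 -in <that file> -noout -subject -dates` gives the equivalent answer.

# Sketch: print subject and validity of the kubelet's rotated client certificate.
# The path is the one the kubelet logs; the `cryptography` package is assumed installed.
from cryptography import x509

PEM = "/var/lib/kubelet/pki/kubelet-client-current.pem"

with open(PEM) as f:
    text = f.read()

# The file bundles certificate and private key; cut out just the certificate block.
begin = text.index("-----BEGIN CERTIFICATE-----")
end = text.index("-----END CERTIFICATE-----") + len("-----END CERTIFICATE-----")
cert = x509.load_pem_x509_certificate(text[begin:end].encode())

print("subject:   ", cert.subject.rfc4514_string())
print("not before:", cert.not_valid_before)
print("not after: ", cert.not_valid_after)
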
Sep 5 00:12:19.586365 kubelet[2919]: I0905 00:12:19.586352 2919 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:12:19.594703 kubelet[2919]: I0905 00:12:19.594666 2919 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 5 00:12:19.594810 kubelet[2919]: I0905 00:12:19.594776 2919 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:12:19.594905 kubelet[2919]: I0905 00:12:19.594869 2919 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 5 00:12:19.594905 kubelet[2919]: I0905 00:12:19.594883 2919 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:12:19.594905 kubelet[2919]: I0905 00:12:19.594889 2919 container_manager_linux.go:301] "Creating device plugin manager" Sep 5 00:12:19.594905 kubelet[2919]: I0905 00:12:19.594904 2919 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:12:19.595012 kubelet[2919]: I0905 00:12:19.594951 2919 kubelet.go:396] "Attempting to sync node with API server" Sep 5 00:12:19.595012 kubelet[2919]: I0905 00:12:19.594959 2919 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:12:19.595012 kubelet[2919]: I0905 00:12:19.594972 2919 kubelet.go:312] "Adding apiserver pod source" Sep 5 00:12:19.595012 kubelet[2919]: I0905 00:12:19.594978 2919 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:12:19.595501 kubelet[2919]: I0905 00:12:19.595457 2919 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 5 00:12:19.595651 kubelet[2919]: I0905 00:12:19.595618 2919 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:12:19.596386 kubelet[2919]: I0905 00:12:19.596162 2919 server.go:1256] "Started kubelet" Sep 5 00:12:19.596386 kubelet[2919]: I0905 00:12:19.596359 2919 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:12:19.596954 kubelet[2919]: I0905 00:12:19.596371 2919 ratelimit.go:55] 
"Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:12:19.597146 kubelet[2919]: I0905 00:12:19.597135 2919 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:12:19.597646 kubelet[2919]: I0905 00:12:19.597636 2919 server.go:461] "Adding debug handlers to kubelet server" Sep 5 00:12:19.597741 kubelet[2919]: I0905 00:12:19.597637 2919 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:12:19.597785 kubelet[2919]: I0905 00:12:19.597766 2919 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 5 00:12:19.597785 kubelet[2919]: E0905 00:12:19.597774 2919 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-b942d58550\" not found" Sep 5 00:12:19.597884 kubelet[2919]: E0905 00:12:19.597839 2919 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:12:19.597884 kubelet[2919]: I0905 00:12:19.597848 2919 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 5 00:12:19.597973 kubelet[2919]: I0905 00:12:19.597966 2919 reconciler_new.go:29] "Reconciler: start to sync state" Sep 5 00:12:19.598345 kubelet[2919]: I0905 00:12:19.598335 2919 factory.go:221] Registration of the systemd container factory successfully Sep 5 00:12:19.598393 kubelet[2919]: I0905 00:12:19.598377 2919 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:12:19.598957 kubelet[2919]: I0905 00:12:19.598948 2919 factory.go:221] Registration of the containerd container factory successfully Sep 5 00:12:19.602882 kubelet[2919]: I0905 00:12:19.602867 2919 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 00:12:19.603381 kubelet[2919]: I0905 00:12:19.603372 2919 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 00:12:19.603416 kubelet[2919]: I0905 00:12:19.603386 2919 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 00:12:19.603416 kubelet[2919]: I0905 00:12:19.603395 2919 kubelet.go:2329] "Starting kubelet main sync loop" Sep 5 00:12:19.603454 kubelet[2919]: E0905 00:12:19.603418 2919 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:12:19.614526 kubelet[2919]: I0905 00:12:19.614481 2919 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 00:12:19.614526 kubelet[2919]: I0905 00:12:19.614495 2919 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 00:12:19.614526 kubelet[2919]: I0905 00:12:19.614504 2919 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:12:19.614630 kubelet[2919]: I0905 00:12:19.614590 2919 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 00:12:19.614630 kubelet[2919]: I0905 00:12:19.614603 2919 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 00:12:19.614630 kubelet[2919]: I0905 00:12:19.614607 2919 policy_none.go:49] "None policy: Start" Sep 5 00:12:19.614872 kubelet[2919]: I0905 00:12:19.614836 2919 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 00:12:19.614872 kubelet[2919]: I0905 00:12:19.614847 2919 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:12:19.614985 kubelet[2919]: I0905 00:12:19.614939 2919 state_mem.go:75] "Updated machine memory state" Sep 5 00:12:19.616890 kubelet[2919]: I0905 00:12:19.616856 2919 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 00:12:19.616997 kubelet[2919]: I0905 00:12:19.616990 2919 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:12:19.700965 kubelet[2919]: I0905 00:12:19.700950 2919 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054.1.0-a-b942d58550" Sep 5 00:12:19.703812 kubelet[2919]: I0905 00:12:19.703775 2919 topology_manager.go:215] "Topology Admit Handler" podUID="2ba2f6691a180c7e98ca09026ac3ea2d" podNamespace="kube-system" podName="kube-apiserver-ci-4054.1.0-a-b942d58550" Sep 5 00:12:19.703812 kubelet[2919]: I0905 00:12:19.703814 2919 topology_manager.go:215] "Topology Admit Handler" podUID="19c651e6df37a31f50fbaf49be3d8ea2" podNamespace="kube-system" podName="kube-controller-manager-ci-4054.1.0-a-b942d58550" Sep 5 00:12:19.703883 kubelet[2919]: I0905 00:12:19.703834 2919 topology_manager.go:215] "Topology Admit Handler" podUID="43fe3eade92be0eb5718c8d50800f51a" podNamespace="kube-system" podName="kube-scheduler-ci-4054.1.0-a-b942d58550" Sep 5 00:12:19.704924 kubelet[2919]: I0905 00:12:19.704913 2919 kubelet_node_status.go:112] "Node was previously registered" node="ci-4054.1.0-a-b942d58550" Sep 5 00:12:19.704962 kubelet[2919]: I0905 00:12:19.704956 2919 kubelet_node_status.go:76] "Successfully registered node" node="ci-4054.1.0-a-b942d58550" Sep 5 00:12:19.705735 kubelet[2919]: W0905 00:12:19.705724 2919 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 00:12:19.705792 kubelet[2919]: E0905 00:12:19.705772 2919 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4054.1.0-a-b942d58550\" already exists" pod="kube-system/kube-controller-manager-ci-4054.1.0-a-b942d58550" Sep 5 00:12:19.705792 
kubelet[2919]: W0905 00:12:19.705783 2919 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 00:12:19.705859 kubelet[2919]: W0905 00:12:19.705806 2919 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 00:12:19.798867 kubelet[2919]: I0905 00:12:19.798848 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19c651e6df37a31f50fbaf49be3d8ea2-flexvolume-dir\") pod \"kube-controller-manager-ci-4054.1.0-a-b942d58550\" (UID: \"19c651e6df37a31f50fbaf49be3d8ea2\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-b942d58550" Sep 5 00:12:19.798964 kubelet[2919]: I0905 00:12:19.798879 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19c651e6df37a31f50fbaf49be3d8ea2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4054.1.0-a-b942d58550\" (UID: \"19c651e6df37a31f50fbaf49be3d8ea2\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-b942d58550" Sep 5 00:12:19.798964 kubelet[2919]: I0905 00:12:19.798898 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ba2f6691a180c7e98ca09026ac3ea2d-ca-certs\") pod \"kube-apiserver-ci-4054.1.0-a-b942d58550\" (UID: \"2ba2f6691a180c7e98ca09026ac3ea2d\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-b942d58550" Sep 5 00:12:19.798964 kubelet[2919]: I0905 00:12:19.798921 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ba2f6691a180c7e98ca09026ac3ea2d-k8s-certs\") pod \"kube-apiserver-ci-4054.1.0-a-b942d58550\" (UID: \"2ba2f6691a180c7e98ca09026ac3ea2d\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-b942d58550" Sep 5 00:12:19.798964 kubelet[2919]: I0905 00:12:19.798943 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19c651e6df37a31f50fbaf49be3d8ea2-k8s-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-b942d58550\" (UID: \"19c651e6df37a31f50fbaf49be3d8ea2\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-b942d58550" Sep 5 00:12:19.798964 kubelet[2919]: I0905 00:12:19.798961 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19c651e6df37a31f50fbaf49be3d8ea2-kubeconfig\") pod \"kube-controller-manager-ci-4054.1.0-a-b942d58550\" (UID: \"19c651e6df37a31f50fbaf49be3d8ea2\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-b942d58550" Sep 5 00:12:19.799104 kubelet[2919]: I0905 00:12:19.798995 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43fe3eade92be0eb5718c8d50800f51a-kubeconfig\") pod \"kube-scheduler-ci-4054.1.0-a-b942d58550\" (UID: \"43fe3eade92be0eb5718c8d50800f51a\") " pod="kube-system/kube-scheduler-ci-4054.1.0-a-b942d58550" Sep 5 00:12:19.799104 kubelet[2919]: I0905 00:12:19.799018 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ba2f6691a180c7e98ca09026ac3ea2d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4054.1.0-a-b942d58550\" (UID: \"2ba2f6691a180c7e98ca09026ac3ea2d\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-b942d58550" Sep 5 00:12:19.799104 kubelet[2919]: I0905 00:12:19.799039 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19c651e6df37a31f50fbaf49be3d8ea2-ca-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-b942d58550\" (UID: \"19c651e6df37a31f50fbaf49be3d8ea2\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-b942d58550" Sep 5 00:12:20.595534 kubelet[2919]: I0905 00:12:20.595511 2919 apiserver.go:52] "Watching apiserver" Sep 5 00:12:20.598049 kubelet[2919]: I0905 00:12:20.598007 2919 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 5 00:12:20.609240 kubelet[2919]: W0905 00:12:20.609224 2919 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 00:12:20.609333 kubelet[2919]: E0905 00:12:20.609268 2919 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4054.1.0-a-b942d58550\" already exists" pod="kube-system/kube-scheduler-ci-4054.1.0-a-b942d58550" Sep 5 00:12:20.609892 kubelet[2919]: W0905 00:12:20.609881 2919 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 00:12:20.609945 kubelet[2919]: W0905 00:12:20.609910 2919 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 00:12:20.609945 kubelet[2919]: E0905 00:12:20.609916 2919 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4054.1.0-a-b942d58550\" already exists" pod="kube-system/kube-apiserver-ci-4054.1.0-a-b942d58550" Sep 5 00:12:20.609945 kubelet[2919]: E0905 00:12:20.609933 2919 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4054.1.0-a-b942d58550\" already exists" pod="kube-system/kube-controller-manager-ci-4054.1.0-a-b942d58550" Sep 5 00:12:20.617081 kubelet[2919]: I0905 00:12:20.617040 2919 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4054.1.0-a-b942d58550" podStartSLOduration=2.617001923 podStartE2EDuration="2.617001923s" podCreationTimestamp="2024-09-05 00:12:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-05 00:12:20.616894584 +0000 UTC m=+1.057726776" watchObservedRunningTime="2024-09-05 00:12:20.617001923 +0000 UTC m=+1.057834111" Sep 5 00:12:20.625817 kubelet[2919]: I0905 00:12:20.625784 2919 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4054.1.0-a-b942d58550" podStartSLOduration=1.625745444 podStartE2EDuration="1.625745444s" podCreationTimestamp="2024-09-05 00:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-05 00:12:20.625543484 +0000 UTC m=+1.066375681" watchObservedRunningTime="2024-09-05 00:12:20.625745444 +0000 UTC 
m=+1.066577632" Sep 5 00:12:20.631524 kubelet[2919]: I0905 00:12:20.631502 2919 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4054.1.0-a-b942d58550" podStartSLOduration=1.631472133 podStartE2EDuration="1.631472133s" podCreationTimestamp="2024-09-05 00:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-05 00:12:20.631391214 +0000 UTC m=+1.072223405" watchObservedRunningTime="2024-09-05 00:12:20.631472133 +0000 UTC m=+1.072304325" Sep 5 00:12:23.252933 sudo[1793]: pam_unix(sudo:session): session closed for user root Sep 5 00:12:23.253822 sshd[1790]: pam_unix(sshd:session): session closed for user core Sep 5 00:12:23.255347 systemd[1]: sshd@9-147.28.180.221:22-139.178.89.65:44382.service: Deactivated successfully. Sep 5 00:12:23.256223 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 00:12:23.256328 systemd[1]: session-11.scope: Consumed 3.162s CPU time, 150.2M memory peak, 0B memory swap peak. Sep 5 00:12:23.257043 systemd-logind[1516]: Session 11 logged out. Waiting for processes to exit. Sep 5 00:12:23.257642 systemd-logind[1516]: Removed session 11. Sep 5 00:12:32.189742 update_engine[1521]: I0905 00:12:32.189611 1521 update_attempter.cc:509] Updating boot flags... Sep 5 00:12:32.228297 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (3091) Sep 5 00:12:32.255272 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (3087) Sep 5 00:12:32.969293 kubelet[2919]: I0905 00:12:32.969207 2919 topology_manager.go:215] "Topology Admit Handler" podUID="8c808708-c230-4c1c-811a-9f8d11915e7d" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-vgpkt" Sep 5 00:12:32.982002 systemd[1]: Created slice kubepods-besteffort-pod8c808708_c230_4c1c_811a_9f8d11915e7d.slice - libcontainer container kubepods-besteffort-pod8c808708_c230_4c1c_811a_9f8d11915e7d.slice. Sep 5 00:12:32.998604 kubelet[2919]: I0905 00:12:32.998507 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8c808708-c230-4c1c-811a-9f8d11915e7d-var-lib-calico\") pod \"tigera-operator-5d56685c77-vgpkt\" (UID: \"8c808708-c230-4c1c-811a-9f8d11915e7d\") " pod="tigera-operator/tigera-operator-5d56685c77-vgpkt" Sep 5 00:12:32.998824 kubelet[2919]: I0905 00:12:32.998694 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8rkd\" (UniqueName: \"kubernetes.io/projected/8c808708-c230-4c1c-811a-9f8d11915e7d-kube-api-access-h8rkd\") pod \"tigera-operator-5d56685c77-vgpkt\" (UID: \"8c808708-c230-4c1c-811a-9f8d11915e7d\") " pod="tigera-operator/tigera-operator-5d56685c77-vgpkt" Sep 5 00:12:33.044783 kubelet[2919]: I0905 00:12:33.044724 2919 topology_manager.go:215] "Topology Admit Handler" podUID="e85cfd46-c493-47c6-8b17-87607f8f8fda" podNamespace="kube-system" podName="kube-proxy-xsk49" Sep 5 00:12:33.057611 systemd[1]: Created slice kubepods-besteffort-pode85cfd46_c493_47c6_8b17_87607f8f8fda.slice - libcontainer container kubepods-besteffort-pode85cfd46_c493_47c6_8b17_87607f8f8fda.slice. 
Sep 5 00:12:33.088026 kubelet[2919]: I0905 00:12:33.087930 2919 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 00:12:33.088666 containerd[1534]: time="2024-09-05T00:12:33.088557022Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 00:12:33.089414 kubelet[2919]: I0905 00:12:33.088981 2919 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 00:12:33.099619 kubelet[2919]: I0905 00:12:33.099536 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e85cfd46-c493-47c6-8b17-87607f8f8fda-xtables-lock\") pod \"kube-proxy-xsk49\" (UID: \"e85cfd46-c493-47c6-8b17-87607f8f8fda\") " pod="kube-system/kube-proxy-xsk49" Sep 5 00:12:33.099619 kubelet[2919]: I0905 00:12:33.099621 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e85cfd46-c493-47c6-8b17-87607f8f8fda-lib-modules\") pod \"kube-proxy-xsk49\" (UID: \"e85cfd46-c493-47c6-8b17-87607f8f8fda\") " pod="kube-system/kube-proxy-xsk49" Sep 5 00:12:33.099946 kubelet[2919]: I0905 00:12:33.099717 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgwzz\" (UniqueName: \"kubernetes.io/projected/e85cfd46-c493-47c6-8b17-87607f8f8fda-kube-api-access-kgwzz\") pod \"kube-proxy-xsk49\" (UID: \"e85cfd46-c493-47c6-8b17-87607f8f8fda\") " pod="kube-system/kube-proxy-xsk49" Sep 5 00:12:33.100057 kubelet[2919]: I0905 00:12:33.099999 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e85cfd46-c493-47c6-8b17-87607f8f8fda-kube-proxy\") pod \"kube-proxy-xsk49\" (UID: \"e85cfd46-c493-47c6-8b17-87607f8f8fda\") " pod="kube-system/kube-proxy-xsk49" Sep 5 00:12:33.304935 containerd[1534]: time="2024-09-05T00:12:33.304725125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-vgpkt,Uid:8c808708-c230-4c1c-811a-9f8d11915e7d,Namespace:tigera-operator,Attempt:0,}" Sep 5 00:12:33.316338 containerd[1534]: time="2024-09-05T00:12:33.316299101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:12:33.316338 containerd[1534]: time="2024-09-05T00:12:33.316329278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:12:33.316338 containerd[1534]: time="2024-09-05T00:12:33.316336473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:33.316454 containerd[1534]: time="2024-09-05T00:12:33.316374656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:33.340529 systemd[1]: Started cri-containerd-c9a311d954a5615c156cee20932395e028f71b02202d9697d829267095477d3d.scope - libcontainer container c9a311d954a5615c156cee20932395e028f71b02202d9697d829267095477d3d. 
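
Just above, the kubelet receives the node's pod CIDR (192.168.0.0/24) and pushes it to containerd through CRI; the "No cni config template is specified" message means the runtime keeps waiting for a CNI configuration, which the tigera-operator being started here is expected to install later via Calico. For orientation, the address budget that per-node CIDR provides:

# Sketch: size of the per-node pod CIDR reported in the log above.
import ipaddress

cidr = ipaddress.ip_network("192.168.0.0/24")
print(cidr.num_addresses, "addresses in", cidr)                  # 256
print(len(list(cidr.hosts())), "excluding network/broadcast")    # 254
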
Sep 5 00:12:33.362614 containerd[1534]: time="2024-09-05T00:12:33.362577640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xsk49,Uid:e85cfd46-c493-47c6-8b17-87607f8f8fda,Namespace:kube-system,Attempt:0,}" Sep 5 00:12:33.370205 containerd[1534]: time="2024-09-05T00:12:33.370135159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-vgpkt,Uid:8c808708-c230-4c1c-811a-9f8d11915e7d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c9a311d954a5615c156cee20932395e028f71b02202d9697d829267095477d3d\"" Sep 5 00:12:33.371004 containerd[1534]: time="2024-09-05T00:12:33.370965921Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 5 00:12:33.372144 containerd[1534]: time="2024-09-05T00:12:33.372108924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:12:33.372183 containerd[1534]: time="2024-09-05T00:12:33.372141928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:12:33.372183 containerd[1534]: time="2024-09-05T00:12:33.372154634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:33.372222 containerd[1534]: time="2024-09-05T00:12:33.372211647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:33.387364 systemd[1]: Started cri-containerd-f141d31cf6fb402a525a23d6d190f6ffae02dc6dfd1cf306b8a32b7a0449cecd.scope - libcontainer container f141d31cf6fb402a525a23d6d190f6ffae02dc6dfd1cf306b8a32b7a0449cecd. Sep 5 00:12:33.397474 containerd[1534]: time="2024-09-05T00:12:33.397415328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xsk49,Uid:e85cfd46-c493-47c6-8b17-87607f8f8fda,Namespace:kube-system,Attempt:0,} returns sandbox id \"f141d31cf6fb402a525a23d6d190f6ffae02dc6dfd1cf306b8a32b7a0449cecd\"" Sep 5 00:12:33.398741 containerd[1534]: time="2024-09-05T00:12:33.398696148Z" level=info msg="CreateContainer within sandbox \"f141d31cf6fb402a525a23d6d190f6ffae02dc6dfd1cf306b8a32b7a0449cecd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 00:12:33.404091 containerd[1534]: time="2024-09-05T00:12:33.404075130Z" level=info msg="CreateContainer within sandbox \"f141d31cf6fb402a525a23d6d190f6ffae02dc6dfd1cf306b8a32b7a0449cecd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6383aadfd38c057eaf050e8ec51adaba86f07211de4addbb2542470cadd6247f\"" Sep 5 00:12:33.404338 containerd[1534]: time="2024-09-05T00:12:33.404327037Z" level=info msg="StartContainer for \"6383aadfd38c057eaf050e8ec51adaba86f07211de4addbb2542470cadd6247f\"" Sep 5 00:12:33.433454 systemd[1]: Started cri-containerd-6383aadfd38c057eaf050e8ec51adaba86f07211de4addbb2542470cadd6247f.scope - libcontainer container 6383aadfd38c057eaf050e8ec51adaba86f07211de4addbb2542470cadd6247f. 
Sep 5 00:12:33.454550 containerd[1534]: time="2024-09-05T00:12:33.454481696Z" level=info msg="StartContainer for \"6383aadfd38c057eaf050e8ec51adaba86f07211de4addbb2542470cadd6247f\" returns successfully" Sep 5 00:12:33.644477 kubelet[2919]: I0905 00:12:33.644442 2919 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xsk49" podStartSLOduration=0.644394106 podStartE2EDuration="644.394106ms" podCreationTimestamp="2024-09-05 00:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-05 00:12:33.644156622 +0000 UTC m=+14.084988833" watchObservedRunningTime="2024-09-05 00:12:33.644394106 +0000 UTC m=+14.085226306" Sep 5 00:12:34.696283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3961352849.mount: Deactivated successfully. Sep 5 00:12:35.042189 containerd[1534]: time="2024-09-05T00:12:35.042103541Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:35.042394 containerd[1534]: time="2024-09-05T00:12:35.042266453Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136557" Sep 5 00:12:35.042662 containerd[1534]: time="2024-09-05T00:12:35.042649854Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:35.043767 containerd[1534]: time="2024-09-05T00:12:35.043718900Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:35.044216 containerd[1534]: time="2024-09-05T00:12:35.044178973Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 1.673191875s" Sep 5 00:12:35.044216 containerd[1534]: time="2024-09-05T00:12:35.044195380Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Sep 5 00:12:35.045087 containerd[1534]: time="2024-09-05T00:12:35.045075099Z" level=info msg="CreateContainer within sandbox \"c9a311d954a5615c156cee20932395e028f71b02202d9697d829267095477d3d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 00:12:35.048872 containerd[1534]: time="2024-09-05T00:12:35.048856968Z" level=info msg="CreateContainer within sandbox \"c9a311d954a5615c156cee20932395e028f71b02202d9697d829267095477d3d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"33ff2af12b9bf08490bf0d043616531deac48f1a1678d344cd3754fb02bebfd4\"" Sep 5 00:12:35.049025 containerd[1534]: time="2024-09-05T00:12:35.049013295Z" level=info msg="StartContainer for \"33ff2af12b9bf08490bf0d043616531deac48f1a1678d344cd3754fb02bebfd4\"" Sep 5 00:12:35.077561 systemd[1]: Started cri-containerd-33ff2af12b9bf08490bf0d043616531deac48f1a1678d344cd3754fb02bebfd4.scope - libcontainer container 33ff2af12b9bf08490bf0d043616531deac48f1a1678d344cd3754fb02bebfd4. 
Sep 5 00:12:35.089933 containerd[1534]: time="2024-09-05T00:12:35.089874095Z" level=info msg="StartContainer for \"33ff2af12b9bf08490bf0d043616531deac48f1a1678d344cd3754fb02bebfd4\" returns successfully" Sep 5 00:12:35.655015 kubelet[2919]: I0905 00:12:35.654995 2919 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-vgpkt" podStartSLOduration=1.9813381639999998 podStartE2EDuration="3.654963844s" podCreationTimestamp="2024-09-05 00:12:32 +0000 UTC" firstStartedPulling="2024-09-05 00:12:33.370759895 +0000 UTC m=+13.811592086" lastFinishedPulling="2024-09-05 00:12:35.044385575 +0000 UTC m=+15.485217766" observedRunningTime="2024-09-05 00:12:35.654797749 +0000 UTC m=+16.095629941" watchObservedRunningTime="2024-09-05 00:12:35.654963844 +0000 UTC m=+16.095796037" Sep 5 00:12:37.852331 kubelet[2919]: I0905 00:12:37.852246 2919 topology_manager.go:215] "Topology Admit Handler" podUID="9a4c367d-c72b-449d-8ab6-ad92aa22c1bb" podNamespace="calico-system" podName="calico-typha-84f7df598f-s7jfz" Sep 5 00:12:37.862768 systemd[1]: Created slice kubepods-besteffort-pod9a4c367d_c72b_449d_8ab6_ad92aa22c1bb.slice - libcontainer container kubepods-besteffort-pod9a4c367d_c72b_449d_8ab6_ad92aa22c1bb.slice. Sep 5 00:12:37.876742 kubelet[2919]: I0905 00:12:37.876701 2919 topology_manager.go:215] "Topology Admit Handler" podUID="2505fe54-53ef-405a-a1f9-6fc11526db57" podNamespace="calico-system" podName="calico-node-dcnnv" Sep 5 00:12:37.880927 systemd[1]: Created slice kubepods-besteffort-pod2505fe54_53ef_405a_a1f9_6fc11526db57.slice - libcontainer container kubepods-besteffort-pod2505fe54_53ef_405a_a1f9_6fc11526db57.slice. Sep 5 00:12:37.937364 kubelet[2919]: I0905 00:12:37.937294 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2505fe54-53ef-405a-a1f9-6fc11526db57-xtables-lock\") pod \"calico-node-dcnnv\" (UID: \"2505fe54-53ef-405a-a1f9-6fc11526db57\") " pod="calico-system/calico-node-dcnnv" Sep 5 00:12:37.937697 kubelet[2919]: I0905 00:12:37.937429 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2505fe54-53ef-405a-a1f9-6fc11526db57-cni-net-dir\") pod \"calico-node-dcnnv\" (UID: \"2505fe54-53ef-405a-a1f9-6fc11526db57\") " pod="calico-system/calico-node-dcnnv" Sep 5 00:12:37.937697 kubelet[2919]: I0905 00:12:37.937587 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2505fe54-53ef-405a-a1f9-6fc11526db57-cni-log-dir\") pod \"calico-node-dcnnv\" (UID: \"2505fe54-53ef-405a-a1f9-6fc11526db57\") " pod="calico-system/calico-node-dcnnv" Sep 5 00:12:37.938091 kubelet[2919]: I0905 00:12:37.937712 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2505fe54-53ef-405a-a1f9-6fc11526db57-var-lib-calico\") pod \"calico-node-dcnnv\" (UID: \"2505fe54-53ef-405a-a1f9-6fc11526db57\") " pod="calico-system/calico-node-dcnnv" Sep 5 00:12:37.938091 kubelet[2919]: I0905 00:12:37.937898 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2505fe54-53ef-405a-a1f9-6fc11526db57-tigera-ca-bundle\") pod \"calico-node-dcnnv\" (UID: 
\"2505fe54-53ef-405a-a1f9-6fc11526db57\") " pod="calico-system/calico-node-dcnnv" Sep 5 00:12:37.938091 kubelet[2919]: I0905 00:12:37.938041 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a4c367d-c72b-449d-8ab6-ad92aa22c1bb-tigera-ca-bundle\") pod \"calico-typha-84f7df598f-s7jfz\" (UID: \"9a4c367d-c72b-449d-8ab6-ad92aa22c1bb\") " pod="calico-system/calico-typha-84f7df598f-s7jfz" Sep 5 00:12:37.938601 kubelet[2919]: I0905 00:12:37.938180 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9a4c367d-c72b-449d-8ab6-ad92aa22c1bb-typha-certs\") pod \"calico-typha-84f7df598f-s7jfz\" (UID: \"9a4c367d-c72b-449d-8ab6-ad92aa22c1bb\") " pod="calico-system/calico-typha-84f7df598f-s7jfz" Sep 5 00:12:37.938601 kubelet[2919]: I0905 00:12:37.938308 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2505fe54-53ef-405a-a1f9-6fc11526db57-lib-modules\") pod \"calico-node-dcnnv\" (UID: \"2505fe54-53ef-405a-a1f9-6fc11526db57\") " pod="calico-system/calico-node-dcnnv" Sep 5 00:12:37.938601 kubelet[2919]: I0905 00:12:37.938456 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2505fe54-53ef-405a-a1f9-6fc11526db57-node-certs\") pod \"calico-node-dcnnv\" (UID: \"2505fe54-53ef-405a-a1f9-6fc11526db57\") " pod="calico-system/calico-node-dcnnv" Sep 5 00:12:37.939200 kubelet[2919]: I0905 00:12:37.938611 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ql9b\" (UniqueName: \"kubernetes.io/projected/2505fe54-53ef-405a-a1f9-6fc11526db57-kube-api-access-8ql9b\") pod \"calico-node-dcnnv\" (UID: \"2505fe54-53ef-405a-a1f9-6fc11526db57\") " pod="calico-system/calico-node-dcnnv" Sep 5 00:12:37.939200 kubelet[2919]: I0905 00:12:37.938707 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw7dg\" (UniqueName: \"kubernetes.io/projected/9a4c367d-c72b-449d-8ab6-ad92aa22c1bb-kube-api-access-cw7dg\") pod \"calico-typha-84f7df598f-s7jfz\" (UID: \"9a4c367d-c72b-449d-8ab6-ad92aa22c1bb\") " pod="calico-system/calico-typha-84f7df598f-s7jfz" Sep 5 00:12:37.939200 kubelet[2919]: I0905 00:12:37.938771 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2505fe54-53ef-405a-a1f9-6fc11526db57-policysync\") pod \"calico-node-dcnnv\" (UID: \"2505fe54-53ef-405a-a1f9-6fc11526db57\") " pod="calico-system/calico-node-dcnnv" Sep 5 00:12:37.939200 kubelet[2919]: I0905 00:12:37.938936 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2505fe54-53ef-405a-a1f9-6fc11526db57-cni-bin-dir\") pod \"calico-node-dcnnv\" (UID: \"2505fe54-53ef-405a-a1f9-6fc11526db57\") " pod="calico-system/calico-node-dcnnv" Sep 5 00:12:37.939200 kubelet[2919]: I0905 00:12:37.939104 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2505fe54-53ef-405a-a1f9-6fc11526db57-var-run-calico\") pod \"calico-node-dcnnv\" (UID: 
\"2505fe54-53ef-405a-a1f9-6fc11526db57\") " pod="calico-system/calico-node-dcnnv" Sep 5 00:12:37.939990 kubelet[2919]: I0905 00:12:37.939227 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2505fe54-53ef-405a-a1f9-6fc11526db57-flexvol-driver-host\") pod \"calico-node-dcnnv\" (UID: \"2505fe54-53ef-405a-a1f9-6fc11526db57\") " pod="calico-system/calico-node-dcnnv" Sep 5 00:12:38.006536 kubelet[2919]: I0905 00:12:38.006439 2919 topology_manager.go:215] "Topology Admit Handler" podUID="3109814d-a4ad-47dd-9273-9920f3f0d86d" podNamespace="calico-system" podName="csi-node-driver-26b8j" Sep 5 00:12:38.007978 kubelet[2919]: E0905 00:12:38.007895 2919 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26b8j" podUID="3109814d-a4ad-47dd-9273-9920f3f0d86d" Sep 5 00:12:38.040004 kubelet[2919]: I0905 00:12:38.039981 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3109814d-a4ad-47dd-9273-9920f3f0d86d-socket-dir\") pod \"csi-node-driver-26b8j\" (UID: \"3109814d-a4ad-47dd-9273-9920f3f0d86d\") " pod="calico-system/csi-node-driver-26b8j" Sep 5 00:12:38.040103 kubelet[2919]: I0905 00:12:38.040043 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g8ws\" (UniqueName: \"kubernetes.io/projected/3109814d-a4ad-47dd-9273-9920f3f0d86d-kube-api-access-2g8ws\") pod \"csi-node-driver-26b8j\" (UID: \"3109814d-a4ad-47dd-9273-9920f3f0d86d\") " pod="calico-system/csi-node-driver-26b8j" Sep 5 00:12:38.040103 kubelet[2919]: I0905 00:12:38.040077 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3109814d-a4ad-47dd-9273-9920f3f0d86d-registration-dir\") pod \"csi-node-driver-26b8j\" (UID: \"3109814d-a4ad-47dd-9273-9920f3f0d86d\") " pod="calico-system/csi-node-driver-26b8j" Sep 5 00:12:38.040261 kubelet[2919]: I0905 00:12:38.040244 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3109814d-a4ad-47dd-9273-9920f3f0d86d-kubelet-dir\") pod \"csi-node-driver-26b8j\" (UID: \"3109814d-a4ad-47dd-9273-9920f3f0d86d\") " pod="calico-system/csi-node-driver-26b8j" Sep 5 00:12:38.040486 kubelet[2919]: E0905 00:12:38.040474 2919 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:12:38.040526 kubelet[2919]: W0905 00:12:38.040486 2919 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:12:38.040526 kubelet[2919]: E0905 00:12:38.040504 2919 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[the three kubelet messages above — driver-call.go:262 "Failed to unmarshal output for command: init", driver-call.go:149 "FlexVolume: driver call failed ... executable file not found in $PATH" and plugins.go:730 "Error dynamically probing plugins ... nodeagent~uds" — repeat essentially verbatim several dozen times through 00:12:38.159; the intervening duplicates are omitted here]
Sep 5 00:12:38.044132 kubelet[2919]: I0905 00:12:38.044096 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3109814d-a4ad-47dd-9273-9920f3f0d86d-varrun\") pod \"csi-node-driver-26b8j\" (UID: \"3109814d-a4ad-47dd-9273-9920f3f0d86d\") " pod="calico-system/csi-node-driver-26b8j"
Sep 5 00:12:38.151971 kubelet[2919]: E0905 00:12:38.151945 2919 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:12:38.151971 kubelet[2919]: W0905 00:12:38.151968 2919 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:12:38.152103 kubelet[2919]: E0905 00:12:38.151981 2919 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 5 00:12:38.159179 kubelet[2919]: E0905 00:12:38.159154 2919 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:12:38.159179 kubelet[2919]: W0905 00:12:38.159169 2919 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:12:38.159374 kubelet[2919]: E0905 00:12:38.159186 2919 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:12:38.166743 containerd[1534]: time="2024-09-05T00:12:38.166672023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84f7df598f-s7jfz,Uid:9a4c367d-c72b-449d-8ab6-ad92aa22c1bb,Namespace:calico-system,Attempt:0,}" Sep 5 00:12:38.177032 containerd[1534]: time="2024-09-05T00:12:38.176821235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:12:38.177032 containerd[1534]: time="2024-09-05T00:12:38.177021591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:12:38.177032 containerd[1534]: time="2024-09-05T00:12:38.177029084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:38.177126 containerd[1534]: time="2024-09-05T00:12:38.177070055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:38.183234 containerd[1534]: time="2024-09-05T00:12:38.183210073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dcnnv,Uid:2505fe54-53ef-405a-a1f9-6fc11526db57,Namespace:calico-system,Attempt:0,}" Sep 5 00:12:38.192236 containerd[1534]: time="2024-09-05T00:12:38.192192855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:12:38.192236 containerd[1534]: time="2024-09-05T00:12:38.192222727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:12:38.192236 containerd[1534]: time="2024-09-05T00:12:38.192230090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:38.192343 containerd[1534]: time="2024-09-05T00:12:38.192276875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:38.194404 systemd[1]: Started cri-containerd-829ad81c9d17b322412fcf01ed69c77ba0553abab277aae91eb2cc654a416e8e.scope - libcontainer container 829ad81c9d17b322412fcf01ed69c77ba0553abab277aae91eb2cc654a416e8e. Sep 5 00:12:38.198045 systemd[1]: Started cri-containerd-5a3b9df237fe0cf7c56bec636db46b42ea2f76f5e0727e79de54263f8e151790.scope - libcontainer container 5a3b9df237fe0cf7c56bec636db46b42ea2f76f5e0727e79de54263f8e151790. 
Sep 5 00:12:38.209504 containerd[1534]: time="2024-09-05T00:12:38.209481923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dcnnv,Uid:2505fe54-53ef-405a-a1f9-6fc11526db57,Namespace:calico-system,Attempt:0,} returns sandbox id \"5a3b9df237fe0cf7c56bec636db46b42ea2f76f5e0727e79de54263f8e151790\"" Sep 5 00:12:38.210200 containerd[1534]: time="2024-09-05T00:12:38.210189630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 5 00:12:38.217535 containerd[1534]: time="2024-09-05T00:12:38.217510640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84f7df598f-s7jfz,Uid:9a4c367d-c72b-449d-8ab6-ad92aa22c1bb,Namespace:calico-system,Attempt:0,} returns sandbox id \"829ad81c9d17b322412fcf01ed69c77ba0553abab277aae91eb2cc654a416e8e\"" Sep 5 00:12:39.604548 kubelet[2919]: E0905 00:12:39.604449 2919 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26b8j" podUID="3109814d-a4ad-47dd-9273-9920f3f0d86d" Sep 5 00:12:39.733100 containerd[1534]: time="2024-09-05T00:12:39.733077812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:39.733302 containerd[1534]: time="2024-09-05T00:12:39.733266103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Sep 5 00:12:39.733622 containerd[1534]: time="2024-09-05T00:12:39.733610685Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:39.734663 containerd[1534]: time="2024-09-05T00:12:39.734623063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:39.735382 containerd[1534]: time="2024-09-05T00:12:39.735346020Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.52514112s" Sep 5 00:12:39.735382 containerd[1534]: time="2024-09-05T00:12:39.735365671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Sep 5 00:12:39.735695 containerd[1534]: time="2024-09-05T00:12:39.735659986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 5 00:12:39.736192 containerd[1534]: time="2024-09-05T00:12:39.736157447Z" level=info msg="CreateContainer within sandbox \"5a3b9df237fe0cf7c56bec636db46b42ea2f76f5e0727e79de54263f8e151790\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 00:12:39.741538 containerd[1534]: time="2024-09-05T00:12:39.741488177Z" level=info msg="CreateContainer within sandbox \"5a3b9df237fe0cf7c56bec636db46b42ea2f76f5e0727e79de54263f8e151790\" for 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"95f586d867e1e808e14a138872be3ffc36ec65c4057dd803126d8ec723cc6a6e\"" Sep 5 00:12:39.741753 containerd[1534]: time="2024-09-05T00:12:39.741710358Z" level=info msg="StartContainer for \"95f586d867e1e808e14a138872be3ffc36ec65c4057dd803126d8ec723cc6a6e\"" Sep 5 00:12:39.767783 systemd[1]: Started cri-containerd-95f586d867e1e808e14a138872be3ffc36ec65c4057dd803126d8ec723cc6a6e.scope - libcontainer container 95f586d867e1e808e14a138872be3ffc36ec65c4057dd803126d8ec723cc6a6e. Sep 5 00:12:39.795926 containerd[1534]: time="2024-09-05T00:12:39.795872380Z" level=info msg="StartContainer for \"95f586d867e1e808e14a138872be3ffc36ec65c4057dd803126d8ec723cc6a6e\" returns successfully" Sep 5 00:12:39.801929 systemd[1]: cri-containerd-95f586d867e1e808e14a138872be3ffc36ec65c4057dd803126d8ec723cc6a6e.scope: Deactivated successfully. Sep 5 00:12:39.816238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95f586d867e1e808e14a138872be3ffc36ec65c4057dd803126d8ec723cc6a6e-rootfs.mount: Deactivated successfully. Sep 5 00:12:40.051776 containerd[1534]: time="2024-09-05T00:12:40.051693508Z" level=info msg="shim disconnected" id=95f586d867e1e808e14a138872be3ffc36ec65c4057dd803126d8ec723cc6a6e namespace=k8s.io Sep 5 00:12:40.051776 containerd[1534]: time="2024-09-05T00:12:40.051733364Z" level=warning msg="cleaning up after shim disconnected" id=95f586d867e1e808e14a138872be3ffc36ec65c4057dd803126d8ec723cc6a6e namespace=k8s.io Sep 5 00:12:40.051776 containerd[1534]: time="2024-09-05T00:12:40.051740525Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:12:41.604631 kubelet[2919]: E0905 00:12:41.604608 2919 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26b8j" podUID="3109814d-a4ad-47dd-9273-9920f3f0d86d" Sep 5 00:12:41.783865 containerd[1534]: time="2024-09-05T00:12:41.783843917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:41.784072 containerd[1534]: time="2024-09-05T00:12:41.784053508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Sep 5 00:12:41.784393 containerd[1534]: time="2024-09-05T00:12:41.784351965Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:41.785325 containerd[1534]: time="2024-09-05T00:12:41.785271428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:41.785693 containerd[1534]: time="2024-09-05T00:12:41.785648643Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.049973414s" Sep 5 00:12:41.785693 containerd[1534]: time="2024-09-05T00:12:41.785666851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" 
returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Sep 5 00:12:41.785943 containerd[1534]: time="2024-09-05T00:12:41.785897863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 5 00:12:41.789005 containerd[1534]: time="2024-09-05T00:12:41.788960498Z" level=info msg="CreateContainer within sandbox \"829ad81c9d17b322412fcf01ed69c77ba0553abab277aae91eb2cc654a416e8e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 00:12:41.793082 containerd[1534]: time="2024-09-05T00:12:41.793043430Z" level=info msg="CreateContainer within sandbox \"829ad81c9d17b322412fcf01ed69c77ba0553abab277aae91eb2cc654a416e8e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"eed44130099c2777435981c56b8fc801b5a1ba1cf6e121544c7ab5a199576159\"" Sep 5 00:12:41.793332 containerd[1534]: time="2024-09-05T00:12:41.793286703Z" level=info msg="StartContainer for \"eed44130099c2777435981c56b8fc801b5a1ba1cf6e121544c7ab5a199576159\"" Sep 5 00:12:41.812507 systemd[1]: Started cri-containerd-eed44130099c2777435981c56b8fc801b5a1ba1cf6e121544c7ab5a199576159.scope - libcontainer container eed44130099c2777435981c56b8fc801b5a1ba1cf6e121544c7ab5a199576159. Sep 5 00:12:41.835922 containerd[1534]: time="2024-09-05T00:12:41.835869851Z" level=info msg="StartContainer for \"eed44130099c2777435981c56b8fc801b5a1ba1cf6e121544c7ab5a199576159\" returns successfully" Sep 5 00:12:42.670346 kubelet[2919]: I0905 00:12:42.670293 2919 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-84f7df598f-s7jfz" podStartSLOduration=2.102471353 podStartE2EDuration="5.670256793s" podCreationTimestamp="2024-09-05 00:12:37 +0000 UTC" firstStartedPulling="2024-09-05 00:12:38.21801979 +0000 UTC m=+18.658851980" lastFinishedPulling="2024-09-05 00:12:41.785805229 +0000 UTC m=+22.226637420" observedRunningTime="2024-09-05 00:12:42.669927093 +0000 UTC m=+23.110759284" watchObservedRunningTime="2024-09-05 00:12:42.670256793 +0000 UTC m=+23.111088981" Sep 5 00:12:43.604698 kubelet[2919]: E0905 00:12:43.604670 2919 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26b8j" podUID="3109814d-a4ad-47dd-9273-9920f3f0d86d" Sep 5 00:12:43.666668 kubelet[2919]: I0905 00:12:43.666655 2919 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:12:45.359407 containerd[1534]: time="2024-09-05T00:12:45.359384718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:45.359612 containerd[1534]: time="2024-09-05T00:12:45.359591853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 5 00:12:45.359918 containerd[1534]: time="2024-09-05T00:12:45.359907233Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:45.360872 containerd[1534]: time="2024-09-05T00:12:45.360830549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:45.361306 
containerd[1534]: time="2024-09-05T00:12:45.361252516Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 3.575334697s" Sep 5 00:12:45.361306 containerd[1534]: time="2024-09-05T00:12:45.361272237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 5 00:12:45.362059 containerd[1534]: time="2024-09-05T00:12:45.362045995Z" level=info msg="CreateContainer within sandbox \"5a3b9df237fe0cf7c56bec636db46b42ea2f76f5e0727e79de54263f8e151790\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 00:12:45.366493 containerd[1534]: time="2024-09-05T00:12:45.366448972Z" level=info msg="CreateContainer within sandbox \"5a3b9df237fe0cf7c56bec636db46b42ea2f76f5e0727e79de54263f8e151790\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8562bb7e4417c76f0d7f3789e9b1d2c51e9f9b3b709931209a84a1955ff6ccd0\"" Sep 5 00:12:45.366683 containerd[1534]: time="2024-09-05T00:12:45.366670214Z" level=info msg="StartContainer for \"8562bb7e4417c76f0d7f3789e9b1d2c51e9f9b3b709931209a84a1955ff6ccd0\"" Sep 5 00:12:45.399569 systemd[1]: Started cri-containerd-8562bb7e4417c76f0d7f3789e9b1d2c51e9f9b3b709931209a84a1955ff6ccd0.scope - libcontainer container 8562bb7e4417c76f0d7f3789e9b1d2c51e9f9b3b709931209a84a1955ff6ccd0. Sep 5 00:12:45.411597 containerd[1534]: time="2024-09-05T00:12:45.411547511Z" level=info msg="StartContainer for \"8562bb7e4417c76f0d7f3789e9b1d2c51e9f9b3b709931209a84a1955ff6ccd0\" returns successfully" Sep 5 00:12:45.604888 kubelet[2919]: E0905 00:12:45.604813 2919 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26b8j" podUID="3109814d-a4ad-47dd-9273-9920f3f0d86d" Sep 5 00:12:45.919898 systemd[1]: cri-containerd-8562bb7e4417c76f0d7f3789e9b1d2c51e9f9b3b709931209a84a1955ff6ccd0.scope: Deactivated successfully. Sep 5 00:12:45.930944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8562bb7e4417c76f0d7f3789e9b1d2c51e9f9b3b709931209a84a1955ff6ccd0-rootfs.mount: Deactivated successfully. 
Sep 5 00:12:45.959477 kubelet[2919]: I0905 00:12:45.959425 2919 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 5 00:12:45.989889 kubelet[2919]: I0905 00:12:45.989812 2919 topology_manager.go:215] "Topology Admit Handler" podUID="76079e94-a1fe-4931-9c5f-c490f8c55dd8" podNamespace="calico-system" podName="calico-kube-controllers-5996bfb947-4hzkj" Sep 5 00:12:45.991190 kubelet[2919]: I0905 00:12:45.991139 2919 topology_manager.go:215] "Topology Admit Handler" podUID="e7b785e3-e901-481a-96a4-597b35c1db1f" podNamespace="kube-system" podName="coredns-76f75df574-xr8b9" Sep 5 00:12:45.992452 kubelet[2919]: I0905 00:12:45.992390 2919 topology_manager.go:215] "Topology Admit Handler" podUID="b66a1d66-183c-460e-a689-dc92fc0f3c54" podNamespace="kube-system" podName="coredns-76f75df574-46mhs" Sep 5 00:12:46.006450 systemd[1]: Created slice kubepods-besteffort-pod76079e94_a1fe_4931_9c5f_c490f8c55dd8.slice - libcontainer container kubepods-besteffort-pod76079e94_a1fe_4931_9c5f_c490f8c55dd8.slice. Sep 5 00:12:46.017969 systemd[1]: Created slice kubepods-burstable-pode7b785e3_e901_481a_96a4_597b35c1db1f.slice - libcontainer container kubepods-burstable-pode7b785e3_e901_481a_96a4_597b35c1db1f.slice. Sep 5 00:12:46.031749 systemd[1]: Created slice kubepods-burstable-podb66a1d66_183c_460e_a689_dc92fc0f3c54.slice - libcontainer container kubepods-burstable-podb66a1d66_183c_460e_a689_dc92fc0f3c54.slice. Sep 5 00:12:46.105708 kubelet[2919]: I0905 00:12:46.105641 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhndp\" (UniqueName: \"kubernetes.io/projected/76079e94-a1fe-4931-9c5f-c490f8c55dd8-kube-api-access-dhndp\") pod \"calico-kube-controllers-5996bfb947-4hzkj\" (UID: \"76079e94-a1fe-4931-9c5f-c490f8c55dd8\") " pod="calico-system/calico-kube-controllers-5996bfb947-4hzkj" Sep 5 00:12:46.105708 kubelet[2919]: I0905 00:12:46.105699 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b66a1d66-183c-460e-a689-dc92fc0f3c54-config-volume\") pod \"coredns-76f75df574-46mhs\" (UID: \"b66a1d66-183c-460e-a689-dc92fc0f3c54\") " pod="kube-system/coredns-76f75df574-46mhs" Sep 5 00:12:46.105984 kubelet[2919]: I0905 00:12:46.105731 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xv58\" (UniqueName: \"kubernetes.io/projected/b66a1d66-183c-460e-a689-dc92fc0f3c54-kube-api-access-6xv58\") pod \"coredns-76f75df574-46mhs\" (UID: \"b66a1d66-183c-460e-a689-dc92fc0f3c54\") " pod="kube-system/coredns-76f75df574-46mhs" Sep 5 00:12:46.105984 kubelet[2919]: I0905 00:12:46.105811 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76079e94-a1fe-4931-9c5f-c490f8c55dd8-tigera-ca-bundle\") pod \"calico-kube-controllers-5996bfb947-4hzkj\" (UID: \"76079e94-a1fe-4931-9c5f-c490f8c55dd8\") " pod="calico-system/calico-kube-controllers-5996bfb947-4hzkj" Sep 5 00:12:46.105984 kubelet[2919]: I0905 00:12:46.105938 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7b785e3-e901-481a-96a4-597b35c1db1f-config-volume\") pod \"coredns-76f75df574-xr8b9\" (UID: \"e7b785e3-e901-481a-96a4-597b35c1db1f\") " pod="kube-system/coredns-76f75df574-xr8b9" Sep 5 00:12:46.105984 
kubelet[2919]: I0905 00:12:46.105977 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mfh8\" (UniqueName: \"kubernetes.io/projected/e7b785e3-e901-481a-96a4-597b35c1db1f-kube-api-access-6mfh8\") pod \"coredns-76f75df574-xr8b9\" (UID: \"e7b785e3-e901-481a-96a4-597b35c1db1f\") " pod="kube-system/coredns-76f75df574-xr8b9" Sep 5 00:12:46.314035 containerd[1534]: time="2024-09-05T00:12:46.313807513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5996bfb947-4hzkj,Uid:76079e94-a1fe-4931-9c5f-c490f8c55dd8,Namespace:calico-system,Attempt:0,}" Sep 5 00:12:46.327100 containerd[1534]: time="2024-09-05T00:12:46.326986885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xr8b9,Uid:e7b785e3-e901-481a-96a4-597b35c1db1f,Namespace:kube-system,Attempt:0,}" Sep 5 00:12:46.335204 containerd[1534]: time="2024-09-05T00:12:46.335132482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-46mhs,Uid:b66a1d66-183c-460e-a689-dc92fc0f3c54,Namespace:kube-system,Attempt:0,}" Sep 5 00:12:46.583367 containerd[1534]: time="2024-09-05T00:12:46.583300548Z" level=info msg="shim disconnected" id=8562bb7e4417c76f0d7f3789e9b1d2c51e9f9b3b709931209a84a1955ff6ccd0 namespace=k8s.io Sep 5 00:12:46.583367 containerd[1534]: time="2024-09-05T00:12:46.583365734Z" level=warning msg="cleaning up after shim disconnected" id=8562bb7e4417c76f0d7f3789e9b1d2c51e9f9b3b709931209a84a1955ff6ccd0 namespace=k8s.io Sep 5 00:12:46.583663 containerd[1534]: time="2024-09-05T00:12:46.583371831Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:12:46.589664 containerd[1534]: time="2024-09-05T00:12:46.589594828Z" level=warning msg="cleanup warnings time=\"2024-09-05T00:12:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 5 00:12:46.616622 containerd[1534]: time="2024-09-05T00:12:46.616558915Z" level=error msg="Failed to destroy network for sandbox \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:12:46.616762 containerd[1534]: time="2024-09-05T00:12:46.616625088Z" level=error msg="Failed to destroy network for sandbox \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:12:46.616879 containerd[1534]: time="2024-09-05T00:12:46.616866639Z" level=error msg="encountered an error cleaning up failed sandbox \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:12:46.616906 containerd[1534]: time="2024-09-05T00:12:46.616882971Z" level=error msg="encountered an error cleaning up failed sandbox \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:12:46.616923 containerd[1534]: time="2024-09-05T00:12:46.616900943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5996bfb947-4hzkj,Uid:76079e94-a1fe-4931-9c5f-c490f8c55dd8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:12:46.616963 containerd[1534]: time="2024-09-05T00:12:46.616918110Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xr8b9,Uid:e7b785e3-e901-481a-96a4-597b35c1db1f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:12:46.617072 kubelet[2919]: E0905 00:12:46.617061 2919 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:12:46.617234 kubelet[2919]: E0905 00:12:46.617103 2919 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5996bfb947-4hzkj" Sep 5 00:12:46.617234 kubelet[2919]: E0905 00:12:46.617118 2919 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5996bfb947-4hzkj" Sep 5 00:12:46.617234 kubelet[2919]: E0905 00:12:46.617061 2919 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:12:46.617234 kubelet[2919]: E0905 00:12:46.617152 2919 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-76f75df574-xr8b9" Sep 5 00:12:46.617359 kubelet[2919]: E0905 00:12:46.617165 2919 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xr8b9" Sep 5 00:12:46.617359 kubelet[2919]: E0905 00:12:46.617189 2919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xr8b9_kube-system(e7b785e3-e901-481a-96a4-597b35c1db1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xr8b9_kube-system(e7b785e3-e901-481a-96a4-597b35c1db1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xr8b9" podUID="e7b785e3-e901-481a-96a4-597b35c1db1f" Sep 5 00:12:46.617359 kubelet[2919]: E0905 00:12:46.617153 2919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5996bfb947-4hzkj_calico-system(76079e94-a1fe-4931-9c5f-c490f8c55dd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5996bfb947-4hzkj_calico-system(76079e94-a1fe-4931-9c5f-c490f8c55dd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5996bfb947-4hzkj" podUID="76079e94-a1fe-4931-9c5f-c490f8c55dd8" Sep 5 00:12:46.618105 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3-shm.mount: Deactivated successfully. Sep 5 00:12:46.618160 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655-shm.mount: Deactivated successfully. 
Sep 5 00:12:46.626505 containerd[1534]: time="2024-09-05T00:12:46.626451907Z" level=error msg="Failed to destroy network for sandbox \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:12:46.626690 containerd[1534]: time="2024-09-05T00:12:46.626649695Z" level=error msg="encountered an error cleaning up failed sandbox \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:12:46.626690 containerd[1534]: time="2024-09-05T00:12:46.626675674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-46mhs,Uid:b66a1d66-183c-460e-a689-dc92fc0f3c54,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:12:46.626844 kubelet[2919]: E0905 00:12:46.626804 2919 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:12:46.626844 kubelet[2919]: E0905 00:12:46.626841 2919 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-46mhs" Sep 5 00:12:46.626907 kubelet[2919]: E0905 00:12:46.626855 2919 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-46mhs" Sep 5 00:12:46.626907 kubelet[2919]: E0905 00:12:46.626885 2919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-46mhs_kube-system(b66a1d66-183c-460e-a689-dc92fc0f3c54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-46mhs_kube-system(b66a1d66-183c-460e-a689-dc92fc0f3c54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-76f75df574-46mhs" podUID="b66a1d66-183c-460e-a689-dc92fc0f3c54" Sep 5 00:12:46.673469 kubelet[2919]: I0905 00:12:46.673421 2919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Sep 5 00:12:46.673869 containerd[1534]: time="2024-09-05T00:12:46.673817402Z" level=info msg="StopPodSandbox for \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\"" Sep 5 00:12:46.673996 kubelet[2919]: I0905 00:12:46.673951 2919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Sep 5 00:12:46.674049 containerd[1534]: time="2024-09-05T00:12:46.674020375Z" level=info msg="Ensure that sandbox 46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4 in task-service has been cleanup successfully" Sep 5 00:12:46.674307 containerd[1534]: time="2024-09-05T00:12:46.674280445Z" level=info msg="StopPodSandbox for \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\"" Sep 5 00:12:46.674446 containerd[1534]: time="2024-09-05T00:12:46.674424131Z" level=info msg="Ensure that sandbox e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3 in task-service has been cleanup successfully" Sep 5 00:12:46.674610 kubelet[2919]: I0905 00:12:46.674596 2919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Sep 5 00:12:46.675005 containerd[1534]: time="2024-09-05T00:12:46.674986414Z" level=info msg="StopPodSandbox for \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\"" Sep 5 00:12:46.675147 containerd[1534]: time="2024-09-05T00:12:46.675128586Z" level=info msg="Ensure that sandbox 3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655 in task-service has been cleanup successfully" Sep 5 00:12:46.676804 containerd[1534]: time="2024-09-05T00:12:46.676776024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 5 00:12:46.688133 containerd[1534]: time="2024-09-05T00:12:46.688105502Z" level=error msg="StopPodSandbox for \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\" failed" error="failed to destroy network for sandbox \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:12:46.688218 containerd[1534]: time="2024-09-05T00:12:46.688155796Z" level=error msg="StopPodSandbox for \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\" failed" error="failed to destroy network for sandbox \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:12:46.688218 containerd[1534]: time="2024-09-05T00:12:46.688105254Z" level=error msg="StopPodSandbox for \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\" failed" error="failed to destroy network for sandbox \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 5 00:12:46.688273 kubelet[2919]: E0905 00:12:46.688251 2919 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Sep 5 00:12:46.688273 kubelet[2919]: E0905 00:12:46.688251 2919 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Sep 5 00:12:46.688321 kubelet[2919]: E0905 00:12:46.688276 2919 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Sep 5 00:12:46.688321 kubelet[2919]: E0905 00:12:46.688304 2919 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4"} Sep 5 00:12:46.688360 kubelet[2919]: E0905 00:12:46.688327 2919 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b66a1d66-183c-460e-a689-dc92fc0f3c54\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:12:46.688360 kubelet[2919]: E0905 00:12:46.688339 2919 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655"} Sep 5 00:12:46.688360 kubelet[2919]: E0905 00:12:46.688344 2919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b66a1d66-183c-460e-a689-dc92fc0f3c54\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-46mhs" podUID="b66a1d66-183c-460e-a689-dc92fc0f3c54" Sep 5 00:12:46.688360 kubelet[2919]: E0905 00:12:46.688351 2919 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3"} Sep 5 00:12:46.688360 kubelet[2919]: E0905 
00:12:46.688360 2919 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"76079e94-a1fe-4931-9c5f-c490f8c55dd8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:12:46.688484 kubelet[2919]: E0905 00:12:46.688370 2919 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e7b785e3-e901-481a-96a4-597b35c1db1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:12:46.688484 kubelet[2919]: E0905 00:12:46.688375 2919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"76079e94-a1fe-4931-9c5f-c490f8c55dd8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5996bfb947-4hzkj" podUID="76079e94-a1fe-4931-9c5f-c490f8c55dd8" Sep 5 00:12:46.688484 kubelet[2919]: E0905 00:12:46.688384 2919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e7b785e3-e901-481a-96a4-597b35c1db1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xr8b9" podUID="e7b785e3-e901-481a-96a4-597b35c1db1f" Sep 5 00:12:47.367346 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4-shm.mount: Deactivated successfully. Sep 5 00:12:47.619670 systemd[1]: Created slice kubepods-besteffort-pod3109814d_a4ad_47dd_9273_9920f3f0d86d.slice - libcontainer container kubepods-besteffort-pod3109814d_a4ad_47dd_9273_9920f3f0d86d.slice. 
Sep 5 00:12:47.625211 containerd[1534]: time="2024-09-05T00:12:47.625122580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-26b8j,Uid:3109814d-a4ad-47dd-9273-9920f3f0d86d,Namespace:calico-system,Attempt:0,}" Sep 5 00:12:47.657614 containerd[1534]: time="2024-09-05T00:12:47.657588361Z" level=error msg="Failed to destroy network for sandbox \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:12:47.657761 containerd[1534]: time="2024-09-05T00:12:47.657749512Z" level=error msg="encountered an error cleaning up failed sandbox \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:12:47.657786 containerd[1534]: time="2024-09-05T00:12:47.657777570Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-26b8j,Uid:3109814d-a4ad-47dd-9273-9920f3f0d86d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:12:47.657956 kubelet[2919]: E0905 00:12:47.657942 2919 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:12:47.658157 kubelet[2919]: E0905 00:12:47.657971 2919 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-26b8j" Sep 5 00:12:47.658157 kubelet[2919]: E0905 00:12:47.657989 2919 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-26b8j" Sep 5 00:12:47.658157 kubelet[2919]: E0905 00:12:47.658033 2919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-26b8j_calico-system(3109814d-a4ad-47dd-9273-9920f3f0d86d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-26b8j_calico-system(3109814d-a4ad-47dd-9273-9920f3f0d86d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-26b8j" podUID="3109814d-a4ad-47dd-9273-9920f3f0d86d" Sep 5 00:12:47.659834 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8-shm.mount: Deactivated successfully. Sep 5 00:12:47.677351 kubelet[2919]: I0905 00:12:47.677341 2919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Sep 5 00:12:47.677630 containerd[1534]: time="2024-09-05T00:12:47.677614146Z" level=info msg="StopPodSandbox for \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\"" Sep 5 00:12:47.677730 containerd[1534]: time="2024-09-05T00:12:47.677718219Z" level=info msg="Ensure that sandbox 02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8 in task-service has been cleanup successfully" Sep 5 00:12:47.692682 containerd[1534]: time="2024-09-05T00:12:47.692648125Z" level=error msg="StopPodSandbox for \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\" failed" error="failed to destroy network for sandbox \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:12:47.692885 kubelet[2919]: E0905 00:12:47.692844 2919 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Sep 5 00:12:47.692885 kubelet[2919]: E0905 00:12:47.692878 2919 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8"} Sep 5 00:12:47.692965 kubelet[2919]: E0905 00:12:47.692914 2919 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3109814d-a4ad-47dd-9273-9920f3f0d86d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:12:47.692965 kubelet[2919]: E0905 00:12:47.692937 2919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3109814d-a4ad-47dd-9273-9920f3f0d86d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-26b8j" podUID="3109814d-a4ad-47dd-9273-9920f3f0d86d" Sep 5 00:12:51.291242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3250424415.mount: Deactivated successfully. Sep 5 00:12:51.308164 containerd[1534]: time="2024-09-05T00:12:51.308114743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:51.308377 containerd[1534]: time="2024-09-05T00:12:51.308295800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 5 00:12:51.308721 containerd[1534]: time="2024-09-05T00:12:51.308666261Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:51.309592 containerd[1534]: time="2024-09-05T00:12:51.309547701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:51.309945 containerd[1534]: time="2024-09-05T00:12:51.309905004Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 4.633097286s" Sep 5 00:12:51.309945 containerd[1534]: time="2024-09-05T00:12:51.309921558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 5 00:12:51.313486 containerd[1534]: time="2024-09-05T00:12:51.313467883Z" level=info msg="CreateContainer within sandbox \"5a3b9df237fe0cf7c56bec636db46b42ea2f76f5e0727e79de54263f8e151790\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 00:12:51.319041 containerd[1534]: time="2024-09-05T00:12:51.318997613Z" level=info msg="CreateContainer within sandbox \"5a3b9df237fe0cf7c56bec636db46b42ea2f76f5e0727e79de54263f8e151790\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"637e4da4834be38918637566e2a68e4e537f98f84e2c90b24589e9c113929c90\"" Sep 5 00:12:51.319299 containerd[1534]: time="2024-09-05T00:12:51.319265990Z" level=info msg="StartContainer for \"637e4da4834be38918637566e2a68e4e537f98f84e2c90b24589e9c113929c90\"" Sep 5 00:12:51.347720 systemd[1]: Started cri-containerd-637e4da4834be38918637566e2a68e4e537f98f84e2c90b24589e9c113929c90.scope - libcontainer container 637e4da4834be38918637566e2a68e4e537f98f84e2c90b24589e9c113929c90. Sep 5 00:12:51.396522 containerd[1534]: time="2024-09-05T00:12:51.396486963Z" level=info msg="StartContainer for \"637e4da4834be38918637566e2a68e4e537f98f84e2c90b24589e9c113929c90\" returns successfully" Sep 5 00:12:51.475314 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 00:12:51.475362 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 5 00:12:51.715170 kubelet[2919]: I0905 00:12:51.715116 2919 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-dcnnv" podStartSLOduration=1.61498641 podStartE2EDuration="14.71502738s" podCreationTimestamp="2024-09-05 00:12:37 +0000 UTC" firstStartedPulling="2024-09-05 00:12:38.210067638 +0000 UTC m=+18.650899829" lastFinishedPulling="2024-09-05 00:12:51.310108608 +0000 UTC m=+31.750940799" observedRunningTime="2024-09-05 00:12:51.714183673 +0000 UTC m=+32.155015949" watchObservedRunningTime="2024-09-05 00:12:51.71502738 +0000 UTC m=+32.155859616" Sep 5 00:12:52.691001 kubelet[2919]: I0905 00:12:52.690946 2919 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:12:58.605552 containerd[1534]: time="2024-09-05T00:12:58.605459824Z" level=info msg="StopPodSandbox for \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\"" Sep 5 00:12:58.606546 containerd[1534]: time="2024-09-05T00:12:58.605532210Z" level=info msg="StopPodSandbox for \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\"" Sep 5 00:12:58.656786 containerd[1534]: 2024-09-05 00:12:58.641 [INFO][4563] k8s.go 608: Cleaning up netns ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Sep 5 00:12:58.656786 containerd[1534]: 2024-09-05 00:12:58.641 [INFO][4563] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" iface="eth0" netns="/var/run/netns/cni-b419cb48-14e1-0977-6e39-348b6b982e1a" Sep 5 00:12:58.656786 containerd[1534]: 2024-09-05 00:12:58.641 [INFO][4563] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" iface="eth0" netns="/var/run/netns/cni-b419cb48-14e1-0977-6e39-348b6b982e1a" Sep 5 00:12:58.656786 containerd[1534]: 2024-09-05 00:12:58.641 [INFO][4563] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" iface="eth0" netns="/var/run/netns/cni-b419cb48-14e1-0977-6e39-348b6b982e1a" Sep 5 00:12:58.656786 containerd[1534]: 2024-09-05 00:12:58.641 [INFO][4563] k8s.go 615: Releasing IP address(es) ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Sep 5 00:12:58.656786 containerd[1534]: 2024-09-05 00:12:58.641 [INFO][4563] utils.go 188: Calico CNI releasing IP address ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Sep 5 00:12:58.656786 containerd[1534]: 2024-09-05 00:12:58.651 [INFO][4598] ipam_plugin.go 417: Releasing address using handleID ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" HandleID="k8s-pod-network.3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Workload="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0" Sep 5 00:12:58.656786 containerd[1534]: 2024-09-05 00:12:58.651 [INFO][4598] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 00:12:58.656786 containerd[1534]: 2024-09-05 00:12:58.651 [INFO][4598] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 00:12:58.656786 containerd[1534]: 2024-09-05 00:12:58.654 [WARNING][4598] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" HandleID="k8s-pod-network.3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Workload="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0" Sep 5 00:12:58.656786 containerd[1534]: 2024-09-05 00:12:58.654 [INFO][4598] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" HandleID="k8s-pod-network.3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Workload="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0" Sep 5 00:12:58.656786 containerd[1534]: 2024-09-05 00:12:58.654 [INFO][4598] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 00:12:58.656786 containerd[1534]: 2024-09-05 00:12:58.656 [INFO][4563] k8s.go 621: Teardown processing complete. ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Sep 5 00:12:58.657080 containerd[1534]: time="2024-09-05T00:12:58.656870245Z" level=info msg="TearDown network for sandbox \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\" successfully" Sep 5 00:12:58.657080 containerd[1534]: time="2024-09-05T00:12:58.656891309Z" level=info msg="StopPodSandbox for \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\" returns successfully" Sep 5 00:12:58.657273 containerd[1534]: time="2024-09-05T00:12:58.657252688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5996bfb947-4hzkj,Uid:76079e94-a1fe-4931-9c5f-c490f8c55dd8,Namespace:calico-system,Attempt:1,}" Sep 5 00:12:58.658448 systemd[1]: run-netns-cni\x2db419cb48\x2d14e1\x2d0977\x2d6e39\x2d348b6b982e1a.mount: Deactivated successfully. Sep 5 00:12:58.659917 containerd[1534]: 2024-09-05 00:12:58.640 [INFO][4562] k8s.go 608: Cleaning up netns ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Sep 5 00:12:58.659917 containerd[1534]: 2024-09-05 00:12:58.640 [INFO][4562] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" iface="eth0" netns="/var/run/netns/cni-ef032c55-2921-a0e0-66dd-766bda48874b" Sep 5 00:12:58.659917 containerd[1534]: 2024-09-05 00:12:58.640 [INFO][4562] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" iface="eth0" netns="/var/run/netns/cni-ef032c55-2921-a0e0-66dd-766bda48874b" Sep 5 00:12:58.659917 containerd[1534]: 2024-09-05 00:12:58.640 [INFO][4562] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" iface="eth0" netns="/var/run/netns/cni-ef032c55-2921-a0e0-66dd-766bda48874b" Sep 5 00:12:58.659917 containerd[1534]: 2024-09-05 00:12:58.640 [INFO][4562] k8s.go 615: Releasing IP address(es) ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Sep 5 00:12:58.659917 containerd[1534]: 2024-09-05 00:12:58.640 [INFO][4562] utils.go 188: Calico CNI releasing IP address ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Sep 5 00:12:58.659917 containerd[1534]: 2024-09-05 00:12:58.651 [INFO][4597] ipam_plugin.go 417: Releasing address using handleID ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" HandleID="k8s-pod-network.e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0" Sep 5 00:12:58.659917 containerd[1534]: 2024-09-05 00:12:58.651 [INFO][4597] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 00:12:58.659917 containerd[1534]: 2024-09-05 00:12:58.654 [INFO][4597] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 00:12:58.659917 containerd[1534]: 2024-09-05 00:12:58.657 [WARNING][4597] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" HandleID="k8s-pod-network.e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0" Sep 5 00:12:58.659917 containerd[1534]: 2024-09-05 00:12:58.657 [INFO][4597] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" HandleID="k8s-pod-network.e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0" Sep 5 00:12:58.659917 containerd[1534]: 2024-09-05 00:12:58.658 [INFO][4597] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 00:12:58.659917 containerd[1534]: 2024-09-05 00:12:58.659 [INFO][4562] k8s.go 621: Teardown processing complete. ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Sep 5 00:12:58.660298 containerd[1534]: time="2024-09-05T00:12:58.660046693Z" level=info msg="TearDown network for sandbox \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\" successfully" Sep 5 00:12:58.660298 containerd[1534]: time="2024-09-05T00:12:58.660073320Z" level=info msg="StopPodSandbox for \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\" returns successfully" Sep 5 00:12:58.660558 containerd[1534]: time="2024-09-05T00:12:58.660527696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xr8b9,Uid:e7b785e3-e901-481a-96a4-597b35c1db1f,Namespace:kube-system,Attempt:1,}" Sep 5 00:12:58.661526 systemd[1]: run-netns-cni\x2def032c55\x2d2921\x2da0e0\x2d66dd\x2d766bda48874b.mount: Deactivated successfully. 
Sep 5 00:12:58.717483 systemd-networkd[1329]: calicccd767ed92: Link UP Sep 5 00:12:58.717599 systemd-networkd[1329]: calicccd767ed92: Gained carrier Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.676 [INFO][4630] utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.683 [INFO][4630] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0 calico-kube-controllers-5996bfb947- calico-system 76079e94-a1fe-4931-9c5f-c490f8c55dd8 695 0 2024-09-05 00:12:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5996bfb947 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4054.1.0-a-b942d58550 calico-kube-controllers-5996bfb947-4hzkj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicccd767ed92 [] []}} ContainerID="6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe" Namespace="calico-system" Pod="calico-kube-controllers-5996bfb947-4hzkj" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-" Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.683 [INFO][4630] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe" Namespace="calico-system" Pod="calico-kube-controllers-5996bfb947-4hzkj" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0" Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.697 [INFO][4675] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe" HandleID="k8s-pod-network.6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe" Workload="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0" Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.702 [INFO][4675] ipam_plugin.go 270: Auto assigning IP ContainerID="6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe" HandleID="k8s-pod-network.6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe" Workload="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000374bc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4054.1.0-a-b942d58550", "pod":"calico-kube-controllers-5996bfb947-4hzkj", "timestamp":"2024-09-05 00:12:58.697800287 +0000 UTC"}, Hostname:"ci-4054.1.0-a-b942d58550", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.702 [INFO][4675] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.702 [INFO][4675] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.702 [INFO][4675] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-b942d58550' Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.703 [INFO][4675] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.705 [INFO][4675] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.706 [INFO][4675] ipam.go 489: Trying affinity for 192.168.19.0/26 host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.707 [INFO][4675] ipam.go 155: Attempting to load block cidr=192.168.19.0/26 host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.708 [INFO][4675] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.0/26 host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.708 [INFO][4675] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.0/26 handle="k8s-pod-network.6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.709 [INFO][4675] ipam.go 1685: Creating new handle: k8s-pod-network.6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.711 [INFO][4675] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.0/26 handle="k8s-pod-network.6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.713 [INFO][4675] ipam.go 1216: Successfully claimed IPs: [192.168.19.1/26] block=192.168.19.0/26 handle="k8s-pod-network.6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.713 [INFO][4675] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.1/26] handle="k8s-pod-network.6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.713 [INFO][4675] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 5 00:12:58.723543 containerd[1534]: 2024-09-05 00:12:58.713 [INFO][4675] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.19.1/26] IPv6=[] ContainerID="6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe" HandleID="k8s-pod-network.6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe" Workload="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0" Sep 5 00:12:58.723961 containerd[1534]: 2024-09-05 00:12:58.713 [INFO][4630] k8s.go 386: Populated endpoint ContainerID="6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe" Namespace="calico-system" Pod="calico-kube-controllers-5996bfb947-4hzkj" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0", GenerateName:"calico-kube-controllers-5996bfb947-", Namespace:"calico-system", SelfLink:"", UID:"76079e94-a1fe-4931-9c5f-c490f8c55dd8", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 12, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5996bfb947", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"", Pod:"calico-kube-controllers-5996bfb947-4hzkj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicccd767ed92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:12:58.723961 containerd[1534]: 2024-09-05 00:12:58.713 [INFO][4630] k8s.go 387: Calico CNI using IPs: [192.168.19.1/32] ContainerID="6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe" Namespace="calico-system" Pod="calico-kube-controllers-5996bfb947-4hzkj" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0" Sep 5 00:12:58.723961 containerd[1534]: 2024-09-05 00:12:58.713 [INFO][4630] dataplane_linux.go 68: Setting the host side veth name to calicccd767ed92 ContainerID="6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe" Namespace="calico-system" Pod="calico-kube-controllers-5996bfb947-4hzkj" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0" Sep 5 00:12:58.723961 containerd[1534]: 2024-09-05 00:12:58.717 [INFO][4630] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe" Namespace="calico-system" Pod="calico-kube-controllers-5996bfb947-4hzkj" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0" Sep 5 00:12:58.723961 containerd[1534]: 2024-09-05 00:12:58.717 [INFO][4630] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe" Namespace="calico-system" Pod="calico-kube-controllers-5996bfb947-4hzkj" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0", GenerateName:"calico-kube-controllers-5996bfb947-", Namespace:"calico-system", SelfLink:"", UID:"76079e94-a1fe-4931-9c5f-c490f8c55dd8", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 12, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5996bfb947", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe", Pod:"calico-kube-controllers-5996bfb947-4hzkj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicccd767ed92", MAC:"1e:54:6e:b1:eb:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:12:58.723961 containerd[1534]: 2024-09-05 00:12:58.721 [INFO][4630] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe" Namespace="calico-system" Pod="calico-kube-controllers-5996bfb947-4hzkj" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0" Sep 5 00:12:58.729561 systemd-networkd[1329]: cali1cf3ad180ac: Link UP Sep 5 00:12:58.729702 systemd-networkd[1329]: cali1cf3ad180ac: Gained carrier Sep 5 00:12:58.734457 containerd[1534]: 2024-09-05 00:12:58.683 [INFO][4646] utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:12:58.734457 containerd[1534]: 2024-09-05 00:12:58.689 [INFO][4646] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0 coredns-76f75df574- kube-system e7b785e3-e901-481a-96a4-597b35c1db1f 694 0 2024-09-05 00:12:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4054.1.0-a-b942d58550 coredns-76f75df574-xr8b9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1cf3ad180ac [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230" Namespace="kube-system" Pod="coredns-76f75df574-xr8b9" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-" Sep 5 00:12:58.734457 containerd[1534]: 
2024-09-05 00:12:58.689 [INFO][4646] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230" Namespace="kube-system" Pod="coredns-76f75df574-xr8b9" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0" Sep 5 00:12:58.734457 containerd[1534]: 2024-09-05 00:12:58.702 [INFO][4685] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230" HandleID="k8s-pod-network.fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0" Sep 5 00:12:58.734457 containerd[1534]: 2024-09-05 00:12:58.706 [INFO][4685] ipam_plugin.go 270: Auto assigning IP ContainerID="fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230" HandleID="k8s-pod-network.fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000493720), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4054.1.0-a-b942d58550", "pod":"coredns-76f75df574-xr8b9", "timestamp":"2024-09-05 00:12:58.702483464 +0000 UTC"}, Hostname:"ci-4054.1.0-a-b942d58550", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:12:58.734457 containerd[1534]: 2024-09-05 00:12:58.706 [INFO][4685] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 00:12:58.734457 containerd[1534]: 2024-09-05 00:12:58.713 [INFO][4685] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 5 00:12:58.734457 containerd[1534]: 2024-09-05 00:12:58.713 [INFO][4685] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-b942d58550' Sep 5 00:12:58.734457 containerd[1534]: 2024-09-05 00:12:58.713 [INFO][4685] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:58.734457 containerd[1534]: 2024-09-05 00:12:58.715 [INFO][4685] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:58.734457 containerd[1534]: 2024-09-05 00:12:58.717 [INFO][4685] ipam.go 489: Trying affinity for 192.168.19.0/26 host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:58.734457 containerd[1534]: 2024-09-05 00:12:58.718 [INFO][4685] ipam.go 155: Attempting to load block cidr=192.168.19.0/26 host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:58.734457 containerd[1534]: 2024-09-05 00:12:58.719 [INFO][4685] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.0/26 host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:58.734457 containerd[1534]: 2024-09-05 00:12:58.720 [INFO][4685] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.0/26 handle="k8s-pod-network.fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:58.734457 containerd[1534]: 2024-09-05 00:12:58.721 [INFO][4685] ipam.go 1685: Creating new handle: k8s-pod-network.fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230 Sep 5 00:12:58.734457 containerd[1534]: 2024-09-05 00:12:58.724 [INFO][4685] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.0/26 handle="k8s-pod-network.fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:58.734457 containerd[1534]: 2024-09-05 00:12:58.727 [INFO][4685] ipam.go 1216: Successfully claimed IPs: [192.168.19.2/26] block=192.168.19.0/26 handle="k8s-pod-network.fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:58.734457 containerd[1534]: 2024-09-05 00:12:58.727 [INFO][4685] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.2/26] handle="k8s-pod-network.fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:58.734457 containerd[1534]: 2024-09-05 00:12:58.727 [INFO][4685] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 5 00:12:58.734457 containerd[1534]: 2024-09-05 00:12:58.727 [INFO][4685] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.19.2/26] IPv6=[] ContainerID="fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230" HandleID="k8s-pod-network.fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0" Sep 5 00:12:58.734888 containerd[1534]: 2024-09-05 00:12:58.728 [INFO][4646] k8s.go 386: Populated endpoint ContainerID="fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230" Namespace="kube-system" Pod="coredns-76f75df574-xr8b9" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e7b785e3-e901-481a-96a4-597b35c1db1f", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"", Pod:"coredns-76f75df574-xr8b9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cf3ad180ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:12:58.734888 containerd[1534]: 2024-09-05 00:12:58.728 [INFO][4646] k8s.go 387: Calico CNI using IPs: [192.168.19.2/32] ContainerID="fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230" Namespace="kube-system" Pod="coredns-76f75df574-xr8b9" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0" Sep 5 00:12:58.734888 containerd[1534]: 2024-09-05 00:12:58.728 [INFO][4646] dataplane_linux.go 68: Setting the host side veth name to cali1cf3ad180ac ContainerID="fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230" Namespace="kube-system" Pod="coredns-76f75df574-xr8b9" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0" Sep 5 00:12:58.734888 containerd[1534]: 2024-09-05 00:12:58.729 [INFO][4646] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230" Namespace="kube-system" Pod="coredns-76f75df574-xr8b9" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0" 
Sep 5 00:12:58.734888 containerd[1534]: 2024-09-05 00:12:58.729 [INFO][4646] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230" Namespace="kube-system" Pod="coredns-76f75df574-xr8b9" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e7b785e3-e901-481a-96a4-597b35c1db1f", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230", Pod:"coredns-76f75df574-xr8b9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cf3ad180ac", MAC:"b6:32:49:35:2e:6d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:12:58.735034 containerd[1534]: 2024-09-05 00:12:58.733 [INFO][4646] k8s.go 500: Wrote updated endpoint to datastore ContainerID="fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230" Namespace="kube-system" Pod="coredns-76f75df574-xr8b9" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0" Sep 5 00:12:58.735034 containerd[1534]: time="2024-09-05T00:12:58.734381379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:12:58.735034 containerd[1534]: time="2024-09-05T00:12:58.734610786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:12:58.735034 containerd[1534]: time="2024-09-05T00:12:58.734623953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:58.735034 containerd[1534]: time="2024-09-05T00:12:58.734714625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:58.743701 containerd[1534]: time="2024-09-05T00:12:58.743635208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:12:58.743701 containerd[1534]: time="2024-09-05T00:12:58.743660461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:12:58.743701 containerd[1534]: time="2024-09-05T00:12:58.743667360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:58.743814 containerd[1534]: time="2024-09-05T00:12:58.743706557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:58.754618 systemd[1]: Started cri-containerd-6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe.scope - libcontainer container 6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe. Sep 5 00:12:58.756488 systemd[1]: Started cri-containerd-fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230.scope - libcontainer container fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230. Sep 5 00:12:58.777872 containerd[1534]: time="2024-09-05T00:12:58.777845598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xr8b9,Uid:e7b785e3-e901-481a-96a4-597b35c1db1f,Namespace:kube-system,Attempt:1,} returns sandbox id \"fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230\"" Sep 5 00:12:58.778041 containerd[1534]: time="2024-09-05T00:12:58.778026017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5996bfb947-4hzkj,Uid:76079e94-a1fe-4931-9c5f-c490f8c55dd8,Namespace:calico-system,Attempt:1,} returns sandbox id \"6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe\"" Sep 5 00:12:58.778681 containerd[1534]: time="2024-09-05T00:12:58.778663684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 5 00:12:58.779413 containerd[1534]: time="2024-09-05T00:12:58.779383286Z" level=info msg="CreateContainer within sandbox \"fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:12:58.785176 containerd[1534]: time="2024-09-05T00:12:58.785148144Z" level=info msg="CreateContainer within sandbox \"fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"42605a3a14151c0804430a1635b28c252c8e933139ed578088b5a15efd604748\"" Sep 5 00:12:58.785484 containerd[1534]: time="2024-09-05T00:12:58.785469060Z" level=info msg="StartContainer for \"42605a3a14151c0804430a1635b28c252c8e933139ed578088b5a15efd604748\"" Sep 5 00:12:58.814416 systemd[1]: Started cri-containerd-42605a3a14151c0804430a1635b28c252c8e933139ed578088b5a15efd604748.scope - libcontainer container 42605a3a14151c0804430a1635b28c252c8e933139ed578088b5a15efd604748. 
Sep 5 00:12:58.825388 containerd[1534]: time="2024-09-05T00:12:58.825336816Z" level=info msg="StartContainer for \"42605a3a14151c0804430a1635b28c252c8e933139ed578088b5a15efd604748\" returns successfully" Sep 5 00:12:59.604603 containerd[1534]: time="2024-09-05T00:12:59.604574662Z" level=info msg="StopPodSandbox for \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\"" Sep 5 00:12:59.604765 containerd[1534]: time="2024-09-05T00:12:59.604574734Z" level=info msg="StopPodSandbox for \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\"" Sep 5 00:12:59.646890 containerd[1534]: 2024-09-05 00:12:59.628 [INFO][4937] k8s.go 608: Cleaning up netns ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Sep 5 00:12:59.646890 containerd[1534]: 2024-09-05 00:12:59.628 [INFO][4937] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" iface="eth0" netns="/var/run/netns/cni-0e94aaf7-279e-0f60-a00b-863633d473e4" Sep 5 00:12:59.646890 containerd[1534]: 2024-09-05 00:12:59.629 [INFO][4937] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" iface="eth0" netns="/var/run/netns/cni-0e94aaf7-279e-0f60-a00b-863633d473e4" Sep 5 00:12:59.646890 containerd[1534]: 2024-09-05 00:12:59.629 [INFO][4937] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" iface="eth0" netns="/var/run/netns/cni-0e94aaf7-279e-0f60-a00b-863633d473e4" Sep 5 00:12:59.646890 containerd[1534]: 2024-09-05 00:12:59.629 [INFO][4937] k8s.go 615: Releasing IP address(es) ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Sep 5 00:12:59.646890 containerd[1534]: 2024-09-05 00:12:59.629 [INFO][4937] utils.go 188: Calico CNI releasing IP address ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Sep 5 00:12:59.646890 containerd[1534]: 2024-09-05 00:12:59.640 [INFO][4969] ipam_plugin.go 417: Releasing address using handleID ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" HandleID="k8s-pod-network.02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Workload="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0" Sep 5 00:12:59.646890 containerd[1534]: 2024-09-05 00:12:59.640 [INFO][4969] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 00:12:59.646890 containerd[1534]: 2024-09-05 00:12:59.640 [INFO][4969] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 00:12:59.646890 containerd[1534]: 2024-09-05 00:12:59.644 [WARNING][4969] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" HandleID="k8s-pod-network.02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Workload="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0" Sep 5 00:12:59.646890 containerd[1534]: 2024-09-05 00:12:59.644 [INFO][4969] ipam_plugin.go 445: Releasing address using workloadID ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" HandleID="k8s-pod-network.02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Workload="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0" Sep 5 00:12:59.646890 containerd[1534]: 2024-09-05 00:12:59.645 [INFO][4969] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 00:12:59.646890 containerd[1534]: 2024-09-05 00:12:59.646 [INFO][4937] k8s.go 621: Teardown processing complete. ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Sep 5 00:12:59.647454 containerd[1534]: time="2024-09-05T00:12:59.646972608Z" level=info msg="TearDown network for sandbox \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\" successfully" Sep 5 00:12:59.647454 containerd[1534]: time="2024-09-05T00:12:59.646993121Z" level=info msg="StopPodSandbox for \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\" returns successfully" Sep 5 00:12:59.647723 containerd[1534]: time="2024-09-05T00:12:59.647705329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-26b8j,Uid:3109814d-a4ad-47dd-9273-9920f3f0d86d,Namespace:calico-system,Attempt:1,}" Sep 5 00:12:59.652179 containerd[1534]: 2024-09-05 00:12:59.629 [INFO][4938] k8s.go 608: Cleaning up netns ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Sep 5 00:12:59.652179 containerd[1534]: 2024-09-05 00:12:59.629 [INFO][4938] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" iface="eth0" netns="/var/run/netns/cni-dd4a9aff-cf60-4250-86e5-26c1b60790d2" Sep 5 00:12:59.652179 containerd[1534]: 2024-09-05 00:12:59.629 [INFO][4938] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" iface="eth0" netns="/var/run/netns/cni-dd4a9aff-cf60-4250-86e5-26c1b60790d2" Sep 5 00:12:59.652179 containerd[1534]: 2024-09-05 00:12:59.629 [INFO][4938] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" iface="eth0" netns="/var/run/netns/cni-dd4a9aff-cf60-4250-86e5-26c1b60790d2" Sep 5 00:12:59.652179 containerd[1534]: 2024-09-05 00:12:59.629 [INFO][4938] k8s.go 615: Releasing IP address(es) ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Sep 5 00:12:59.652179 containerd[1534]: 2024-09-05 00:12:59.629 [INFO][4938] utils.go 188: Calico CNI releasing IP address ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Sep 5 00:12:59.652179 containerd[1534]: 2024-09-05 00:12:59.640 [INFO][4970] ipam_plugin.go 417: Releasing address using handleID ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" HandleID="k8s-pod-network.46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0" Sep 5 00:12:59.652179 containerd[1534]: 2024-09-05 00:12:59.640 [INFO][4970] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 00:12:59.652179 containerd[1534]: 2024-09-05 00:12:59.645 [INFO][4970] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 00:12:59.652179 containerd[1534]: 2024-09-05 00:12:59.650 [WARNING][4970] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" HandleID="k8s-pod-network.46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0" Sep 5 00:12:59.652179 containerd[1534]: 2024-09-05 00:12:59.650 [INFO][4970] ipam_plugin.go 445: Releasing address using workloadID ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" HandleID="k8s-pod-network.46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0" Sep 5 00:12:59.652179 containerd[1534]: 2024-09-05 00:12:59.651 [INFO][4970] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 00:12:59.652179 containerd[1534]: 2024-09-05 00:12:59.651 [INFO][4938] k8s.go 621: Teardown processing complete. ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Sep 5 00:12:59.652438 containerd[1534]: time="2024-09-05T00:12:59.652232088Z" level=info msg="TearDown network for sandbox \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\" successfully" Sep 5 00:12:59.652438 containerd[1534]: time="2024-09-05T00:12:59.652245934Z" level=info msg="StopPodSandbox for \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\" returns successfully" Sep 5 00:12:59.652479 containerd[1534]: time="2024-09-05T00:12:59.652456950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-46mhs,Uid:b66a1d66-183c-460e-a689-dc92fc0f3c54,Namespace:kube-system,Attempt:1,}" Sep 5 00:12:59.662036 systemd[1]: run-netns-cni\x2d0e94aaf7\x2d279e\x2d0f60\x2da00b\x2d863633d473e4.mount: Deactivated successfully. Sep 5 00:12:59.662110 systemd[1]: run-netns-cni\x2ddd4a9aff\x2dcf60\x2d4250\x2d86e5\x2d26c1b60790d2.mount: Deactivated successfully. 
Sep 5 00:12:59.704027 systemd-networkd[1329]: calif8f59ecb13c: Link UP Sep 5 00:12:59.704168 systemd-networkd[1329]: calif8f59ecb13c: Gained carrier Sep 5 00:12:59.712999 kubelet[2919]: I0905 00:12:59.712702 2919 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xr8b9" podStartSLOduration=27.712665039 podStartE2EDuration="27.712665039s" podCreationTimestamp="2024-09-05 00:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-05 00:12:59.712416672 +0000 UTC m=+40.153248866" watchObservedRunningTime="2024-09-05 00:12:59.712665039 +0000 UTC m=+40.153497230" Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.662 [INFO][5003] utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.669 [INFO][5003] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0 csi-node-driver- calico-system 3109814d-a4ad-47dd-9273-9920f3f0d86d 710 0 2024-09-05 00:12:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4054.1.0-a-b942d58550 csi-node-driver-26b8j eth0 default [] [] [kns.calico-system ksa.calico-system.default] calif8f59ecb13c [] []}} ContainerID="8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0" Namespace="calico-system" Pod="csi-node-driver-26b8j" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-" Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.669 [INFO][5003] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0" Namespace="calico-system" Pod="csi-node-driver-26b8j" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0" Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.684 [INFO][5046] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0" HandleID="k8s-pod-network.8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0" Workload="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0" Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.689 [INFO][5046] ipam_plugin.go 270: Auto assigning IP ContainerID="8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0" HandleID="k8s-pod-network.8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0" Workload="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac0b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4054.1.0-a-b942d58550", "pod":"csi-node-driver-26b8j", "timestamp":"2024-09-05 00:12:59.6841512 +0000 UTC"}, Hostname:"ci-4054.1.0-a-b942d58550", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.689 [INFO][5046] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.689 [INFO][5046] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.689 [INFO][5046] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-b942d58550' Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.690 [INFO][5046] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.693 [INFO][5046] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.695 [INFO][5046] ipam.go 489: Trying affinity for 192.168.19.0/26 host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.696 [INFO][5046] ipam.go 155: Attempting to load block cidr=192.168.19.0/26 host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.697 [INFO][5046] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.0/26 host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.697 [INFO][5046] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.0/26 handle="k8s-pod-network.8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.698 [INFO][5046] ipam.go 1685: Creating new handle: k8s-pod-network.8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0 Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.700 [INFO][5046] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.0/26 handle="k8s-pod-network.8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.702 [INFO][5046] ipam.go 1216: Successfully claimed IPs: [192.168.19.3/26] block=192.168.19.0/26 handle="k8s-pod-network.8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.702 [INFO][5046] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.3/26] handle="k8s-pod-network.8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.702 [INFO][5046] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 5 00:12:59.717218 containerd[1534]: 2024-09-05 00:12:59.702 [INFO][5046] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.19.3/26] IPv6=[] ContainerID="8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0" HandleID="k8s-pod-network.8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0" Workload="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0" Sep 5 00:12:59.717662 containerd[1534]: 2024-09-05 00:12:59.703 [INFO][5003] k8s.go 386: Populated endpoint ContainerID="8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0" Namespace="calico-system" Pod="csi-node-driver-26b8j" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3109814d-a4ad-47dd-9273-9920f3f0d86d", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 12, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"", Pod:"csi-node-driver-26b8j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calif8f59ecb13c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:12:59.717662 containerd[1534]: 2024-09-05 00:12:59.703 [INFO][5003] k8s.go 387: Calico CNI using IPs: [192.168.19.3/32] ContainerID="8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0" Namespace="calico-system" Pod="csi-node-driver-26b8j" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0" Sep 5 00:12:59.717662 containerd[1534]: 2024-09-05 00:12:59.703 [INFO][5003] dataplane_linux.go 68: Setting the host side veth name to calif8f59ecb13c ContainerID="8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0" Namespace="calico-system" Pod="csi-node-driver-26b8j" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0" Sep 5 00:12:59.717662 containerd[1534]: 2024-09-05 00:12:59.704 [INFO][5003] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0" Namespace="calico-system" Pod="csi-node-driver-26b8j" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0" Sep 5 00:12:59.717662 containerd[1534]: 2024-09-05 00:12:59.704 [INFO][5003] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0" Namespace="calico-system" Pod="csi-node-driver-26b8j" 
WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3109814d-a4ad-47dd-9273-9920f3f0d86d", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 12, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0", Pod:"csi-node-driver-26b8j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calif8f59ecb13c", MAC:"fa:fa:c2:72:04:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:12:59.717662 containerd[1534]: 2024-09-05 00:12:59.715 [INFO][5003] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0" Namespace="calico-system" Pod="csi-node-driver-26b8j" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0" Sep 5 00:12:59.726061 systemd-networkd[1329]: cali151f970ee35: Link UP Sep 5 00:12:59.726176 systemd-networkd[1329]: cali151f970ee35: Gained carrier Sep 5 00:12:59.727877 containerd[1534]: time="2024-09-05T00:12:59.727807107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:12:59.727877 containerd[1534]: time="2024-09-05T00:12:59.727833642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:12:59.727877 containerd[1534]: time="2024-09-05T00:12:59.727840898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:59.727979 containerd[1534]: time="2024-09-05T00:12:59.727882835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.667 [INFO][5015] utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.673 [INFO][5015] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0 coredns-76f75df574- kube-system b66a1d66-183c-460e-a689-dc92fc0f3c54 711 0 2024-09-05 00:12:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4054.1.0-a-b942d58550 coredns-76f75df574-46mhs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali151f970ee35 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043" Namespace="kube-system" Pod="coredns-76f75df574-46mhs" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-" Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.673 [INFO][5015] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043" Namespace="kube-system" Pod="coredns-76f75df574-46mhs" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0" Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.688 [INFO][5050] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043" HandleID="k8s-pod-network.fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0" Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.693 [INFO][5050] ipam_plugin.go 270: Auto assigning IP ContainerID="fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043" HandleID="k8s-pod-network.fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000218e90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4054.1.0-a-b942d58550", "pod":"coredns-76f75df574-46mhs", "timestamp":"2024-09-05 00:12:59.688792623 +0000 UTC"}, Hostname:"ci-4054.1.0-a-b942d58550", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.693 [INFO][5050] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.702 [INFO][5050] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.702 [INFO][5050] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-b942d58550' Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.703 [INFO][5050] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.705 [INFO][5050] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.708 [INFO][5050] ipam.go 489: Trying affinity for 192.168.19.0/26 host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.716 [INFO][5050] ipam.go 155: Attempting to load block cidr=192.168.19.0/26 host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.718 [INFO][5050] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.0/26 host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.718 [INFO][5050] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.0/26 handle="k8s-pod-network.fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.719 [INFO][5050] ipam.go 1685: Creating new handle: k8s-pod-network.fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043 Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.721 [INFO][5050] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.0/26 handle="k8s-pod-network.fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.724 [INFO][5050] ipam.go 1216: Successfully claimed IPs: [192.168.19.4/26] block=192.168.19.0/26 handle="k8s-pod-network.fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.724 [INFO][5050] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.4/26] handle="k8s-pod-network.fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043" host="ci-4054.1.0-a-b942d58550" Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.724 [INFO][5050] ipam_plugin.go 379: Released host-wide IPAM lock. 
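The sequence just logged (try the node's affinity for 192.168.19.0/26, load the block, then assign one address) comes down to picking the first free host address in the affine /26, which is how 192.168.19.4 ends up being chosen. Below is a minimal Go sketch of that selection, assuming a plain in-memory set of already-used addresses rather than Calico's datastore-backed block.

    package main

    import (
        "fmt"
        "net"
    )

    // nextFreeIP returns the first address in the block that is not already
    // allocated; a toy version of "Attempting to assign 1 addresses from block".
    func nextFreeIP(block *net.IPNet, allocated map[string]bool) net.IP {
        for ip := block.IP.Mask(block.Mask); block.Contains(ip); ip = incr(ip) {
            if !allocated[ip.String()] {
                return ip
            }
        }
        return nil
    }

    // incr returns ip+1, carrying across bytes.
    func incr(ip net.IP) net.IP {
        next := make(net.IP, len(ip))
        copy(next, ip)
        for i := len(next) - 1; i >= 0; i-- {
            next[i]++
            if next[i] != 0 {
                break
            }
        }
        return next
    }

    func main() {
        _, block, _ := net.ParseCIDR("192.168.19.0/26")
        // Addresses already in use on this node per the surrounding log, plus
        // the block's network address, which the sketch simply treats as taken.
        allocated := map[string]bool{
            "192.168.19.0": true,
            "192.168.19.1": true,
            "192.168.19.2": true,
            "192.168.19.3": true,
        }
        fmt.Println("next free:", nextFreeIP(block, allocated)) // 192.168.19.4
    }

The real allocator also records the assignment under the handle seen in the log so the address can be released when the sandbox is torn down, as happens in the StopPodSandbox entries further below.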
Sep 5 00:12:59.731192 containerd[1534]: 2024-09-05 00:12:59.724 [INFO][5050] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.19.4/26] IPv6=[] ContainerID="fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043" HandleID="k8s-pod-network.fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0" Sep 5 00:12:59.731634 containerd[1534]: 2024-09-05 00:12:59.725 [INFO][5015] k8s.go 386: Populated endpoint ContainerID="fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043" Namespace="kube-system" Pod="coredns-76f75df574-46mhs" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b66a1d66-183c-460e-a689-dc92fc0f3c54", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"", Pod:"coredns-76f75df574-46mhs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali151f970ee35", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:12:59.731634 containerd[1534]: 2024-09-05 00:12:59.725 [INFO][5015] k8s.go 387: Calico CNI using IPs: [192.168.19.4/32] ContainerID="fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043" Namespace="kube-system" Pod="coredns-76f75df574-46mhs" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0" Sep 5 00:12:59.731634 containerd[1534]: 2024-09-05 00:12:59.725 [INFO][5015] dataplane_linux.go 68: Setting the host side veth name to cali151f970ee35 ContainerID="fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043" Namespace="kube-system" Pod="coredns-76f75df574-46mhs" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0" Sep 5 00:12:59.731634 containerd[1534]: 2024-09-05 00:12:59.726 [INFO][5015] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043" Namespace="kube-system" Pod="coredns-76f75df574-46mhs" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0" 
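In the endpoint spec above the CoreDNS ports are printed in hex (Port:0x35 twice and Port:0x23c1), which is just the hex rendering used in the struct dump: 0x35 is 53 and 0x23c1 is 9153, the usual DNS and metrics ports. A short Go check, with the names and protocols copied from the dump:

    package main

    import "fmt"

    func main() {
        // Ports exactly as listed in the WorkloadEndpointSpec above, in hex.
        ports := []struct {
            Name  string
            Proto string
            Port  int
        }{
            {"dns", "UDP", 0x35},
            {"dns-tcp", "TCP", 0x35},
            {"metrics", "TCP", 0x23c1},
        }
        for _, p := range ports {
            fmt.Printf("%-8s %-3s %d\n", p.Name, p.Proto, p.Port) // 53, 53, 9153
        }
    }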
Sep 5 00:12:59.731634 containerd[1534]: 2024-09-05 00:12:59.726 [INFO][5015] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043" Namespace="kube-system" Pod="coredns-76f75df574-46mhs" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b66a1d66-183c-460e-a689-dc92fc0f3c54", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043", Pod:"coredns-76f75df574-46mhs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali151f970ee35", MAC:"ae:a9:5d:25:8b:97", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:12:59.731766 containerd[1534]: 2024-09-05 00:12:59.730 [INFO][5015] k8s.go 500: Wrote updated endpoint to datastore ContainerID="fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043" Namespace="kube-system" Pod="coredns-76f75df574-46mhs" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0" Sep 5 00:12:59.741400 containerd[1534]: time="2024-09-05T00:12:59.741322162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:12:59.741400 containerd[1534]: time="2024-09-05T00:12:59.741354001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:12:59.741400 containerd[1534]: time="2024-09-05T00:12:59.741361524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:59.741496 containerd[1534]: time="2024-09-05T00:12:59.741404545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:12:59.752599 systemd[1]: Started cri-containerd-8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0.scope - libcontainer container 8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0. Sep 5 00:12:59.756609 systemd[1]: Started cri-containerd-fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043.scope - libcontainer container fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043. Sep 5 00:12:59.763146 containerd[1534]: time="2024-09-05T00:12:59.763127051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-26b8j,Uid:3109814d-a4ad-47dd-9273-9920f3f0d86d,Namespace:calico-system,Attempt:1,} returns sandbox id \"8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0\"" Sep 5 00:12:59.778851 containerd[1534]: time="2024-09-05T00:12:59.778825751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-46mhs,Uid:b66a1d66-183c-460e-a689-dc92fc0f3c54,Namespace:kube-system,Attempt:1,} returns sandbox id \"fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043\"" Sep 5 00:12:59.780051 containerd[1534]: time="2024-09-05T00:12:59.780034697Z" level=info msg="CreateContainer within sandbox \"fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:12:59.784504 containerd[1534]: time="2024-09-05T00:12:59.784462502Z" level=info msg="CreateContainer within sandbox \"fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"be30548c8adb6adf72c792072607750abffd2e7d9941d467cf61efe3f99d5fd9\"" Sep 5 00:12:59.784741 containerd[1534]: time="2024-09-05T00:12:59.784682574Z" level=info msg="StartContainer for \"be30548c8adb6adf72c792072607750abffd2e7d9941d467cf61efe3f99d5fd9\"" Sep 5 00:12:59.806479 systemd[1]: Started cri-containerd-be30548c8adb6adf72c792072607750abffd2e7d9941d467cf61efe3f99d5fd9.scope - libcontainer container be30548c8adb6adf72c792072607750abffd2e7d9941d467cf61efe3f99d5fd9. 
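The containerd entries interleaved here use a logfmt-style layout (time="...", level=..., msg="..." with inner quotes escaped), which makes lines like the RunPodSandbox results easy to pick apart when searching a boot log. Below is a small Go sketch that splits such a line into key/value pairs; the sample line is shortened from the entry above, and only the quoting style visible in the log is assumed.

    package main

    import (
        "fmt"
        "regexp"
    )

    // fieldRE matches key="quoted value" (with \" escapes) or key=bare-value
    // pairs, the layout used by the containerd lines in this log.
    var fieldRE = regexp.MustCompile(`(\w+)=("(?:[^"\\]|\\.)*"|\S+)`)

    func main() {
        line := `time="2024-09-05T00:12:59.763127051Z" level=info msg="RunPodSandbox returns sandbox id \"8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0\""`
        for _, m := range fieldRE.FindAllStringSubmatch(line, -1) {
            fmt.Printf("%-5s => %s\n", m[1], m[2])
        }
    }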
Sep 5 00:12:59.819980 containerd[1534]: time="2024-09-05T00:12:59.819958731Z" level=info msg="StartContainer for \"be30548c8adb6adf72c792072607750abffd2e7d9941d467cf61efe3f99d5fd9\" returns successfully" Sep 5 00:13:00.087558 systemd-networkd[1329]: calicccd767ed92: Gained IPv6LL Sep 5 00:13:00.663353 systemd-networkd[1329]: cali1cf3ad180ac: Gained IPv6LL Sep 5 00:13:00.713919 kubelet[2919]: I0905 00:13:00.713893 2919 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-46mhs" podStartSLOduration=28.713858291 podStartE2EDuration="28.713858291s" podCreationTimestamp="2024-09-05 00:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-05 00:13:00.713398787 +0000 UTC m=+41.154230978" watchObservedRunningTime="2024-09-05 00:13:00.713858291 +0000 UTC m=+41.154690479" Sep 5 00:13:00.865329 containerd[1534]: time="2024-09-05T00:13:00.865299902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:00.865609 containerd[1534]: time="2024-09-05T00:13:00.865461318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 5 00:13:00.866044 containerd[1534]: time="2024-09-05T00:13:00.866032618Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:00.866932 containerd[1534]: time="2024-09-05T00:13:00.866917496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:00.867388 containerd[1534]: time="2024-09-05T00:13:00.867373005Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.088690729s" Sep 5 00:13:00.867433 containerd[1534]: time="2024-09-05T00:13:00.867390764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 5 00:13:00.867712 containerd[1534]: time="2024-09-05T00:13:00.867701373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 5 00:13:00.870596 containerd[1534]: time="2024-09-05T00:13:00.870540203Z" level=info msg="CreateContainer within sandbox \"6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 00:13:00.876888 containerd[1534]: time="2024-09-05T00:13:00.876873214Z" level=info msg="CreateContainer within sandbox \"6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"62fb7c7b5c4ea65aa9dc25279379d68fbf62afede44bbd960b0f2527700f9398\"" Sep 5 00:13:00.877059 containerd[1534]: time="2024-09-05T00:13:00.877048024Z" level=info msg="StartContainer for 
\"62fb7c7b5c4ea65aa9dc25279379d68fbf62afede44bbd960b0f2527700f9398\"" Sep 5 00:13:00.905553 systemd[1]: Started cri-containerd-62fb7c7b5c4ea65aa9dc25279379d68fbf62afede44bbd960b0f2527700f9398.scope - libcontainer container 62fb7c7b5c4ea65aa9dc25279379d68fbf62afede44bbd960b0f2527700f9398. Sep 5 00:13:00.927874 containerd[1534]: time="2024-09-05T00:13:00.927818352Z" level=info msg="StartContainer for \"62fb7c7b5c4ea65aa9dc25279379d68fbf62afede44bbd960b0f2527700f9398\" returns successfully" Sep 5 00:13:01.367627 systemd-networkd[1329]: calif8f59ecb13c: Gained IPv6LL Sep 5 00:13:01.716207 kubelet[2919]: I0905 00:13:01.716113 2919 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5996bfb947-4hzkj" podStartSLOduration=21.626998251 podStartE2EDuration="23.716078798s" podCreationTimestamp="2024-09-05 00:12:38 +0000 UTC" firstStartedPulling="2024-09-05 00:12:58.778513014 +0000 UTC m=+39.219345208" lastFinishedPulling="2024-09-05 00:13:00.867593564 +0000 UTC m=+41.308425755" observedRunningTime="2024-09-05 00:13:01.715753322 +0000 UTC m=+42.156585515" watchObservedRunningTime="2024-09-05 00:13:01.716078798 +0000 UTC m=+42.156910988" Sep 5 00:13:01.751536 systemd-networkd[1329]: cali151f970ee35: Gained IPv6LL Sep 5 00:13:02.710404 kubelet[2919]: I0905 00:13:02.710387 2919 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:13:02.733183 containerd[1534]: time="2024-09-05T00:13:02.733160222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:02.733483 containerd[1534]: time="2024-09-05T00:13:02.733422555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 5 00:13:02.733834 containerd[1534]: time="2024-09-05T00:13:02.733792284Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:02.734840 containerd[1534]: time="2024-09-05T00:13:02.734800904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:02.735526 containerd[1534]: time="2024-09-05T00:13:02.735483591Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.86776572s" Sep 5 00:13:02.735526 containerd[1534]: time="2024-09-05T00:13:02.735500188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 5 00:13:02.736496 containerd[1534]: time="2024-09-05T00:13:02.736454411Z" level=info msg="CreateContainer within sandbox \"8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 00:13:02.743959 containerd[1534]: time="2024-09-05T00:13:02.743915397Z" level=info msg="CreateContainer within sandbox \"8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"0b1c201ecde2411a3c2ac463465b375f515ac8f391d7a85c0c08ad9be7341b7b\"" Sep 5 00:13:02.744236 containerd[1534]: time="2024-09-05T00:13:02.744204562Z" level=info msg="StartContainer for \"0b1c201ecde2411a3c2ac463465b375f515ac8f391d7a85c0c08ad9be7341b7b\"" Sep 5 00:13:02.768591 systemd[1]: Started cri-containerd-0b1c201ecde2411a3c2ac463465b375f515ac8f391d7a85c0c08ad9be7341b7b.scope - libcontainer container 0b1c201ecde2411a3c2ac463465b375f515ac8f391d7a85c0c08ad9be7341b7b. Sep 5 00:13:02.781139 containerd[1534]: time="2024-09-05T00:13:02.781113534Z" level=info msg="StartContainer for \"0b1c201ecde2411a3c2ac463465b375f515ac8f391d7a85c0c08ad9be7341b7b\" returns successfully" Sep 5 00:13:02.781736 containerd[1534]: time="2024-09-05T00:13:02.781720582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 5 00:13:04.397670 containerd[1534]: time="2024-09-05T00:13:04.397613437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:04.397915 containerd[1534]: time="2024-09-05T00:13:04.397790731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 5 00:13:04.398116 containerd[1534]: time="2024-09-05T00:13:04.398080170Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:04.399186 containerd[1534]: time="2024-09-05T00:13:04.399144495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:04.399557 containerd[1534]: time="2024-09-05T00:13:04.399515172Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.617773656s" Sep 5 00:13:04.399557 containerd[1534]: time="2024-09-05T00:13:04.399532677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 5 00:13:04.400498 containerd[1534]: time="2024-09-05T00:13:04.400449971Z" level=info msg="CreateContainer within sandbox \"8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 00:13:04.407084 containerd[1534]: time="2024-09-05T00:13:04.407041284Z" level=info msg="CreateContainer within sandbox \"8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ee08b7fcfbe80bc40c1cdaabc28efe2917caa6459b4f84ecbd0eef31e67f5bc6\"" Sep 5 00:13:04.407268 containerd[1534]: time="2024-09-05T00:13:04.407251733Z" level=info msg="StartContainer for \"ee08b7fcfbe80bc40c1cdaabc28efe2917caa6459b4f84ecbd0eef31e67f5bc6\"" Sep 5 00:13:04.431461 systemd[1]: Started cri-containerd-ee08b7fcfbe80bc40c1cdaabc28efe2917caa6459b4f84ecbd0eef31e67f5bc6.scope - libcontainer container 
ee08b7fcfbe80bc40c1cdaabc28efe2917caa6459b4f84ecbd0eef31e67f5bc6. Sep 5 00:13:04.443765 containerd[1534]: time="2024-09-05T00:13:04.443717711Z" level=info msg="StartContainer for \"ee08b7fcfbe80bc40c1cdaabc28efe2917caa6459b4f84ecbd0eef31e67f5bc6\" returns successfully" Sep 5 00:13:04.642667 kubelet[2919]: I0905 00:13:04.642559 2919 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 00:13:04.642667 kubelet[2919]: I0905 00:13:04.642639 2919 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 00:13:04.741663 kubelet[2919]: I0905 00:13:04.741471 2919 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-26b8j" podStartSLOduration=23.105273851 podStartE2EDuration="27.741323125s" podCreationTimestamp="2024-09-05 00:12:37 +0000 UTC" firstStartedPulling="2024-09-05 00:12:59.763634761 +0000 UTC m=+40.204466952" lastFinishedPulling="2024-09-05 00:13:04.399684034 +0000 UTC m=+44.840516226" observedRunningTime="2024-09-05 00:13:04.739629151 +0000 UTC m=+45.180461443" watchObservedRunningTime="2024-09-05 00:13:04.741323125 +0000 UTC m=+45.182155383" Sep 5 00:13:05.464541 kubelet[2919]: I0905 00:13:05.464424 2919 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:13:05.955337 kernel: bpftool[5673]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 5 00:13:06.008952 kubelet[2919]: I0905 00:13:06.008931 2919 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:13:06.112434 systemd-networkd[1329]: vxlan.calico: Link UP Sep 5 00:13:06.112438 systemd-networkd[1329]: vxlan.calico: Gained carrier Sep 5 00:13:07.959580 systemd-networkd[1329]: vxlan.calico: Gained IPv6LL Sep 5 00:13:11.872002 kubelet[2919]: I0905 00:13:11.871925 2919 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:13:14.137553 systemd[1]: Started sshd@10-147.28.180.221:22-123.58.218.88:37078.service - OpenSSH per-connection server daemon (123.58.218.88:37078). Sep 5 00:13:15.608337 sshd[5951]: Received disconnect from 123.58.218.88 port 37078:11: Bye Bye [preauth] Sep 5 00:13:15.608337 sshd[5951]: Disconnected from authenticating user root 123.58.218.88 port 37078 [preauth] Sep 5 00:13:15.611880 systemd[1]: sshd@10-147.28.180.221:22-123.58.218.88:37078.service: Deactivated successfully. Sep 5 00:13:19.601392 containerd[1534]: time="2024-09-05T00:13:19.601333759Z" level=info msg="StopPodSandbox for \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\"" Sep 5 00:13:19.641227 containerd[1534]: 2024-09-05 00:13:19.621 [WARNING][5982] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0", GenerateName:"calico-kube-controllers-5996bfb947-", Namespace:"calico-system", SelfLink:"", UID:"76079e94-a1fe-4931-9c5f-c490f8c55dd8", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 12, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5996bfb947", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe", Pod:"calico-kube-controllers-5996bfb947-4hzkj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicccd767ed92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:13:19.641227 containerd[1534]: 2024-09-05 00:13:19.621 [INFO][5982] k8s.go 608: Cleaning up netns ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Sep 5 00:13:19.641227 containerd[1534]: 2024-09-05 00:13:19.621 [INFO][5982] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" iface="eth0" netns="" Sep 5 00:13:19.641227 containerd[1534]: 2024-09-05 00:13:19.621 [INFO][5982] k8s.go 615: Releasing IP address(es) ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Sep 5 00:13:19.641227 containerd[1534]: 2024-09-05 00:13:19.621 [INFO][5982] utils.go 188: Calico CNI releasing IP address ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Sep 5 00:13:19.641227 containerd[1534]: 2024-09-05 00:13:19.633 [INFO][6001] ipam_plugin.go 417: Releasing address using handleID ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" HandleID="k8s-pod-network.3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Workload="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0" Sep 5 00:13:19.641227 containerd[1534]: 2024-09-05 00:13:19.633 [INFO][6001] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 00:13:19.641227 containerd[1534]: 2024-09-05 00:13:19.633 [INFO][6001] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 00:13:19.641227 containerd[1534]: 2024-09-05 00:13:19.638 [WARNING][6001] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" HandleID="k8s-pod-network.3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Workload="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0" Sep 5 00:13:19.641227 containerd[1534]: 2024-09-05 00:13:19.638 [INFO][6001] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" HandleID="k8s-pod-network.3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Workload="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0" Sep 5 00:13:19.641227 containerd[1534]: 2024-09-05 00:13:19.639 [INFO][6001] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 00:13:19.641227 containerd[1534]: 2024-09-05 00:13:19.640 [INFO][5982] k8s.go 621: Teardown processing complete. ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Sep 5 00:13:19.641227 containerd[1534]: time="2024-09-05T00:13:19.641223846Z" level=info msg="TearDown network for sandbox \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\" successfully" Sep 5 00:13:19.641642 containerd[1534]: time="2024-09-05T00:13:19.641242267Z" level=info msg="StopPodSandbox for \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\" returns successfully" Sep 5 00:13:19.641642 containerd[1534]: time="2024-09-05T00:13:19.641519249Z" level=info msg="RemovePodSandbox for \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\"" Sep 5 00:13:19.641642 containerd[1534]: time="2024-09-05T00:13:19.641544008Z" level=info msg="Forcibly stopping sandbox \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\"" Sep 5 00:13:19.692025 containerd[1534]: 2024-09-05 00:13:19.667 [WARNING][6032] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0", GenerateName:"calico-kube-controllers-5996bfb947-", Namespace:"calico-system", SelfLink:"", UID:"76079e94-a1fe-4931-9c5f-c490f8c55dd8", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 12, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5996bfb947", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"6db08ecbcab381bdcfe33687f4e9169e1277e0f75cc1a18674d0e9324b7ffabe", Pod:"calico-kube-controllers-5996bfb947-4hzkj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicccd767ed92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:13:19.692025 containerd[1534]: 2024-09-05 00:13:19.667 [INFO][6032] k8s.go 608: Cleaning up netns ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Sep 5 00:13:19.692025 containerd[1534]: 2024-09-05 00:13:19.667 [INFO][6032] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" iface="eth0" netns="" Sep 5 00:13:19.692025 containerd[1534]: 2024-09-05 00:13:19.667 [INFO][6032] k8s.go 615: Releasing IP address(es) ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Sep 5 00:13:19.692025 containerd[1534]: 2024-09-05 00:13:19.667 [INFO][6032] utils.go 188: Calico CNI releasing IP address ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Sep 5 00:13:19.692025 containerd[1534]: 2024-09-05 00:13:19.683 [INFO][6051] ipam_plugin.go 417: Releasing address using handleID ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" HandleID="k8s-pod-network.3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Workload="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0" Sep 5 00:13:19.692025 containerd[1534]: 2024-09-05 00:13:19.684 [INFO][6051] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 00:13:19.692025 containerd[1534]: 2024-09-05 00:13:19.684 [INFO][6051] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 00:13:19.692025 containerd[1534]: 2024-09-05 00:13:19.689 [WARNING][6051] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" HandleID="k8s-pod-network.3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Workload="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0" Sep 5 00:13:19.692025 containerd[1534]: 2024-09-05 00:13:19.689 [INFO][6051] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" HandleID="k8s-pod-network.3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Workload="ci--4054.1.0--a--b942d58550-k8s-calico--kube--controllers--5996bfb947--4hzkj-eth0" Sep 5 00:13:19.692025 containerd[1534]: 2024-09-05 00:13:19.690 [INFO][6051] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 00:13:19.692025 containerd[1534]: 2024-09-05 00:13:19.691 [INFO][6032] k8s.go 621: Teardown processing complete. ContainerID="3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655" Sep 5 00:13:19.692772 containerd[1534]: time="2024-09-05T00:13:19.692064075Z" level=info msg="TearDown network for sandbox \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\" successfully" Sep 5 00:13:19.693911 containerd[1534]: time="2024-09-05T00:13:19.693897681Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:13:19.693957 containerd[1534]: time="2024-09-05T00:13:19.693934789Z" level=info msg="RemovePodSandbox \"3de08cf79200226f4b43f52dcf68e61e8c423c2dfa37132dca335d2c4f60d655\" returns successfully" Sep 5 00:13:19.694334 containerd[1534]: time="2024-09-05T00:13:19.694252805Z" level=info msg="StopPodSandbox for \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\"" Sep 5 00:13:19.729797 containerd[1534]: 2024-09-05 00:13:19.713 [WARNING][6082] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b66a1d66-183c-460e-a689-dc92fc0f3c54", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043", Pod:"coredns-76f75df574-46mhs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali151f970ee35", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:13:19.729797 containerd[1534]: 2024-09-05 00:13:19.713 [INFO][6082] k8s.go 608: Cleaning up netns ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Sep 5 00:13:19.729797 containerd[1534]: 2024-09-05 00:13:19.713 [INFO][6082] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" iface="eth0" netns="" Sep 5 00:13:19.729797 containerd[1534]: 2024-09-05 00:13:19.713 [INFO][6082] k8s.go 615: Releasing IP address(es) ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Sep 5 00:13:19.729797 containerd[1534]: 2024-09-05 00:13:19.713 [INFO][6082] utils.go 188: Calico CNI releasing IP address ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Sep 5 00:13:19.729797 containerd[1534]: 2024-09-05 00:13:19.723 [INFO][6096] ipam_plugin.go 417: Releasing address using handleID ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" HandleID="k8s-pod-network.46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0" Sep 5 00:13:19.729797 containerd[1534]: 2024-09-05 00:13:19.723 [INFO][6096] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 00:13:19.729797 containerd[1534]: 2024-09-05 00:13:19.723 [INFO][6096] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 00:13:19.729797 containerd[1534]: 2024-09-05 00:13:19.727 [WARNING][6096] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" HandleID="k8s-pod-network.46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0" Sep 5 00:13:19.729797 containerd[1534]: 2024-09-05 00:13:19.727 [INFO][6096] ipam_plugin.go 445: Releasing address using workloadID ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" HandleID="k8s-pod-network.46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0" Sep 5 00:13:19.729797 containerd[1534]: 2024-09-05 00:13:19.728 [INFO][6096] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 00:13:19.729797 containerd[1534]: 2024-09-05 00:13:19.729 [INFO][6082] k8s.go 621: Teardown processing complete. ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Sep 5 00:13:19.730090 containerd[1534]: time="2024-09-05T00:13:19.729808502Z" level=info msg="TearDown network for sandbox \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\" successfully" Sep 5 00:13:19.730090 containerd[1534]: time="2024-09-05T00:13:19.729826083Z" level=info msg="StopPodSandbox for \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\" returns successfully" Sep 5 00:13:19.730125 containerd[1534]: time="2024-09-05T00:13:19.730105995Z" level=info msg="RemovePodSandbox for \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\"" Sep 5 00:13:19.730125 containerd[1534]: time="2024-09-05T00:13:19.730123272Z" level=info msg="Forcibly stopping sandbox \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\"" Sep 5 00:13:19.766943 containerd[1534]: 2024-09-05 00:13:19.749 [WARNING][6126] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b66a1d66-183c-460e-a689-dc92fc0f3c54", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"fddcd4e64afce49772fcea710d3443a875cc96325651fe000eef8b87b2858043", Pod:"coredns-76f75df574-46mhs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali151f970ee35", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:13:19.766943 containerd[1534]: 2024-09-05 00:13:19.749 [INFO][6126] k8s.go 608: Cleaning up netns ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Sep 5 00:13:19.766943 containerd[1534]: 2024-09-05 00:13:19.749 [INFO][6126] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" iface="eth0" netns="" Sep 5 00:13:19.766943 containerd[1534]: 2024-09-05 00:13:19.749 [INFO][6126] k8s.go 615: Releasing IP address(es) ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Sep 5 00:13:19.766943 containerd[1534]: 2024-09-05 00:13:19.749 [INFO][6126] utils.go 188: Calico CNI releasing IP address ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Sep 5 00:13:19.766943 containerd[1534]: 2024-09-05 00:13:19.760 [INFO][6142] ipam_plugin.go 417: Releasing address using handleID ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" HandleID="k8s-pod-network.46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0" Sep 5 00:13:19.766943 containerd[1534]: 2024-09-05 00:13:19.760 [INFO][6142] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 00:13:19.766943 containerd[1534]: 2024-09-05 00:13:19.760 [INFO][6142] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 00:13:19.766943 containerd[1534]: 2024-09-05 00:13:19.764 [WARNING][6142] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" HandleID="k8s-pod-network.46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0" Sep 5 00:13:19.766943 containerd[1534]: 2024-09-05 00:13:19.764 [INFO][6142] ipam_plugin.go 445: Releasing address using workloadID ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" HandleID="k8s-pod-network.46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--46mhs-eth0" Sep 5 00:13:19.766943 containerd[1534]: 2024-09-05 00:13:19.765 [INFO][6142] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 00:13:19.766943 containerd[1534]: 2024-09-05 00:13:19.766 [INFO][6126] k8s.go 621: Teardown processing complete. ContainerID="46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4" Sep 5 00:13:19.766943 containerd[1534]: time="2024-09-05T00:13:19.766935921Z" level=info msg="TearDown network for sandbox \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\" successfully" Sep 5 00:13:19.768194 containerd[1534]: time="2024-09-05T00:13:19.768158284Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:13:19.768194 containerd[1534]: time="2024-09-05T00:13:19.768178345Z" level=info msg="RemovePodSandbox \"46f29ff829aa20a18cd6a4acace3ee1e4cf621fa7fff99512b929fa08dc824d4\" returns successfully" Sep 5 00:13:19.768415 containerd[1534]: time="2024-09-05T00:13:19.768376005Z" level=info msg="StopPodSandbox for \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\"" Sep 5 00:13:19.802430 containerd[1534]: 2024-09-05 00:13:19.785 [WARNING][6174] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3109814d-a4ad-47dd-9273-9920f3f0d86d", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 12, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0", Pod:"csi-node-driver-26b8j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calif8f59ecb13c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:13:19.802430 containerd[1534]: 2024-09-05 00:13:19.785 [INFO][6174] k8s.go 608: Cleaning up netns ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Sep 5 00:13:19.802430 containerd[1534]: 2024-09-05 00:13:19.785 [INFO][6174] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" iface="eth0" netns="" Sep 5 00:13:19.802430 containerd[1534]: 2024-09-05 00:13:19.785 [INFO][6174] k8s.go 615: Releasing IP address(es) ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Sep 5 00:13:19.802430 containerd[1534]: 2024-09-05 00:13:19.785 [INFO][6174] utils.go 188: Calico CNI releasing IP address ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Sep 5 00:13:19.802430 containerd[1534]: 2024-09-05 00:13:19.796 [INFO][6191] ipam_plugin.go 417: Releasing address using handleID ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" HandleID="k8s-pod-network.02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Workload="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0" Sep 5 00:13:19.802430 containerd[1534]: 2024-09-05 00:13:19.796 [INFO][6191] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 00:13:19.802430 containerd[1534]: 2024-09-05 00:13:19.796 [INFO][6191] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 00:13:19.802430 containerd[1534]: 2024-09-05 00:13:19.800 [WARNING][6191] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" HandleID="k8s-pod-network.02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Workload="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0" Sep 5 00:13:19.802430 containerd[1534]: 2024-09-05 00:13:19.800 [INFO][6191] ipam_plugin.go 445: Releasing address using workloadID ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" HandleID="k8s-pod-network.02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Workload="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0" Sep 5 00:13:19.802430 containerd[1534]: 2024-09-05 00:13:19.801 [INFO][6191] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 00:13:19.802430 containerd[1534]: 2024-09-05 00:13:19.801 [INFO][6174] k8s.go 621: Teardown processing complete. ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Sep 5 00:13:19.802792 containerd[1534]: time="2024-09-05T00:13:19.802451838Z" level=info msg="TearDown network for sandbox \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\" successfully" Sep 5 00:13:19.802792 containerd[1534]: time="2024-09-05T00:13:19.802468107Z" level=info msg="StopPodSandbox for \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\" returns successfully" Sep 5 00:13:19.802792 containerd[1534]: time="2024-09-05T00:13:19.802771896Z" level=info msg="RemovePodSandbox for \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\"" Sep 5 00:13:19.802892 containerd[1534]: time="2024-09-05T00:13:19.802794544Z" level=info msg="Forcibly stopping sandbox \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\"" Sep 5 00:13:19.847502 containerd[1534]: 2024-09-05 00:13:19.825 [WARNING][6220] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3109814d-a4ad-47dd-9273-9920f3f0d86d", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 12, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"8635671a197ad8aaf803ed33d2fd6960d7e428433213df041f92693bb45a8af0", Pod:"csi-node-driver-26b8j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calif8f59ecb13c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:13:19.847502 containerd[1534]: 2024-09-05 00:13:19.825 [INFO][6220] k8s.go 608: Cleaning up netns ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Sep 5 00:13:19.847502 containerd[1534]: 2024-09-05 00:13:19.825 [INFO][6220] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" iface="eth0" netns="" Sep 5 00:13:19.847502 containerd[1534]: 2024-09-05 00:13:19.825 [INFO][6220] k8s.go 615: Releasing IP address(es) ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Sep 5 00:13:19.847502 containerd[1534]: 2024-09-05 00:13:19.825 [INFO][6220] utils.go 188: Calico CNI releasing IP address ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Sep 5 00:13:19.847502 containerd[1534]: 2024-09-05 00:13:19.839 [INFO][6235] ipam_plugin.go 417: Releasing address using handleID ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" HandleID="k8s-pod-network.02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Workload="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0" Sep 5 00:13:19.847502 containerd[1534]: 2024-09-05 00:13:19.839 [INFO][6235] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 00:13:19.847502 containerd[1534]: 2024-09-05 00:13:19.839 [INFO][6235] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 00:13:19.847502 containerd[1534]: 2024-09-05 00:13:19.844 [WARNING][6235] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" HandleID="k8s-pod-network.02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Workload="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0" Sep 5 00:13:19.847502 containerd[1534]: 2024-09-05 00:13:19.844 [INFO][6235] ipam_plugin.go 445: Releasing address using workloadID ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" HandleID="k8s-pod-network.02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Workload="ci--4054.1.0--a--b942d58550-k8s-csi--node--driver--26b8j-eth0" Sep 5 00:13:19.847502 containerd[1534]: 2024-09-05 00:13:19.845 [INFO][6235] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 00:13:19.847502 containerd[1534]: 2024-09-05 00:13:19.846 [INFO][6220] k8s.go 621: Teardown processing complete. ContainerID="02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8" Sep 5 00:13:19.847502 containerd[1534]: time="2024-09-05T00:13:19.847481211Z" level=info msg="TearDown network for sandbox \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\" successfully" Sep 5 00:13:19.849159 containerd[1534]: time="2024-09-05T00:13:19.849115724Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:13:19.849159 containerd[1534]: time="2024-09-05T00:13:19.849146672Z" level=info msg="RemovePodSandbox \"02c52999cc86bab163eb9c186f3667d8945485176284ac09137095bf86d569b8\" returns successfully" Sep 5 00:13:19.849426 containerd[1534]: time="2024-09-05T00:13:19.849382609Z" level=info msg="StopPodSandbox for \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\"" Sep 5 00:13:19.883785 containerd[1534]: 2024-09-05 00:13:19.867 [WARNING][6266] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e7b785e3-e901-481a-96a4-597b35c1db1f", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230", Pod:"coredns-76f75df574-xr8b9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cf3ad180ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:13:19.883785 containerd[1534]: 2024-09-05 00:13:19.867 [INFO][6266] k8s.go 608: Cleaning up netns ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Sep 5 00:13:19.883785 containerd[1534]: 2024-09-05 00:13:19.867 [INFO][6266] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" iface="eth0" netns="" Sep 5 00:13:19.883785 containerd[1534]: 2024-09-05 00:13:19.867 [INFO][6266] k8s.go 615: Releasing IP address(es) ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Sep 5 00:13:19.883785 containerd[1534]: 2024-09-05 00:13:19.867 [INFO][6266] utils.go 188: Calico CNI releasing IP address ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Sep 5 00:13:19.883785 containerd[1534]: 2024-09-05 00:13:19.877 [INFO][6283] ipam_plugin.go 417: Releasing address using handleID ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" HandleID="k8s-pod-network.e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0" Sep 5 00:13:19.883785 containerd[1534]: 2024-09-05 00:13:19.877 [INFO][6283] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 00:13:19.883785 containerd[1534]: 2024-09-05 00:13:19.877 [INFO][6283] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 00:13:19.883785 containerd[1534]: 2024-09-05 00:13:19.881 [WARNING][6283] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" HandleID="k8s-pod-network.e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0" Sep 5 00:13:19.883785 containerd[1534]: 2024-09-05 00:13:19.881 [INFO][6283] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" HandleID="k8s-pod-network.e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0" Sep 5 00:13:19.883785 containerd[1534]: 2024-09-05 00:13:19.882 [INFO][6283] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 00:13:19.883785 containerd[1534]: 2024-09-05 00:13:19.883 [INFO][6266] k8s.go 621: Teardown processing complete. ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Sep 5 00:13:19.883785 containerd[1534]: time="2024-09-05T00:13:19.883749615Z" level=info msg="TearDown network for sandbox \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\" successfully" Sep 5 00:13:19.883785 containerd[1534]: time="2024-09-05T00:13:19.883766251Z" level=info msg="StopPodSandbox for \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\" returns successfully" Sep 5 00:13:19.884232 containerd[1534]: time="2024-09-05T00:13:19.884044318Z" level=info msg="RemovePodSandbox for \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\"" Sep 5 00:13:19.884232 containerd[1534]: time="2024-09-05T00:13:19.884064462Z" level=info msg="Forcibly stopping sandbox \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\"" Sep 5 00:13:19.920111 containerd[1534]: 2024-09-05 00:13:19.901 [WARNING][6313] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e7b785e3-e901-481a-96a4-597b35c1db1f", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"fa5ff91af375b08ac6a4b6c781ef83b29da84f32993a23b4344e33050057e230", Pod:"coredns-76f75df574-xr8b9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cf3ad180ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:13:19.920111 containerd[1534]: 2024-09-05 00:13:19.902 [INFO][6313] k8s.go 608: Cleaning up netns ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Sep 5 00:13:19.920111 containerd[1534]: 2024-09-05 00:13:19.902 [INFO][6313] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" iface="eth0" netns="" Sep 5 00:13:19.920111 containerd[1534]: 2024-09-05 00:13:19.902 [INFO][6313] k8s.go 615: Releasing IP address(es) ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Sep 5 00:13:19.920111 containerd[1534]: 2024-09-05 00:13:19.902 [INFO][6313] utils.go 188: Calico CNI releasing IP address ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Sep 5 00:13:19.920111 containerd[1534]: 2024-09-05 00:13:19.912 [INFO][6330] ipam_plugin.go 417: Releasing address using handleID ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" HandleID="k8s-pod-network.e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0" Sep 5 00:13:19.920111 containerd[1534]: 2024-09-05 00:13:19.913 [INFO][6330] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 00:13:19.920111 containerd[1534]: 2024-09-05 00:13:19.913 [INFO][6330] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 00:13:19.920111 containerd[1534]: 2024-09-05 00:13:19.917 [WARNING][6330] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" HandleID="k8s-pod-network.e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0" Sep 5 00:13:19.920111 containerd[1534]: 2024-09-05 00:13:19.917 [INFO][6330] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" HandleID="k8s-pod-network.e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Workload="ci--4054.1.0--a--b942d58550-k8s-coredns--76f75df574--xr8b9-eth0" Sep 5 00:13:19.920111 containerd[1534]: 2024-09-05 00:13:19.918 [INFO][6330] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 00:13:19.920111 containerd[1534]: 2024-09-05 00:13:19.919 [INFO][6313] k8s.go 621: Teardown processing complete. ContainerID="e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3" Sep 5 00:13:19.920486 containerd[1534]: time="2024-09-05T00:13:19.920137919Z" level=info msg="TearDown network for sandbox \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\" successfully" Sep 5 00:13:19.922297 containerd[1534]: time="2024-09-05T00:13:19.922271789Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:13:19.922346 containerd[1534]: time="2024-09-05T00:13:19.922318056Z" level=info msg="RemovePodSandbox \"e86155274af296307d08daa74eeedb577acc805c6fbcae7f31a867d33f15fcb3\" returns successfully" Sep 5 00:13:38.140144 kubelet[2919]: I0905 00:13:38.140099 2919 topology_manager.go:215] "Topology Admit Handler" podUID="206b4d7f-9eb8-4597-a798-00b64629f99e" podNamespace="calico-apiserver" podName="calico-apiserver-7458894c5c-kz9g6" Sep 5 00:13:38.143152 kubelet[2919]: I0905 00:13:38.143119 2919 topology_manager.go:215] "Topology Admit Handler" podUID="aeccb52f-2274-4fc4-af1f-ba785b968b42" podNamespace="calico-apiserver" podName="calico-apiserver-7458894c5c-tpn2s" Sep 5 00:13:38.148635 systemd[1]: Created slice kubepods-besteffort-pod206b4d7f_9eb8_4597_a798_00b64629f99e.slice - libcontainer container kubepods-besteffort-pod206b4d7f_9eb8_4597_a798_00b64629f99e.slice. Sep 5 00:13:38.153220 systemd[1]: Created slice kubepods-besteffort-podaeccb52f_2274_4fc4_af1f_ba785b968b42.slice - libcontainer container kubepods-besteffort-podaeccb52f_2274_4fc4_af1f_ba785b968b42.slice. 
Sep 5 00:13:38.212578 kubelet[2919]: I0905 00:13:38.212475 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd24l\" (UniqueName: \"kubernetes.io/projected/206b4d7f-9eb8-4597-a798-00b64629f99e-kube-api-access-vd24l\") pod \"calico-apiserver-7458894c5c-kz9g6\" (UID: \"206b4d7f-9eb8-4597-a798-00b64629f99e\") " pod="calico-apiserver/calico-apiserver-7458894c5c-kz9g6" Sep 5 00:13:38.212832 kubelet[2919]: I0905 00:13:38.212605 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aeccb52f-2274-4fc4-af1f-ba785b968b42-calico-apiserver-certs\") pod \"calico-apiserver-7458894c5c-tpn2s\" (UID: \"aeccb52f-2274-4fc4-af1f-ba785b968b42\") " pod="calico-apiserver/calico-apiserver-7458894c5c-tpn2s" Sep 5 00:13:38.212953 kubelet[2919]: I0905 00:13:38.212829 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/206b4d7f-9eb8-4597-a798-00b64629f99e-calico-apiserver-certs\") pod \"calico-apiserver-7458894c5c-kz9g6\" (UID: \"206b4d7f-9eb8-4597-a798-00b64629f99e\") " pod="calico-apiserver/calico-apiserver-7458894c5c-kz9g6" Sep 5 00:13:38.212953 kubelet[2919]: I0905 00:13:38.212924 2919 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnbtp\" (UniqueName: \"kubernetes.io/projected/aeccb52f-2274-4fc4-af1f-ba785b968b42-kube-api-access-pnbtp\") pod \"calico-apiserver-7458894c5c-tpn2s\" (UID: \"aeccb52f-2274-4fc4-af1f-ba785b968b42\") " pod="calico-apiserver/calico-apiserver-7458894c5c-tpn2s" Sep 5 00:13:38.314638 kubelet[2919]: E0905 00:13:38.314553 2919 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 5 00:13:38.314638 kubelet[2919]: E0905 00:13:38.314601 2919 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 5 00:13:38.315190 kubelet[2919]: E0905 00:13:38.314783 2919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aeccb52f-2274-4fc4-af1f-ba785b968b42-calico-apiserver-certs podName:aeccb52f-2274-4fc4-af1f-ba785b968b42 nodeName:}" failed. No retries permitted until 2024-09-05 00:13:38.814722392 +0000 UTC m=+79.255554662 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/aeccb52f-2274-4fc4-af1f-ba785b968b42-calico-apiserver-certs") pod "calico-apiserver-7458894c5c-tpn2s" (UID: "aeccb52f-2274-4fc4-af1f-ba785b968b42") : secret "calico-apiserver-certs" not found Sep 5 00:13:38.315190 kubelet[2919]: E0905 00:13:38.314855 2919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/206b4d7f-9eb8-4597-a798-00b64629f99e-calico-apiserver-certs podName:206b4d7f-9eb8-4597-a798-00b64629f99e nodeName:}" failed. No retries permitted until 2024-09-05 00:13:38.814816016 +0000 UTC m=+79.255648268 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/206b4d7f-9eb8-4597-a798-00b64629f99e-calico-apiserver-certs") pod "calico-apiserver-7458894c5c-kz9g6" (UID: "206b4d7f-9eb8-4597-a798-00b64629f99e") : secret "calico-apiserver-certs" not found Sep 5 00:13:39.052604 containerd[1534]: time="2024-09-05T00:13:39.052501677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7458894c5c-kz9g6,Uid:206b4d7f-9eb8-4597-a798-00b64629f99e,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:13:39.056050 containerd[1534]: time="2024-09-05T00:13:39.056018858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7458894c5c-tpn2s,Uid:aeccb52f-2274-4fc4-af1f-ba785b968b42,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:13:39.110214 systemd-networkd[1329]: cali7cc2eebc6c4: Link UP Sep 5 00:13:39.110565 systemd-networkd[1329]: cali7cc2eebc6c4: Gained carrier Sep 5 00:13:39.114725 containerd[1534]: 2024-09-05 00:13:39.075 [INFO][6403] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--kz9g6-eth0 calico-apiserver-7458894c5c- calico-apiserver 206b4d7f-9eb8-4597-a798-00b64629f99e 889 0 2024-09-05 00:13:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7458894c5c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4054.1.0-a-b942d58550 calico-apiserver-7458894c5c-kz9g6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7cc2eebc6c4 [] []}} ContainerID="e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6" Namespace="calico-apiserver" Pod="calico-apiserver-7458894c5c-kz9g6" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--kz9g6-" Sep 5 00:13:39.114725 containerd[1534]: 2024-09-05 00:13:39.075 [INFO][6403] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6" Namespace="calico-apiserver" Pod="calico-apiserver-7458894c5c-kz9g6" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--kz9g6-eth0" Sep 5 00:13:39.114725 containerd[1534]: 2024-09-05 00:13:39.089 [INFO][6449] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6" HandleID="k8s-pod-network.e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6" Workload="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--kz9g6-eth0" Sep 5 00:13:39.114725 containerd[1534]: 2024-09-05 00:13:39.094 [INFO][6449] ipam_plugin.go 270: Auto assigning IP ContainerID="e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6" HandleID="k8s-pod-network.e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6" Workload="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--kz9g6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000383ea0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4054.1.0-a-b942d58550", "pod":"calico-apiserver-7458894c5c-kz9g6", "timestamp":"2024-09-05 00:13:39.089338966 +0000 UTC"}, Hostname:"ci-4054.1.0-a-b942d58550", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:13:39.114725 containerd[1534]: 2024-09-05 00:13:39.094 [INFO][6449] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 00:13:39.114725 containerd[1534]: 2024-09-05 00:13:39.094 [INFO][6449] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 00:13:39.114725 containerd[1534]: 2024-09-05 00:13:39.094 [INFO][6449] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-b942d58550' Sep 5 00:13:39.114725 containerd[1534]: 2024-09-05 00:13:39.095 [INFO][6449] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6" host="ci-4054.1.0-a-b942d58550" Sep 5 00:13:39.114725 containerd[1534]: 2024-09-05 00:13:39.097 [INFO][6449] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-b942d58550" Sep 5 00:13:39.114725 containerd[1534]: 2024-09-05 00:13:39.100 [INFO][6449] ipam.go 489: Trying affinity for 192.168.19.0/26 host="ci-4054.1.0-a-b942d58550" Sep 5 00:13:39.114725 containerd[1534]: 2024-09-05 00:13:39.101 [INFO][6449] ipam.go 155: Attempting to load block cidr=192.168.19.0/26 host="ci-4054.1.0-a-b942d58550" Sep 5 00:13:39.114725 containerd[1534]: 2024-09-05 00:13:39.102 [INFO][6449] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.0/26 host="ci-4054.1.0-a-b942d58550" Sep 5 00:13:39.114725 containerd[1534]: 2024-09-05 00:13:39.103 [INFO][6449] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.0/26 handle="k8s-pod-network.e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6" host="ci-4054.1.0-a-b942d58550" Sep 5 00:13:39.114725 containerd[1534]: 2024-09-05 00:13:39.104 [INFO][6449] ipam.go 1685: Creating new handle: k8s-pod-network.e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6 Sep 5 00:13:39.114725 containerd[1534]: 2024-09-05 00:13:39.106 [INFO][6449] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.0/26 handle="k8s-pod-network.e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6" host="ci-4054.1.0-a-b942d58550" Sep 5 00:13:39.114725 containerd[1534]: 2024-09-05 00:13:39.108 [INFO][6449] ipam.go 1216: Successfully claimed IPs: [192.168.19.5/26] block=192.168.19.0/26 handle="k8s-pod-network.e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6" host="ci-4054.1.0-a-b942d58550" Sep 5 00:13:39.114725 containerd[1534]: 2024-09-05 00:13:39.108 [INFO][6449] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.5/26] handle="k8s-pod-network.e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6" host="ci-4054.1.0-a-b942d58550" Sep 5 00:13:39.114725 containerd[1534]: 2024-09-05 00:13:39.108 [INFO][6449] ipam_plugin.go 379: Released host-wide IPAM lock. 
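The [INFO][6449] trail above is Calico's block-affine IPAM in action: the node already holds an affinity for 192.168.19.0/26, the plugin loads that block while holding the host-wide lock, and claims the next free address (192.168.19.5 here; 192.168.19.2 and .3 are visible earlier in this excerpt, while .1 and .4 are assumed to be held by endpoints outside it). The snippet below is only a simplified model of that "first free address in the affine block" step using Python's ipaddress module; it is not Calico's allocator and ignores handles, reservations, and fallback to other blocks.

```python
import ipaddress

def next_free_address(block_cidr: str, allocated: set) -> str:
    """Toy model of claiming the next free IP from a node-affine Calico block."""
    block = ipaddress.ip_network(block_cidr)
    for host in block.hosts():                 # .1 .. .62 for a /26
        if str(host) not in allocated:
            return str(host)
    raise RuntimeError(f"block {block_cidr} is exhausted")

# .2 and .3 appear earlier in this journal; .1 and .4 are assumed to be in use
# by endpoints that are not part of this excerpt.
in_use = {"192.168.19.1", "192.168.19.2", "192.168.19.3", "192.168.19.4"}
print(next_free_address("192.168.19.0/26", in_use))    # -> 192.168.19.5
```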
Sep 5 00:13:39.114725 containerd[1534]: 2024-09-05 00:13:39.108 [INFO][6449] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.19.5/26] IPv6=[] ContainerID="e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6" HandleID="k8s-pod-network.e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6" Workload="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--kz9g6-eth0" Sep 5 00:13:39.115202 containerd[1534]: 2024-09-05 00:13:39.109 [INFO][6403] k8s.go 386: Populated endpoint ContainerID="e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6" Namespace="calico-apiserver" Pod="calico-apiserver-7458894c5c-kz9g6" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--kz9g6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--kz9g6-eth0", GenerateName:"calico-apiserver-7458894c5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"206b4d7f-9eb8-4597-a798-00b64629f99e", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 13, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7458894c5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"", Pod:"calico-apiserver-7458894c5c-kz9g6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cc2eebc6c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:13:39.115202 containerd[1534]: 2024-09-05 00:13:39.109 [INFO][6403] k8s.go 387: Calico CNI using IPs: [192.168.19.5/32] ContainerID="e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6" Namespace="calico-apiserver" Pod="calico-apiserver-7458894c5c-kz9g6" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--kz9g6-eth0" Sep 5 00:13:39.115202 containerd[1534]: 2024-09-05 00:13:39.109 [INFO][6403] dataplane_linux.go 68: Setting the host side veth name to cali7cc2eebc6c4 ContainerID="e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6" Namespace="calico-apiserver" Pod="calico-apiserver-7458894c5c-kz9g6" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--kz9g6-eth0" Sep 5 00:13:39.115202 containerd[1534]: 2024-09-05 00:13:39.110 [INFO][6403] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6" Namespace="calico-apiserver" Pod="calico-apiserver-7458894c5c-kz9g6" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--kz9g6-eth0" Sep 5 00:13:39.115202 containerd[1534]: 2024-09-05 00:13:39.111 [INFO][6403] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6" Namespace="calico-apiserver" Pod="calico-apiserver-7458894c5c-kz9g6" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--kz9g6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--kz9g6-eth0", GenerateName:"calico-apiserver-7458894c5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"206b4d7f-9eb8-4597-a798-00b64629f99e", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 13, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7458894c5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6", Pod:"calico-apiserver-7458894c5c-kz9g6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cc2eebc6c4", MAC:"a6:39:e3:f1:4d:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:13:39.115202 containerd[1534]: 2024-09-05 00:13:39.113 [INFO][6403] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6" Namespace="calico-apiserver" Pod="calico-apiserver-7458894c5c-kz9g6" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--kz9g6-eth0" Sep 5 00:13:39.124742 systemd-networkd[1329]: cali56a428a22b7: Link UP Sep 5 00:13:39.125020 containerd[1534]: time="2024-09-05T00:13:39.124823917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:13:39.125020 containerd[1534]: time="2024-09-05T00:13:39.124877711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:13:39.125020 containerd[1534]: time="2024-09-05T00:13:39.124888881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:13:39.125020 containerd[1534]: time="2024-09-05T00:13:39.124952953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:13:39.125113 systemd-networkd[1329]: cali56a428a22b7: Gained carrier Sep 5 00:13:39.129692 containerd[1534]: 2024-09-05 00:13:39.076 [INFO][6412] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--tpn2s-eth0 calico-apiserver-7458894c5c- calico-apiserver aeccb52f-2274-4fc4-af1f-ba785b968b42 891 0 2024-09-05 00:13:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7458894c5c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4054.1.0-a-b942d58550 calico-apiserver-7458894c5c-tpn2s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali56a428a22b7 [] []}} ContainerID="5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443" Namespace="calico-apiserver" Pod="calico-apiserver-7458894c5c-tpn2s" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--tpn2s-" Sep 5 00:13:39.129692 containerd[1534]: 2024-09-05 00:13:39.076 [INFO][6412] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443" Namespace="calico-apiserver" Pod="calico-apiserver-7458894c5c-tpn2s" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--tpn2s-eth0" Sep 5 00:13:39.129692 containerd[1534]: 2024-09-05 00:13:39.089 [INFO][6453] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443" HandleID="k8s-pod-network.5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443" Workload="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--tpn2s-eth0" Sep 5 00:13:39.129692 containerd[1534]: 2024-09-05 00:13:39.094 [INFO][6453] ipam_plugin.go 270: Auto assigning IP ContainerID="5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443" HandleID="k8s-pod-network.5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443" Workload="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--tpn2s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f4e70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4054.1.0-a-b942d58550", "pod":"calico-apiserver-7458894c5c-tpn2s", "timestamp":"2024-09-05 00:13:39.089331044 +0000 UTC"}, Hostname:"ci-4054.1.0-a-b942d58550", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:13:39.129692 containerd[1534]: 2024-09-05 00:13:39.094 [INFO][6453] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 00:13:39.129692 containerd[1534]: 2024-09-05 00:13:39.108 [INFO][6453] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 5 00:13:39.129692 containerd[1534]: 2024-09-05 00:13:39.108 [INFO][6453] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-b942d58550' Sep 5 00:13:39.129692 containerd[1534]: 2024-09-05 00:13:39.109 [INFO][6453] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443" host="ci-4054.1.0-a-b942d58550" Sep 5 00:13:39.129692 containerd[1534]: 2024-09-05 00:13:39.111 [INFO][6453] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-b942d58550" Sep 5 00:13:39.129692 containerd[1534]: 2024-09-05 00:13:39.114 [INFO][6453] ipam.go 489: Trying affinity for 192.168.19.0/26 host="ci-4054.1.0-a-b942d58550" Sep 5 00:13:39.129692 containerd[1534]: 2024-09-05 00:13:39.115 [INFO][6453] ipam.go 155: Attempting to load block cidr=192.168.19.0/26 host="ci-4054.1.0-a-b942d58550" Sep 5 00:13:39.129692 containerd[1534]: 2024-09-05 00:13:39.117 [INFO][6453] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.0/26 host="ci-4054.1.0-a-b942d58550" Sep 5 00:13:39.129692 containerd[1534]: 2024-09-05 00:13:39.117 [INFO][6453] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.0/26 handle="k8s-pod-network.5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443" host="ci-4054.1.0-a-b942d58550" Sep 5 00:13:39.129692 containerd[1534]: 2024-09-05 00:13:39.117 [INFO][6453] ipam.go 1685: Creating new handle: k8s-pod-network.5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443 Sep 5 00:13:39.129692 containerd[1534]: 2024-09-05 00:13:39.119 [INFO][6453] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.0/26 handle="k8s-pod-network.5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443" host="ci-4054.1.0-a-b942d58550" Sep 5 00:13:39.129692 containerd[1534]: 2024-09-05 00:13:39.122 [INFO][6453] ipam.go 1216: Successfully claimed IPs: [192.168.19.6/26] block=192.168.19.0/26 handle="k8s-pod-network.5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443" host="ci-4054.1.0-a-b942d58550" Sep 5 00:13:39.129692 containerd[1534]: 2024-09-05 00:13:39.122 [INFO][6453] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.6/26] handle="k8s-pod-network.5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443" host="ci-4054.1.0-a-b942d58550" Sep 5 00:13:39.129692 containerd[1534]: 2024-09-05 00:13:39.122 [INFO][6453] ipam_plugin.go 379: Released host-wide IPAM lock. 
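Note how the two concurrent CNI ADDs are serialized: request [6453] asks for the host-wide IPAM lock at 00:13:39.094 and only acquires it at .108, the moment [6449] releases it after claiming .5, and then [6453] claims .6. The point of the lock is that the read-modify-write on the allocation block must be atomic per host, so two pods cannot be handed the same address. A bare-bones illustration of that pattern with an ordinary mutex (nothing Calico-specific):

```python
import threading

ipam_lock = threading.Lock()      # stands in for the host-wide IPAM lock in the log
allocated = set()

def claim(candidates):
    # The read-modify-write on shared allocation state happens under the lock;
    # without it, two concurrent CNI ADDs could be handed the same address.
    with ipam_lock:
        for ip in candidates:
            if ip not in allocated:
                allocated.add(ip)
                return ip
    raise RuntimeError("block exhausted")

block = [f"192.168.19.{i}" for i in range(1, 63)]
results = []
threads = [threading.Thread(target=lambda: results.append(claim(block))) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))   # two distinct addresses, e.g. ['192.168.19.1', '192.168.19.2']
```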
Sep 5 00:13:39.129692 containerd[1534]: 2024-09-05 00:13:39.122 [INFO][6453] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.19.6/26] IPv6=[] ContainerID="5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443" HandleID="k8s-pod-network.5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443" Workload="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--tpn2s-eth0" Sep 5 00:13:39.130145 containerd[1534]: 2024-09-05 00:13:39.123 [INFO][6412] k8s.go 386: Populated endpoint ContainerID="5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443" Namespace="calico-apiserver" Pod="calico-apiserver-7458894c5c-tpn2s" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--tpn2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--tpn2s-eth0", GenerateName:"calico-apiserver-7458894c5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"aeccb52f-2274-4fc4-af1f-ba785b968b42", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 13, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7458894c5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"", Pod:"calico-apiserver-7458894c5c-tpn2s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56a428a22b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:13:39.130145 containerd[1534]: 2024-09-05 00:13:39.123 [INFO][6412] k8s.go 387: Calico CNI using IPs: [192.168.19.6/32] ContainerID="5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443" Namespace="calico-apiserver" Pod="calico-apiserver-7458894c5c-tpn2s" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--tpn2s-eth0" Sep 5 00:13:39.130145 containerd[1534]: 2024-09-05 00:13:39.123 [INFO][6412] dataplane_linux.go 68: Setting the host side veth name to cali56a428a22b7 ContainerID="5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443" Namespace="calico-apiserver" Pod="calico-apiserver-7458894c5c-tpn2s" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--tpn2s-eth0" Sep 5 00:13:39.130145 containerd[1534]: 2024-09-05 00:13:39.124 [INFO][6412] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443" Namespace="calico-apiserver" Pod="calico-apiserver-7458894c5c-tpn2s" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--tpn2s-eth0" Sep 5 00:13:39.130145 containerd[1534]: 2024-09-05 00:13:39.125 [INFO][6412] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443" Namespace="calico-apiserver" Pod="calico-apiserver-7458894c5c-tpn2s" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--tpn2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--tpn2s-eth0", GenerateName:"calico-apiserver-7458894c5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"aeccb52f-2274-4fc4-af1f-ba785b968b42", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 0, 13, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7458894c5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-b942d58550", ContainerID:"5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443", Pod:"calico-apiserver-7458894c5c-tpn2s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56a428a22b7", MAC:"06:ea:2a:6b:50:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 00:13:39.130145 containerd[1534]: 2024-09-05 00:13:39.128 [INFO][6412] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443" Namespace="calico-apiserver" Pod="calico-apiserver-7458894c5c-tpn2s" WorkloadEndpoint="ci--4054.1.0--a--b942d58550-k8s-calico--apiserver--7458894c5c--tpn2s-eth0" Sep 5 00:13:39.139099 containerd[1534]: time="2024-09-05T00:13:39.139047092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:13:39.139099 containerd[1534]: time="2024-09-05T00:13:39.139084083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:13:39.139099 containerd[1534]: time="2024-09-05T00:13:39.139094991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:13:39.139226 containerd[1534]: time="2024-09-05T00:13:39.139150709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:13:39.143408 systemd[1]: Started cri-containerd-e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6.scope - libcontainer container e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6. Sep 5 00:13:39.145448 systemd[1]: Started cri-containerd-5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443.scope - libcontainer container 5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443. 
Sep 5 00:13:39.166804 containerd[1534]: time="2024-09-05T00:13:39.166775281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7458894c5c-kz9g6,Uid:206b4d7f-9eb8-4597-a798-00b64629f99e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6\"" Sep 5 00:13:39.167520 containerd[1534]: time="2024-09-05T00:13:39.167503479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 5 00:13:39.169708 containerd[1534]: time="2024-09-05T00:13:39.169693556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7458894c5c-tpn2s,Uid:aeccb52f-2274-4fc4-af1f-ba785b968b42,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443\"" Sep 5 00:13:40.855436 systemd-networkd[1329]: cali7cc2eebc6c4: Gained IPv6LL Sep 5 00:13:40.919366 systemd-networkd[1329]: cali56a428a22b7: Gained IPv6LL Sep 5 00:13:41.636195 containerd[1534]: time="2024-09-05T00:13:41.636170367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:41.636412 containerd[1534]: time="2024-09-05T00:13:41.636366355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Sep 5 00:13:41.636760 containerd[1534]: time="2024-09-05T00:13:41.636746513Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:41.637876 containerd[1534]: time="2024-09-05T00:13:41.637864819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:41.638416 containerd[1534]: time="2024-09-05T00:13:41.638402108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.470879539s" Sep 5 00:13:41.638445 containerd[1534]: time="2024-09-05T00:13:41.638421567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 5 00:13:41.638758 containerd[1534]: time="2024-09-05T00:13:41.638743793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 5 00:13:41.639359 containerd[1534]: time="2024-09-05T00:13:41.639346332Z" level=info msg="CreateContainer within sandbox \"e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:13:41.645462 containerd[1534]: time="2024-09-05T00:13:41.645445476Z" level=info msg="CreateContainer within sandbox \"e2b847ae87c2f61f5a9f0b66a025ddaf76e961b891d22a1a1df5170c441236a6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d8cb52e7aa8197ee16826e116f5750b2ee32bc1c007fe587635543d7debd6ee7\"" Sep 5 00:13:41.645703 containerd[1534]: time="2024-09-05T00:13:41.645689381Z" level=info msg="StartContainer for 
\"d8cb52e7aa8197ee16826e116f5750b2ee32bc1c007fe587635543d7debd6ee7\"" Sep 5 00:13:41.665554 systemd[1]: Started cri-containerd-d8cb52e7aa8197ee16826e116f5750b2ee32bc1c007fe587635543d7debd6ee7.scope - libcontainer container d8cb52e7aa8197ee16826e116f5750b2ee32bc1c007fe587635543d7debd6ee7. Sep 5 00:13:41.689745 containerd[1534]: time="2024-09-05T00:13:41.689719494Z" level=info msg="StartContainer for \"d8cb52e7aa8197ee16826e116f5750b2ee32bc1c007fe587635543d7debd6ee7\" returns successfully" Sep 5 00:13:41.849672 kubelet[2919]: I0905 00:13:41.849622 2919 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7458894c5c-kz9g6" podStartSLOduration=1.3782912 podStartE2EDuration="3.849587114s" podCreationTimestamp="2024-09-05 00:13:38 +0000 UTC" firstStartedPulling="2024-09-05 00:13:39.167338214 +0000 UTC m=+79.608170407" lastFinishedPulling="2024-09-05 00:13:41.63863413 +0000 UTC m=+82.079466321" observedRunningTime="2024-09-05 00:13:41.849104971 +0000 UTC m=+82.289937163" watchObservedRunningTime="2024-09-05 00:13:41.849587114 +0000 UTC m=+82.290419304" Sep 5 00:13:42.012516 containerd[1534]: time="2024-09-05T00:13:42.012451865Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:42.012673 containerd[1534]: time="2024-09-05T00:13:42.012648205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Sep 5 00:13:42.013826 containerd[1534]: time="2024-09-05T00:13:42.013809371Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 375.044803ms" Sep 5 00:13:42.013851 containerd[1534]: time="2024-09-05T00:13:42.013828064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 5 00:13:42.014680 containerd[1534]: time="2024-09-05T00:13:42.014666014Z" level=info msg="CreateContainer within sandbox \"5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:13:42.019105 containerd[1534]: time="2024-09-05T00:13:42.019090054Z" level=info msg="CreateContainer within sandbox \"5d963f148a35f767a267aabaa43a0f3eb31d15d1a0bdb999e1e181ce95c7d443\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fe348b5344d5169225f26e2c70b3c6687e6b5691abb417fd3a0d745bd22004f0\"" Sep 5 00:13:42.019313 containerd[1534]: time="2024-09-05T00:13:42.019302053Z" level=info msg="StartContainer for \"fe348b5344d5169225f26e2c70b3c6687e6b5691abb417fd3a0d745bd22004f0\"" Sep 5 00:13:42.047514 systemd[1]: Started cri-containerd-fe348b5344d5169225f26e2c70b3c6687e6b5691abb417fd3a0d745bd22004f0.scope - libcontainer container fe348b5344d5169225f26e2c70b3c6687e6b5691abb417fd3a0d745bd22004f0. 
Sep 5 00:13:42.076499 containerd[1534]: time="2024-09-05T00:13:42.076468018Z" level=info msg="StartContainer for \"fe348b5344d5169225f26e2c70b3c6687e6b5691abb417fd3a0d745bd22004f0\" returns successfully" Sep 5 00:13:42.869643 kubelet[2919]: I0905 00:13:42.869567 2919 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7458894c5c-tpn2s" podStartSLOduration=2.025651185 podStartE2EDuration="4.869446305s" podCreationTimestamp="2024-09-05 00:13:38 +0000 UTC" firstStartedPulling="2024-09-05 00:13:39.170171919 +0000 UTC m=+79.611004109" lastFinishedPulling="2024-09-05 00:13:42.013967037 +0000 UTC m=+82.454799229" observedRunningTime="2024-09-05 00:13:42.867257622 +0000 UTC m=+83.308089900" watchObservedRunningTime="2024-09-05 00:13:42.869446305 +0000 UTC m=+83.310278549" Sep 5 00:14:35.234424 update_engine[1521]: I0905 00:14:35.234333 1521 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 5 00:14:35.234424 update_engine[1521]: I0905 00:14:35.234431 1521 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 5 00:14:35.235970 update_engine[1521]: I0905 00:14:35.234913 1521 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 5 00:14:35.236428 update_engine[1521]: I0905 00:14:35.236377 1521 omaha_request_params.cc:62] Current group set to beta Sep 5 00:14:35.236743 update_engine[1521]: I0905 00:14:35.236702 1521 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 5 00:14:35.236743 update_engine[1521]: I0905 00:14:35.236725 1521 update_attempter.cc:643] Scheduling an action processor start. Sep 5 00:14:35.237082 update_engine[1521]: I0905 00:14:35.236764 1521 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 5 00:14:35.237082 update_engine[1521]: I0905 00:14:35.236871 1521 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 5 00:14:35.237448 update_engine[1521]: I0905 00:14:35.237083 1521 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 5 00:14:35.237448 update_engine[1521]: I0905 00:14:35.237108 1521 omaha_request_action.cc:272] Request: Sep 5 00:14:35.237448 update_engine[1521]: Sep 5 00:14:35.237448 update_engine[1521]: Sep 5 00:14:35.237448 update_engine[1521]: Sep 5 00:14:35.237448 update_engine[1521]: Sep 5 00:14:35.237448 update_engine[1521]: Sep 5 00:14:35.237448 update_engine[1521]: Sep 5 00:14:35.237448 update_engine[1521]: Sep 5 00:14:35.237448 update_engine[1521]: Sep 5 00:14:35.237448 update_engine[1521]: I0905 00:14:35.237125 1521 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 00:14:35.239134 locksmithd[1560]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 5 00:14:35.239842 update_engine[1521]: I0905 00:14:35.239832 1521 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 00:14:35.240012 update_engine[1521]: I0905 00:14:35.240002 1521 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 5 00:14:35.240478 update_engine[1521]: E0905 00:14:35.240468 1521 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 00:14:35.240514 update_engine[1521]: I0905 00:14:35.240501 1521 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 5 00:14:36.105273 systemd[1]: Started sshd@11-147.28.180.221:22-123.58.218.88:37122.service - OpenSSH per-connection server daemon (123.58.218.88:37122). Sep 5 00:14:44.591116 sshd[6880]: Connection closed by 123.58.218.88 port 37122 [preauth] Sep 5 00:14:44.591864 systemd[1]: sshd@11-147.28.180.221:22-123.58.218.88:37122.service: Deactivated successfully. Sep 5 00:14:45.190646 update_engine[1521]: I0905 00:14:45.190552 1521 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 00:14:45.191837 update_engine[1521]: I0905 00:14:45.191320 1521 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 00:14:45.192027 update_engine[1521]: I0905 00:14:45.191953 1521 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 00:14:45.192565 update_engine[1521]: E0905 00:14:45.192514 1521 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 00:14:45.192750 update_engine[1521]: I0905 00:14:45.192649 1521 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 5 00:14:55.183098 update_engine[1521]: I0905 00:14:55.183000 1521 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 00:14:55.184099 update_engine[1521]: I0905 00:14:55.183579 1521 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 00:14:55.184099 update_engine[1521]: I0905 00:14:55.184082 1521 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 00:14:55.185034 update_engine[1521]: E0905 00:14:55.184949 1521 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 00:14:55.185209 update_engine[1521]: I0905 00:14:55.185067 1521 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 5 00:15:05.191375 update_engine[1521]: I0905 00:15:05.191228 1521 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 00:15:05.192367 update_engine[1521]: I0905 00:15:05.191788 1521 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 00:15:05.192367 update_engine[1521]: I0905 00:15:05.192342 1521 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 00:15:05.193005 update_engine[1521]: E0905 00:15:05.192906 1521 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 00:15:05.193205 update_engine[1521]: I0905 00:15:05.193032 1521 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 5 00:15:05.193205 update_engine[1521]: I0905 00:15:05.193058 1521 omaha_request_action.cc:617] Omaha request response: Sep 5 00:15:05.193451 update_engine[1521]: E0905 00:15:05.193223 1521 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 5 00:15:05.193451 update_engine[1521]: I0905 00:15:05.193288 1521 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 5 00:15:05.193451 update_engine[1521]: I0905 00:15:05.193301 1521 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 5 00:15:05.193451 update_engine[1521]: I0905 00:15:05.193310 1521 update_attempter.cc:306] Processing Done. 
Sep 5 00:15:05.193451 update_engine[1521]: E0905 00:15:05.193343 1521 update_attempter.cc:619] Update failed. Sep 5 00:15:05.193451 update_engine[1521]: I0905 00:15:05.193353 1521 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 5 00:15:05.193451 update_engine[1521]: I0905 00:15:05.193361 1521 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 5 00:15:05.193451 update_engine[1521]: I0905 00:15:05.193372 1521 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 5 00:15:05.194125 update_engine[1521]: I0905 00:15:05.193517 1521 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 5 00:15:05.194125 update_engine[1521]: I0905 00:15:05.193563 1521 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 5 00:15:05.194125 update_engine[1521]: I0905 00:15:05.193572 1521 omaha_request_action.cc:272] Request: Sep 5 00:15:05.194125 update_engine[1521]: Sep 5 00:15:05.194125 update_engine[1521]: Sep 5 00:15:05.194125 update_engine[1521]: Sep 5 00:15:05.194125 update_engine[1521]: Sep 5 00:15:05.194125 update_engine[1521]: Sep 5 00:15:05.194125 update_engine[1521]: Sep 5 00:15:05.194125 update_engine[1521]: I0905 00:15:05.193583 1521 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 00:15:05.194125 update_engine[1521]: I0905 00:15:05.193981 1521 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 00:15:05.195013 update_engine[1521]: I0905 00:15:05.194463 1521 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 00:15:05.195122 locksmithd[1560]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 5 00:15:05.195764 update_engine[1521]: E0905 00:15:05.195143 1521 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 00:15:05.195764 update_engine[1521]: I0905 00:15:05.195290 1521 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 5 00:15:05.195764 update_engine[1521]: I0905 00:15:05.195318 1521 omaha_request_action.cc:617] Omaha request response: Sep 5 00:15:05.195764 update_engine[1521]: I0905 00:15:05.195333 1521 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 5 00:15:05.195764 update_engine[1521]: I0905 00:15:05.195339 1521 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 5 00:15:05.195764 update_engine[1521]: I0905 00:15:05.195348 1521 update_attempter.cc:306] Processing Done. Sep 5 00:15:05.195764 update_engine[1521]: I0905 00:15:05.195358 1521 update_attempter.cc:310] Error event sent. Sep 5 00:15:05.195764 update_engine[1521]: I0905 00:15:05.195374 1521 update_check_scheduler.cc:74] Next update check in 43m16s Sep 5 00:15:05.196477 locksmithd[1560]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 5 00:16:00.200600 systemd[1]: Started sshd@12-147.28.180.221:22-123.58.218.88:37162.service - OpenSSH per-connection server daemon (123.58.218.88:37162). 
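The update_engine block above is Flatcar's Omaha client failing by design: the group is set to beta but the update endpoint is the literal string "disabled", so every fetch dies with "Could not resolve host: disabled"; after retries 1 through 3 the action processor gives up with error 2000 (kActionCodeOmahaErrorInHTTPResponse), posts an error event to the same unresolvable host, and schedules the next check in 43m16s. A small summarizer tied to the exact message strings above (the regexes and the journal.txt file name are assumptions about this specific output format):

```python
import re

RETRY_RE = re.compile(r"No HTTP response, retry (\d+)")
ERROR_RE = re.compile(r"Converting error code (\d+) to (\w+)")
NEXT_RE  = re.compile(r"Next update check in (\S+)")
HOST_RE  = re.compile(r"Posting an Omaha request to (\S+)")

def summarize_update_engine(journal_text: str) -> None:
    hosts   = set(HOST_RE.findall(journal_text))
    retries = [int(n) for n in RETRY_RE.findall(journal_text)]
    errors  = ERROR_RE.findall(journal_text)
    nxt     = NEXT_RE.findall(journal_text)
    print(f"Omaha endpoint(s): {', '.join(sorted(hosts)) or 'n/a'}")
    print(f"HTTP retries seen: {max(retries) if retries else 0}")
    for code, name in errors:
        print(f"final error: {code} ({name})")
    if nxt:
        print(f"next check scheduled in {nxt[-1]}")

if __name__ == "__main__":
    with open("journal.txt") as fh:        # hypothetical dump of the excerpt above
        summarize_update_engine(fh.read())
```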
Sep 5 00:16:01.503304 sshd[7107]: Invalid user zoom from 123.58.218.88 port 37162 Sep 5 00:16:01.659949 sshd[7107]: Received disconnect from 123.58.218.88 port 37162:11: Bye Bye [preauth] Sep 5 00:16:01.659949 sshd[7107]: Disconnected from invalid user zoom 123.58.218.88 port 37162 [preauth] Sep 5 00:16:01.661511 systemd[1]: sshd@12-147.28.180.221:22-123.58.218.88:37162.service: Deactivated successfully. Sep 5 00:17:24.259464 systemd[1]: Started sshd@13-147.28.180.221:22-123.58.218.88:37206.service - OpenSSH per-connection server daemon (123.58.218.88:37206). Sep 5 00:17:25.082794 sshd[7346]: Invalid user itsys from 123.58.218.88 port 37206 Sep 5 00:17:25.238088 sshd[7346]: Received disconnect from 123.58.218.88 port 37206:11: Bye Bye [preauth] Sep 5 00:17:25.238088 sshd[7346]: Disconnected from invalid user itsys 123.58.218.88 port 37206 [preauth] Sep 5 00:17:25.241402 systemd[1]: sshd@13-147.28.180.221:22-123.58.218.88:37206.service: Deactivated successfully. Sep 5 00:18:46.366031 systemd[1]: Started sshd@14-147.28.180.221:22-123.58.218.88:37246.service - OpenSSH per-connection server daemon (123.58.218.88:37246). Sep 5 00:18:47.232353 sshd[7602]: Invalid user vscode from 123.58.218.88 port 37246 Sep 5 00:18:47.387304 sshd[7602]: Received disconnect from 123.58.218.88 port 37246:11: Bye Bye [preauth] Sep 5 00:18:47.387304 sshd[7602]: Disconnected from invalid user vscode 123.58.218.88 port 37246 [preauth] Sep 5 00:18:47.390572 systemd[1]: sshd@14-147.28.180.221:22-123.58.218.88:37246.service: Deactivated successfully. Sep 5 00:20:09.040578 systemd[1]: Started sshd@15-147.28.180.221:22-123.58.218.88:37288.service - OpenSSH per-connection server daemon (123.58.218.88:37288). Sep 5 00:20:40.656444 systemd[1]: Started sshd@16-147.28.180.221:22-85.209.11.27:63566.service - OpenSSH per-connection server daemon (85.209.11.27:63566). Sep 5 00:20:44.011744 sshd[7881]: Invalid user admin from 85.209.11.27 port 63566 Sep 5 00:20:44.208150 sshd[7881]: Connection closed by invalid user admin 85.209.11.27 port 63566 [preauth] Sep 5 00:20:44.211521 systemd[1]: sshd@16-147.28.180.221:22-85.209.11.27:63566.service: Deactivated successfully. Sep 5 00:21:31.274109 systemd[1]: Started sshd@17-147.28.180.221:22-123.58.218.88:37332.service - OpenSSH per-connection server daemon (123.58.218.88:37332). Sep 5 00:22:09.047582 systemd[1]: sshd@15-147.28.180.221:22-123.58.218.88:37288.service: Deactivated successfully. Sep 5 00:22:55.611503 systemd[1]: Started sshd@18-147.28.180.221:22-123.58.218.88:37372.service - OpenSSH per-connection server daemon (123.58.218.88:37372). Sep 5 00:22:56.472048 sshd[8279]: Invalid user admin from 123.58.218.88 port 37372 Sep 5 00:22:56.631552 sshd[8279]: Received disconnect from 123.58.218.88 port 37372:11: Bye Bye [preauth] Sep 5 00:22:56.631552 sshd[8279]: Disconnected from invalid user admin 123.58.218.88 port 37372 [preauth] Sep 5 00:22:56.634785 systemd[1]: sshd@18-147.28.180.221:22-123.58.218.88:37372.service: Deactivated successfully. Sep 5 00:23:31.283193 systemd[1]: sshd@17-147.28.180.221:22-123.58.218.88:37332.service: Deactivated successfully. Sep 5 00:24:18.575904 systemd[1]: Started sshd@19-147.28.180.221:22-123.58.218.88:37416.service - OpenSSH per-connection server daemon (123.58.218.88:37416). 
Sep 5 00:24:19.946505 sshd[8516]: Invalid user shigeru from 123.58.218.88 port 37416 Sep 5 00:24:20.112232 sshd[8516]: Received disconnect from 123.58.218.88 port 37416:11: Bye Bye [preauth] Sep 5 00:24:20.112232 sshd[8516]: Disconnected from invalid user shigeru 123.58.218.88 port 37416 [preauth] Sep 5 00:24:20.115562 systemd[1]: sshd@19-147.28.180.221:22-123.58.218.88:37416.service: Deactivated successfully. Sep 5 00:25:39.966769 systemd[1]: Started sshd@20-147.28.180.221:22-123.58.218.88:37456.service - OpenSSH per-connection server daemon (123.58.218.88:37456). Sep 5 00:25:41.919178 sshd[8716]: Received disconnect from 123.58.218.88 port 37456:11: Bye Bye [preauth] Sep 5 00:25:41.919178 sshd[8716]: Disconnected from authenticating user root 123.58.218.88 port 37456 [preauth] Sep 5 00:25:41.922337 systemd[1]: sshd@20-147.28.180.221:22-123.58.218.88:37456.service: Deactivated successfully. Sep 5 00:26:36.034374 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Sep 5 00:26:36.046126 systemd-tmpfiles[8870]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 00:26:36.046407 systemd-tmpfiles[8870]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 00:26:36.046883 systemd-tmpfiles[8870]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 00:26:36.047032 systemd-tmpfiles[8870]: ACLs are not supported, ignoring. Sep 5 00:26:36.047065 systemd-tmpfiles[8870]: ACLs are not supported, ignoring. Sep 5 00:26:36.053452 systemd-tmpfiles[8870]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:26:36.053456 systemd-tmpfiles[8870]: Skipping /boot Sep 5 00:26:36.057180 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Sep 5 00:26:36.057334 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Sep 5 00:27:01.495529 systemd[1]: Started sshd@21-147.28.180.221:22-123.58.218.88:37498.service - OpenSSH per-connection server daemon (123.58.218.88:37498). Sep 5 00:27:03.722438 sshd[8941]: Invalid user amit from 123.58.218.88 port 37498 Sep 5 00:27:03.882827 sshd[8941]: Received disconnect from 123.58.218.88 port 37498:11: Bye Bye [preauth] Sep 5 00:27:03.882827 sshd[8941]: Disconnected from invalid user amit 123.58.218.88 port 37498 [preauth] Sep 5 00:27:03.886091 systemd[1]: sshd@21-147.28.180.221:22-123.58.218.88:37498.service: Deactivated successfully. Sep 5 00:28:22.920219 systemd[1]: Started sshd@22-147.28.180.221:22-123.58.218.88:37542.service - OpenSSH per-connection server daemon (123.58.218.88:37542). Sep 5 00:29:45.648636 systemd[1]: Started sshd@23-147.28.180.221:22-123.58.218.88:37588.service - OpenSSH per-connection server daemon (123.58.218.88:37588). Sep 5 00:29:46.943969 sshd[9427]: Invalid user becky from 123.58.218.88 port 37588 Sep 5 00:29:47.105838 sshd[9427]: Received disconnect from 123.58.218.88 port 37588:11: Bye Bye [preauth] Sep 5 00:29:47.105838 sshd[9427]: Disconnected from invalid user becky 123.58.218.88 port 37588 [preauth] Sep 5 00:29:47.109700 systemd[1]: sshd@23-147.28.180.221:22-123.58.218.88:37588.service: Deactivated successfully. Sep 5 00:30:22.929371 systemd[1]: sshd@22-147.28.180.221:22-123.58.218.88:37542.service: Deactivated successfully. Sep 5 00:31:08.086558 systemd[1]: Started sshd@24-147.28.180.221:22-123.58.218.88:37632.service - OpenSSH per-connection server daemon (123.58.218.88:37632). 
Sep 5 00:31:09.954573 sshd[9643]: Received disconnect from 123.58.218.88 port 37632:11: Bye Bye [preauth] Sep 5 00:31:09.954573 sshd[9643]: Disconnected from authenticating user root 123.58.218.88 port 37632 [preauth] Sep 5 00:31:09.957863 systemd[1]: sshd@24-147.28.180.221:22-123.58.218.88:37632.service: Deactivated successfully. Sep 5 00:32:31.178526 systemd[1]: Started sshd@25-147.28.180.221:22-123.58.218.88:37676.service - OpenSSH per-connection server daemon (123.58.218.88:37676). Sep 5 00:32:32.044110 sshd[9838]: Invalid user zwj from 123.58.218.88 port 37676 Sep 5 00:32:32.701419 sshd[9838]: Received disconnect from 123.58.218.88 port 37676:11: Bye Bye [preauth] Sep 5 00:32:32.701419 sshd[9838]: Disconnected from invalid user zwj 123.58.218.88 port 37676 [preauth] Sep 5 00:32:32.704727 systemd[1]: sshd@25-147.28.180.221:22-123.58.218.88:37676.service: Deactivated successfully. Sep 5 00:32:36.316036 systemd[1]: Started sshd@26-147.28.180.221:22-137.74.239.155:34067.service - OpenSSH per-connection server daemon (137.74.239.155:34067). Sep 5 00:32:52.110213 sshd[9877]: Connection closed by 137.74.239.155 port 34067 Sep 5 00:32:52.113403 systemd[1]: sshd@26-147.28.180.221:22-137.74.239.155:34067.service: Deactivated successfully. Sep 5 00:33:54.131557 systemd[1]: Started sshd@27-147.28.180.221:22-123.58.218.88:37720.service - OpenSSH per-connection server daemon (123.58.218.88:37720). Sep 5 00:33:55.025113 sshd[10100]: Invalid user ubuntu from 123.58.218.88 port 37720 Sep 5 00:33:55.187912 sshd[10100]: Received disconnect from 123.58.218.88 port 37720:11: Bye Bye [preauth] Sep 5 00:33:55.187912 sshd[10100]: Disconnected from invalid user ubuntu 123.58.218.88 port 37720 [preauth] Sep 5 00:33:55.191145 systemd[1]: sshd@27-147.28.180.221:22-123.58.218.88:37720.service: Deactivated successfully. Sep 5 00:35:16.791442 systemd[1]: Started sshd@28-147.28.180.221:22-123.58.218.88:37762.service - OpenSSH per-connection server daemon (123.58.218.88:37762). Sep 5 00:35:17.654956 sshd[10334]: Invalid user zhanna from 123.58.218.88 port 37762 Sep 5 00:35:17.814899 sshd[10334]: Received disconnect from 123.58.218.88 port 37762:11: Bye Bye [preauth] Sep 5 00:35:17.814899 sshd[10334]: Disconnected from invalid user zhanna 123.58.218.88 port 37762 [preauth] Sep 5 00:35:17.818144 systemd[1]: sshd@28-147.28.180.221:22-123.58.218.88:37762.service: Deactivated successfully. Sep 5 00:36:39.098516 systemd[1]: Started sshd@29-147.28.180.221:22-123.58.218.88:37804.service - OpenSSH per-connection server daemon (123.58.218.88:37804). Sep 5 00:36:40.410156 sshd[10528]: Invalid user anshul from 123.58.218.88 port 37804 Sep 5 00:36:40.564608 sshd[10528]: Received disconnect from 123.58.218.88 port 37804:11: Bye Bye [preauth] Sep 5 00:36:40.564608 sshd[10528]: Disconnected from invalid user anshul 123.58.218.88 port 37804 [preauth] Sep 5 00:36:40.567931 systemd[1]: sshd@29-147.28.180.221:22-123.58.218.88:37804.service: Deactivated successfully. Sep 5 00:38:01.780119 systemd[1]: Started sshd@30-147.28.180.221:22-123.58.218.88:37844.service - OpenSSH per-connection server daemon (123.58.218.88:37844). Sep 5 00:38:02.878325 sshd[10753]: Received disconnect from 123.58.218.88 port 37844:11: Bye Bye [preauth] Sep 5 00:38:02.878325 sshd[10753]: Disconnected from authenticating user root 123.58.218.88 port 37844 [preauth] Sep 5 00:38:02.879944 systemd[1]: sshd@30-147.28.180.221:22-123.58.218.88:37844.service: Deactivated successfully. 
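Nearly everything from 00:16 to 00:39 above is the same scanner pattern from 123.58.218.88 (with brief visits from 85.209.11.27 and 137.74.239.155): a per-connection sshd unit starts, the client offers an invalid user name or tries root, sshd logs the pre-auth disconnect, and systemd deactivates the unit. Below is a small tallying sketch over journal text in this format; it is an offline analysis aid with made-up script and file names, and it only matches the two sshd message forms visible here ("Invalid user NAME from IP port PORT" and the root pre-auth disconnect).

#!/usr/bin/env python3
"""Tally sshd pre-auth failures per source IP from plain journal text (sketch).

Usage: python3 sshd_preauth_tally.py journal.txt
Assumption: the input uses the sshd message formats seen in the log above.
"""
import re
import sys
from collections import Counter, defaultdict

INVALID_USER = re.compile(r"Invalid user (\S+) from (\d+\.\d+\.\d+\.\d+) port \d+")
ROOT_PREAUTH = re.compile(
    r"Disconnected from authenticating user root (\d+\.\d+\.\d+\.\d+) port \d+ \[preauth\]")

def tally(text):
    per_ip = Counter()          # source IP -> number of pre-auth failures
    users = defaultdict(set)    # source IP -> user names attempted
    for user, ip in INVALID_USER.findall(text):
        per_ip[ip] += 1
        users[ip].add(user)
    for ip in ROOT_PREAUTH.findall(text):
        per_ip[ip] += 1
        users[ip].add("root")
    return per_ip, users

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8", errors="replace") as fh:
        text = fh.read()
    per_ip, users = tally(text)
    for ip, count in per_ip.most_common():
        print(f"{ip}: {count} pre-auth failures, users tried: {', '.join(sorted(users[ip]))}")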
Sep 5 00:39:23.958587 systemd[1]: Started sshd@31-147.28.180.221:22-123.58.218.88:37886.service - OpenSSH per-connection server daemon (123.58.218.88:37886). Sep 5 00:39:24.786914 sshd[10988]: Invalid user sophia from 123.58.218.88 port 37886 Sep 5 00:39:24.947249 sshd[10988]: Received disconnect from 123.58.218.88 port 37886:11: Bye Bye [preauth] Sep 5 00:39:24.947249 sshd[10988]: Disconnected from invalid user sophia 123.58.218.88 port 37886 [preauth] Sep 5 00:39:24.950539 systemd[1]: sshd@31-147.28.180.221:22-123.58.218.88:37886.service: Deactivated successfully. Sep 5 00:42:18.577660 systemd[1]: Started sshd@32-147.28.180.221:22-85.209.11.27:46598.service - OpenSSH per-connection server daemon (85.209.11.27:46598). Sep 5 00:42:23.548927 sshd[11462]: Connection closed by authenticating user root 85.209.11.27 port 46598 [preauth] Sep 5 00:42:23.552421 systemd[1]: sshd@32-147.28.180.221:22-85.209.11.27:46598.service: Deactivated successfully. Sep 5 00:55:48.875598 systemd[1]: Started sshd@33-147.28.180.221:22-139.178.89.65:46752.service - OpenSSH per-connection server daemon (139.178.89.65:46752). Sep 5 00:55:48.902603 sshd[13648]: Accepted publickey for core from 139.178.89.65 port 46752 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:55:48.903336 sshd[13648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:55:48.906069 systemd-logind[1516]: New session 12 of user core. Sep 5 00:55:48.925743 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 00:55:49.068437 sshd[13648]: pam_unix(sshd:session): session closed for user core Sep 5 00:55:49.070028 systemd[1]: sshd@33-147.28.180.221:22-139.178.89.65:46752.service: Deactivated successfully. Sep 5 00:55:49.070969 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 00:55:49.071698 systemd-logind[1516]: Session 12 logged out. Waiting for processes to exit. Sep 5 00:55:49.072255 systemd-logind[1516]: Removed session 12. Sep 5 00:55:54.103463 systemd[1]: Started sshd@34-147.28.180.221:22-139.178.89.65:46764.service - OpenSSH per-connection server daemon (139.178.89.65:46764). Sep 5 00:55:54.139820 sshd[13697]: Accepted publickey for core from 139.178.89.65 port 46764 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:55:54.140687 sshd[13697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:55:54.143778 systemd-logind[1516]: New session 13 of user core. Sep 5 00:55:54.163397 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 00:55:54.251142 sshd[13697]: pam_unix(sshd:session): session closed for user core Sep 5 00:55:54.253320 systemd[1]: sshd@34-147.28.180.221:22-139.178.89.65:46764.service: Deactivated successfully. Sep 5 00:55:54.254257 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 00:55:54.254692 systemd-logind[1516]: Session 13 logged out. Waiting for processes to exit. Sep 5 00:55:54.255222 systemd-logind[1516]: Removed session 13. Sep 5 00:55:59.262307 systemd[1]: Started sshd@35-147.28.180.221:22-139.178.89.65:48670.service - OpenSSH per-connection server daemon (139.178.89.65:48670). Sep 5 00:55:59.293759 sshd[13728]: Accepted publickey for core from 139.178.89.65 port 48670 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:55:59.294433 sshd[13728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:55:59.297018 systemd-logind[1516]: New session 14 of user core. 
Sep 5 00:55:59.313350 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 00:55:59.400431 sshd[13728]: pam_unix(sshd:session): session closed for user core Sep 5 00:55:59.410067 systemd[1]: sshd@35-147.28.180.221:22-139.178.89.65:48670.service: Deactivated successfully. Sep 5 00:55:59.410910 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 00:55:59.411672 systemd-logind[1516]: Session 14 logged out. Waiting for processes to exit. Sep 5 00:55:59.412399 systemd[1]: Started sshd@36-147.28.180.221:22-139.178.89.65:48678.service - OpenSSH per-connection server daemon (139.178.89.65:48678). Sep 5 00:55:59.412924 systemd-logind[1516]: Removed session 14. Sep 5 00:55:59.444006 sshd[13755]: Accepted publickey for core from 139.178.89.65 port 48678 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:55:59.444752 sshd[13755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:55:59.447634 systemd-logind[1516]: New session 15 of user core. Sep 5 00:55:59.463415 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 00:55:59.579973 sshd[13755]: pam_unix(sshd:session): session closed for user core Sep 5 00:55:59.605084 systemd[1]: sshd@36-147.28.180.221:22-139.178.89.65:48678.service: Deactivated successfully. Sep 5 00:55:59.610544 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 00:55:59.614187 systemd-logind[1516]: Session 15 logged out. Waiting for processes to exit. Sep 5 00:55:59.639971 systemd[1]: Started sshd@37-147.28.180.221:22-139.178.89.65:48684.service - OpenSSH per-connection server daemon (139.178.89.65:48684). Sep 5 00:55:59.642630 systemd-logind[1516]: Removed session 15. Sep 5 00:55:59.700597 sshd[13779]: Accepted publickey for core from 139.178.89.65 port 48684 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:55:59.701439 sshd[13779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:55:59.704302 systemd-logind[1516]: New session 16 of user core. Sep 5 00:55:59.718525 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 5 00:55:59.853150 sshd[13779]: pam_unix(sshd:session): session closed for user core Sep 5 00:55:59.854741 systemd[1]: sshd@37-147.28.180.221:22-139.178.89.65:48684.service: Deactivated successfully. Sep 5 00:55:59.855806 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 00:55:59.856529 systemd-logind[1516]: Session 16 logged out. Waiting for processes to exit. Sep 5 00:55:59.857106 systemd-logind[1516]: Removed session 16. Sep 5 00:56:04.883636 systemd[1]: Started sshd@38-147.28.180.221:22-139.178.89.65:48688.service - OpenSSH per-connection server daemon (139.178.89.65:48688). Sep 5 00:56:04.909971 sshd[13819]: Accepted publickey for core from 139.178.89.65 port 48688 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:56:04.910636 sshd[13819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:56:04.912874 systemd-logind[1516]: New session 17 of user core. Sep 5 00:56:04.928735 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 00:56:05.024198 sshd[13819]: pam_unix(sshd:session): session closed for user core Sep 5 00:56:05.025691 systemd[1]: sshd@38-147.28.180.221:22-139.178.89.65:48688.service: Deactivated successfully. Sep 5 00:56:05.026612 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 00:56:05.027206 systemd-logind[1516]: Session 17 logged out. Waiting for processes to exit. 
Sep 5 00:56:05.027758 systemd-logind[1516]: Removed session 17. Sep 5 00:56:10.061817 systemd[1]: Started sshd@39-147.28.180.221:22-139.178.89.65:35548.service - OpenSSH per-connection server daemon (139.178.89.65:35548). Sep 5 00:56:10.097721 sshd[13874]: Accepted publickey for core from 139.178.89.65 port 35548 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:56:10.098386 sshd[13874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:56:10.100912 systemd-logind[1516]: New session 18 of user core. Sep 5 00:56:10.111416 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 00:56:10.194645 sshd[13874]: pam_unix(sshd:session): session closed for user core Sep 5 00:56:10.196138 systemd[1]: sshd@39-147.28.180.221:22-139.178.89.65:35548.service: Deactivated successfully. Sep 5 00:56:10.197026 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 00:56:10.197736 systemd-logind[1516]: Session 18 logged out. Waiting for processes to exit. Sep 5 00:56:10.198231 systemd-logind[1516]: Removed session 18. Sep 5 00:56:15.210583 systemd[1]: Started sshd@40-147.28.180.221:22-139.178.89.65:35554.service - OpenSSH per-connection server daemon (139.178.89.65:35554). Sep 5 00:56:15.240712 sshd[13924]: Accepted publickey for core from 139.178.89.65 port 35554 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:56:15.241417 sshd[13924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:56:15.244037 systemd-logind[1516]: New session 19 of user core. Sep 5 00:56:15.260432 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 5 00:56:15.346442 sshd[13924]: pam_unix(sshd:session): session closed for user core Sep 5 00:56:15.348245 systemd[1]: sshd@40-147.28.180.221:22-139.178.89.65:35554.service: Deactivated successfully. Sep 5 00:56:15.349321 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 00:56:15.350149 systemd-logind[1516]: Session 19 logged out. Waiting for processes to exit. Sep 5 00:56:15.350971 systemd-logind[1516]: Removed session 19. Sep 5 00:56:20.358108 systemd[1]: Started sshd@41-147.28.180.221:22-139.178.89.65:45726.service - OpenSSH per-connection server daemon (139.178.89.65:45726). Sep 5 00:56:20.397822 sshd[13953]: Accepted publickey for core from 139.178.89.65 port 45726 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:56:20.398523 sshd[13953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:56:20.401038 systemd-logind[1516]: New session 20 of user core. Sep 5 00:56:20.419481 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 5 00:56:20.509456 sshd[13953]: pam_unix(sshd:session): session closed for user core Sep 5 00:56:20.532536 systemd[1]: sshd@41-147.28.180.221:22-139.178.89.65:45726.service: Deactivated successfully. Sep 5 00:56:20.533276 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 00:56:20.533962 systemd-logind[1516]: Session 20 logged out. Waiting for processes to exit. Sep 5 00:56:20.546546 systemd[1]: Started sshd@42-147.28.180.221:22-139.178.89.65:45732.service - OpenSSH per-connection server daemon (139.178.89.65:45732). Sep 5 00:56:20.547084 systemd-logind[1516]: Removed session 20. 
Sep 5 00:56:20.574340 sshd[13977]: Accepted publickey for core from 139.178.89.65 port 45732 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:56:20.575073 sshd[13977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:56:20.577810 systemd-logind[1516]: New session 21 of user core. Sep 5 00:56:20.587439 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 5 00:56:20.825331 sshd[13977]: pam_unix(sshd:session): session closed for user core Sep 5 00:56:20.847979 systemd[1]: sshd@42-147.28.180.221:22-139.178.89.65:45732.service: Deactivated successfully. Sep 5 00:56:20.848792 systemd[1]: session-21.scope: Deactivated successfully. Sep 5 00:56:20.849726 systemd-logind[1516]: Session 21 logged out. Waiting for processes to exit. Sep 5 00:56:20.850572 systemd[1]: Started sshd@43-147.28.180.221:22-139.178.89.65:45736.service - OpenSSH per-connection server daemon (139.178.89.65:45736). Sep 5 00:56:20.851086 systemd-logind[1516]: Removed session 21. Sep 5 00:56:20.881583 sshd[14003]: Accepted publickey for core from 139.178.89.65 port 45736 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:56:20.882426 sshd[14003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:56:20.885657 systemd-logind[1516]: New session 22 of user core. Sep 5 00:56:20.904542 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 5 00:56:22.076990 sshd[14003]: pam_unix(sshd:session): session closed for user core Sep 5 00:56:22.105733 systemd[1]: sshd@43-147.28.180.221:22-139.178.89.65:45736.service: Deactivated successfully. Sep 5 00:56:22.109987 systemd[1]: session-22.scope: Deactivated successfully. Sep 5 00:56:22.113755 systemd-logind[1516]: Session 22 logged out. Waiting for processes to exit. Sep 5 00:56:22.124062 systemd[1]: Started sshd@44-147.28.180.221:22-139.178.89.65:45752.service - OpenSSH per-connection server daemon (139.178.89.65:45752). Sep 5 00:56:22.126806 systemd-logind[1516]: Removed session 22. Sep 5 00:56:22.188318 sshd[14033]: Accepted publickey for core from 139.178.89.65 port 45752 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:56:22.189233 sshd[14033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:56:22.192148 systemd-logind[1516]: New session 23 of user core. Sep 5 00:56:22.201527 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 5 00:56:22.346860 sshd[14033]: pam_unix(sshd:session): session closed for user core Sep 5 00:56:22.359459 systemd[1]: sshd@44-147.28.180.221:22-139.178.89.65:45752.service: Deactivated successfully. Sep 5 00:56:22.360592 systemd[1]: session-23.scope: Deactivated successfully. Sep 5 00:56:22.361640 systemd-logind[1516]: Session 23 logged out. Waiting for processes to exit. Sep 5 00:56:22.362523 systemd[1]: Started sshd@45-147.28.180.221:22-139.178.89.65:45756.service - OpenSSH per-connection server daemon (139.178.89.65:45756). Sep 5 00:56:22.363241 systemd-logind[1516]: Removed session 23. Sep 5 00:56:22.416436 sshd[14058]: Accepted publickey for core from 139.178.89.65 port 45756 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:56:22.417770 sshd[14058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:56:22.421856 systemd-logind[1516]: New session 24 of user core. Sep 5 00:56:22.439474 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 5 00:56:22.539304 sshd[14058]: pam_unix(sshd:session): session closed for user core Sep 5 00:56:22.541066 systemd[1]: sshd@45-147.28.180.221:22-139.178.89.65:45756.service: Deactivated successfully. Sep 5 00:56:22.542149 systemd[1]: session-24.scope: Deactivated successfully. Sep 5 00:56:22.542926 systemd-logind[1516]: Session 24 logged out. Waiting for processes to exit. Sep 5 00:56:22.543548 systemd-logind[1516]: Removed session 24. Sep 5 00:56:27.576193 systemd[1]: Started sshd@46-147.28.180.221:22-139.178.89.65:46348.service - OpenSSH per-connection server daemon (139.178.89.65:46348). Sep 5 00:56:27.634664 sshd[14094]: Accepted publickey for core from 139.178.89.65 port 46348 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:56:27.635550 sshd[14094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:56:27.638467 systemd-logind[1516]: New session 25 of user core. Sep 5 00:56:27.654436 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 5 00:56:27.766641 sshd[14094]: pam_unix(sshd:session): session closed for user core Sep 5 00:56:27.768203 systemd[1]: sshd@46-147.28.180.221:22-139.178.89.65:46348.service: Deactivated successfully. Sep 5 00:56:27.769124 systemd[1]: session-25.scope: Deactivated successfully. Sep 5 00:56:27.769761 systemd-logind[1516]: Session 25 logged out. Waiting for processes to exit. Sep 5 00:56:27.770250 systemd-logind[1516]: Removed session 25. Sep 5 00:56:32.789629 systemd[1]: Started sshd@47-147.28.180.221:22-139.178.89.65:46350.service - OpenSSH per-connection server daemon (139.178.89.65:46350). Sep 5 00:56:32.818832 sshd[14120]: Accepted publickey for core from 139.178.89.65 port 46350 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:56:32.819502 sshd[14120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:56:32.821941 systemd-logind[1516]: New session 26 of user core. Sep 5 00:56:32.836442 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 5 00:56:32.924961 sshd[14120]: pam_unix(sshd:session): session closed for user core Sep 5 00:56:32.926659 systemd[1]: sshd@47-147.28.180.221:22-139.178.89.65:46350.service: Deactivated successfully. Sep 5 00:56:32.927721 systemd[1]: session-26.scope: Deactivated successfully. Sep 5 00:56:32.928466 systemd-logind[1516]: Session 26 logged out. Waiting for processes to exit. Sep 5 00:56:32.929093 systemd-logind[1516]: Removed session 26. Sep 5 00:56:37.955462 systemd[1]: Started sshd@48-147.28.180.221:22-139.178.89.65:35138.service - OpenSSH per-connection server daemon (139.178.89.65:35138). Sep 5 00:56:37.983325 sshd[14182]: Accepted publickey for core from 139.178.89.65 port 35138 ssh2: RSA SHA256:4FMeK9BETRsOfh6qIKuAN/E0Ly9Zjkq+yHHcf+5tfaM Sep 5 00:56:37.984057 sshd[14182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:56:37.986607 systemd-logind[1516]: New session 27 of user core. Sep 5 00:56:38.009592 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 5 00:56:38.095436 sshd[14182]: pam_unix(sshd:session): session closed for user core Sep 5 00:56:38.097090 systemd[1]: sshd@48-147.28.180.221:22-139.178.89.65:35138.service: Deactivated successfully. Sep 5 00:56:38.098214 systemd[1]: session-27.scope: Deactivated successfully. Sep 5 00:56:38.099000 systemd-logind[1516]: Session 27 logged out. Waiting for processes to exit. Sep 5 00:56:38.099742 systemd-logind[1516]: Removed session 27.
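The sessions for user core from 139.178.89.65 (sessions 12 through 27 above) each open and close within a few seconds, which suggests an automated client, though the log itself does not say what drives them. As an illustration, the sketch below pairs systemd-logind "New session" and "Removed session" messages from journal text and prints each session's duration; the script name is made up, and the timestamp handling assumes the year-less syslog-style format seen above, so durations are only meaningful within a single day.

#!/usr/bin/env python3
"""Pair systemd-logind session open/close events and report durations (sketch).

Usage: python3 logind_session_durations.py journal.txt
"""
import re
import sys
from datetime import datetime

TS = r"(\w{3}\s+\d+\s+\d\d:\d\d:\d\d\.\d+)"
NEW = re.compile(TS + r" systemd-logind\[\d+\]: New session (\d+) of user (\S+)\.")
REMOVED = re.compile(TS + r" systemd-logind\[\d+\]: Removed session (\d+)\.")

def parse_ts(ts):
    # Timestamps carry no year; strptime defaults to 1900, which is fine for durations.
    return datetime.strptime(re.sub(r"\s+", " ", ts), "%b %d %H:%M:%S.%f")

def session_durations(text):
    opened = {}      # session id -> (user, open time)
    durations = []   # (session id, user, duration in seconds)
    for ts, sid, user in NEW.findall(text):
        opened[sid] = (user, parse_ts(ts))
    for ts, sid in REMOVED.findall(text):
        if sid in opened:
            user, start = opened.pop(sid)
            durations.append((sid, user, (parse_ts(ts) - start).total_seconds()))
    return durations, opened

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8", errors="replace") as fh:
        text = fh.read()
    durations, still_open = session_durations(text)
    for sid, user, seconds in durations:
        print(f"session {sid} ({user}): {seconds:.3f}s")
    for sid, (user, start) in still_open.items():
        print(f"session {sid} ({user}): opened at {start.time()}, no close seen")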