Sep 4 19:46:36.994817 kernel: microcode: updated early: 0xf4 -> 0xfc, date = 2023-07-27
Sep 4 19:46:36.994831 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:54:07 -00 2024
Sep 4 19:46:36.994838 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 4 19:46:36.994843 kernel: BIOS-provided physical RAM map:
Sep 4 19:46:36.994847 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Sep 4 19:46:36.994851 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Sep 4 19:46:36.994856 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Sep 4 19:46:36.994860 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Sep 4 19:46:36.994864 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Sep 4 19:46:36.994868 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819cefff] usable
Sep 4 19:46:36.994872 kernel: BIOS-e820: [mem 0x00000000819cf000-0x00000000819cffff] ACPI NVS
Sep 4 19:46:36.994877 kernel: BIOS-e820: [mem 0x00000000819d0000-0x00000000819d0fff] reserved
Sep 4 19:46:36.994881 kernel: BIOS-e820: [mem 0x00000000819d1000-0x000000008afccfff] usable
Sep 4 19:46:36.994885 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Sep 4 19:46:36.994890 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Sep 4 19:46:36.994895 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Sep 4 19:46:36.994900 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Sep 4 19:46:36.994905 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Sep 4 19:46:36.994909 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Sep 4 19:46:36.994914 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 4 19:46:36.994918 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Sep 4 19:46:36.994922 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Sep 4 19:46:36.994927 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Sep 4 19:46:36.994931 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Sep 4 19:46:36.994936 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Sep 4 19:46:36.994940 kernel: NX (Execute Disable) protection: active
Sep 4 19:46:36.994945 kernel: APIC: Static calls initialized
Sep 4 19:46:36.994949 kernel: SMBIOS 3.2.1 present.
Sep 4 19:46:36.994955 kernel: DMI: Supermicro X11SCM-F/X11SCM-F, BIOS 1.9 09/16/2022 Sep 4 19:46:36.994960 kernel: tsc: Detected 3400.000 MHz processor Sep 4 19:46:36.994964 kernel: tsc: Detected 3399.906 MHz TSC Sep 4 19:46:36.994969 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 19:46:36.994974 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 19:46:36.994978 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Sep 4 19:46:36.994983 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Sep 4 19:46:36.994988 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 19:46:36.994992 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Sep 4 19:46:36.994997 kernel: Using GB pages for direct mapping Sep 4 19:46:36.995002 kernel: ACPI: Early table checksum verification disabled Sep 4 19:46:36.995007 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Sep 4 19:46:36.995014 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Sep 4 19:46:36.995019 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Sep 4 19:46:36.995024 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Sep 4 19:46:36.995029 kernel: ACPI: FACS 0x000000008C66CF80 000040 Sep 4 19:46:36.995035 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Sep 4 19:46:36.995040 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Sep 4 19:46:36.995045 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Sep 4 19:46:36.995049 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Sep 4 19:46:36.995054 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Sep 4 19:46:36.995059 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Sep 4 19:46:36.995064 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Sep 4 19:46:36.995070 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Sep 4 19:46:36.995075 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 4 19:46:36.995080 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Sep 4 19:46:36.995084 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Sep 4 19:46:36.995089 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 4 19:46:36.995094 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 4 19:46:36.995099 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Sep 4 19:46:36.995104 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Sep 4 19:46:36.995109 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 4 19:46:36.995115 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Sep 4 19:46:36.995120 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Sep 4 19:46:36.995124 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Sep 4 19:46:36.995129 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Sep 4 19:46:36.995134 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Sep 4 19:46:36.995139 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Sep 4 19:46:36.995144 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Sep 4 19:46:36.995149 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Sep 4 19:46:36.995155 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Sep 4 19:46:36.995160 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Sep 4 19:46:36.995165 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Sep 4 19:46:36.995170 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Sep 4 19:46:36.995175 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Sep 4 19:46:36.995180 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Sep 4 19:46:36.995184 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Sep 4 19:46:36.995189 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Sep 4 19:46:36.995194 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Sep 4 19:46:36.995208 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Sep 4 19:46:36.995213 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Sep 4 19:46:36.995236 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Sep 4 19:46:36.995241 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Sep 4 19:46:36.995246 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Sep 4 19:46:36.995251 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Sep 4 19:46:36.995256 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Sep 4 19:46:36.995277 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Sep 4 19:46:36.995282 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Sep 4 19:46:36.995287 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Sep 4 19:46:36.995293 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Sep 4 19:46:36.995297 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Sep 4 19:46:36.995302 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Sep 4 19:46:36.995307 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Sep 4 19:46:36.995312 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Sep 4 19:46:36.995317 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Sep 4 19:46:36.995322 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Sep 4 19:46:36.995327 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Sep 4 19:46:36.995331 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Sep 4 19:46:36.995337 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Sep 4 19:46:36.995342 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Sep 4 19:46:36.995347 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Sep 4 19:46:36.995352 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Sep 4 19:46:36.995356 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Sep 4 19:46:36.995361 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Sep 4 19:46:36.995366 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Sep 4 19:46:36.995371 kernel: No NUMA configuration found Sep 4 19:46:36.995376 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Sep 4 19:46:36.995382 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Sep 4 19:46:36.995387 kernel: Zone ranges: Sep 4 19:46:36.995392 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 19:46:36.995396 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 4 19:46:36.995401 kernel: Normal [mem 
0x0000000100000000-0x000000086effffff] Sep 4 19:46:36.995406 kernel: Movable zone start for each node Sep 4 19:46:36.995411 kernel: Early memory node ranges Sep 4 19:46:36.995416 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Sep 4 19:46:36.995421 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Sep 4 19:46:36.995426 kernel: node 0: [mem 0x0000000040400000-0x00000000819cefff] Sep 4 19:46:36.995432 kernel: node 0: [mem 0x00000000819d1000-0x000000008afccfff] Sep 4 19:46:36.995436 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Sep 4 19:46:36.995441 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Sep 4 19:46:36.995450 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Sep 4 19:46:36.995456 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Sep 4 19:46:36.995461 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 19:46:36.995466 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Sep 4 19:46:36.995472 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Sep 4 19:46:36.995478 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Sep 4 19:46:36.995483 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Sep 4 19:46:36.995488 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Sep 4 19:46:36.995493 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Sep 4 19:46:36.995499 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Sep 4 19:46:36.995504 kernel: ACPI: PM-Timer IO Port: 0x1808 Sep 4 19:46:36.995509 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Sep 4 19:46:36.995515 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Sep 4 19:46:36.995521 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Sep 4 19:46:36.995526 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Sep 4 19:46:36.995531 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Sep 4 19:46:36.995536 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Sep 4 19:46:36.995542 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Sep 4 19:46:36.995547 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Sep 4 19:46:36.995552 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Sep 4 19:46:36.995557 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Sep 4 19:46:36.995562 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Sep 4 19:46:36.995567 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Sep 4 19:46:36.995574 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Sep 4 19:46:36.995579 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Sep 4 19:46:36.995584 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Sep 4 19:46:36.995589 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Sep 4 19:46:36.995594 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Sep 4 19:46:36.995599 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 4 19:46:36.995605 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 19:46:36.995610 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 19:46:36.995615 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 4 19:46:36.995621 kernel: TSC deadline timer available Sep 4 19:46:36.995627 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Sep 4 19:46:36.995632 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Sep 4 19:46:36.995637 
kernel: Booting paravirtualized kernel on bare hardware Sep 4 19:46:36.995642 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 19:46:36.995648 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Sep 4 19:46:36.995653 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u262144 Sep 4 19:46:36.995658 kernel: pcpu-alloc: s196904 r8192 d32472 u262144 alloc=1*2097152 Sep 4 19:46:36.995663 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Sep 4 19:46:36.995670 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 19:46:36.995676 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 19:46:36.995681 kernel: random: crng init done Sep 4 19:46:36.995686 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Sep 4 19:46:36.995691 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Sep 4 19:46:36.995697 kernel: Fallback order for Node 0: 0 Sep 4 19:46:36.995702 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Sep 4 19:46:36.995707 kernel: Policy zone: Normal Sep 4 19:46:36.995713 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 19:46:36.995719 kernel: software IO TLB: area num 16. Sep 4 19:46:36.995724 kernel: Memory: 32720308K/33452980K available (12288K kernel code, 2304K rwdata, 22708K rodata, 42704K init, 2488K bss, 732412K reserved, 0K cma-reserved) Sep 4 19:46:36.995730 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Sep 4 19:46:36.995735 kernel: ftrace: allocating 37748 entries in 148 pages Sep 4 19:46:36.995740 kernel: ftrace: allocated 148 pages with 3 groups Sep 4 19:46:36.995745 kernel: Dynamic Preempt: voluntary Sep 4 19:46:36.995751 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 19:46:36.995756 kernel: rcu: RCU event tracing is enabled. Sep 4 19:46:36.995763 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Sep 4 19:46:36.995768 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 19:46:36.995773 kernel: Rude variant of Tasks RCU enabled. Sep 4 19:46:36.995779 kernel: Tracing variant of Tasks RCU enabled. Sep 4 19:46:36.995784 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 4 19:46:36.995789 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Sep 4 19:46:36.995794 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Sep 4 19:46:36.995799 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 19:46:36.995805 kernel: Console: colour dummy device 80x25 Sep 4 19:46:36.995811 kernel: printk: console [tty0] enabled Sep 4 19:46:36.995816 kernel: printk: console [ttyS1] enabled Sep 4 19:46:36.995821 kernel: ACPI: Core revision 20230628 Sep 4 19:46:36.995827 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
Sep 4 19:46:36.995832 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 19:46:36.995837 kernel: DMAR: Host address width 39 Sep 4 19:46:36.995842 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Sep 4 19:46:36.995848 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Sep 4 19:46:36.995853 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Sep 4 19:46:36.995859 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Sep 4 19:46:36.995864 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Sep 4 19:46:36.995870 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Sep 4 19:46:36.995875 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Sep 4 19:46:36.995880 kernel: x2apic enabled Sep 4 19:46:36.995886 kernel: APIC: Switched APIC routing to: cluster x2apic Sep 4 19:46:36.995891 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Sep 4 19:46:36.995896 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Sep 4 19:46:36.995902 kernel: CPU0: Thermal monitoring enabled (TM1) Sep 4 19:46:36.995908 kernel: process: using mwait in idle threads Sep 4 19:46:36.995913 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 4 19:46:36.995919 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Sep 4 19:46:36.995924 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 19:46:36.995929 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Sep 4 19:46:36.995934 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Sep 4 19:46:36.995939 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Sep 4 19:46:36.995945 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 4 19:46:36.995950 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Sep 4 19:46:36.995955 kernel: RETBleed: Mitigation: Enhanced IBRS Sep 4 19:46:36.995960 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 4 19:46:36.995966 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 4 19:46:36.995972 kernel: TAA: Mitigation: TSX disabled Sep 4 19:46:36.995977 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Sep 4 19:46:36.995982 kernel: SRBDS: Mitigation: Microcode Sep 4 19:46:36.995987 kernel: GDS: Mitigation: Microcode Sep 4 19:46:36.995992 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 19:46:36.995998 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 19:46:36.996003 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 19:46:36.996008 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Sep 4 19:46:36.996013 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Sep 4 19:46:36.996018 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 19:46:36.996025 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Sep 4 19:46:36.996030 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Sep 4 19:46:36.996035 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
Sep 4 19:46:36.996040 kernel: Freeing SMP alternatives memory: 32K Sep 4 19:46:36.996046 kernel: pid_max: default: 32768 minimum: 301 Sep 4 19:46:36.996051 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 4 19:46:36.996056 kernel: landlock: Up and running. Sep 4 19:46:36.996061 kernel: SELinux: Initializing. Sep 4 19:46:36.996067 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 19:46:36.996072 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 19:46:36.996077 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Sep 4 19:46:36.996082 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. Sep 4 19:46:36.996089 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. Sep 4 19:46:36.996094 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. Sep 4 19:46:36.996099 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Sep 4 19:46:36.996105 kernel: ... version: 4 Sep 4 19:46:36.996110 kernel: ... bit width: 48 Sep 4 19:46:36.996115 kernel: ... generic registers: 4 Sep 4 19:46:36.996120 kernel: ... value mask: 0000ffffffffffff Sep 4 19:46:36.996125 kernel: ... max period: 00007fffffffffff Sep 4 19:46:36.996131 kernel: ... fixed-purpose events: 3 Sep 4 19:46:36.996137 kernel: ... event mask: 000000070000000f Sep 4 19:46:36.996142 kernel: signal: max sigframe size: 2032 Sep 4 19:46:36.996147 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Sep 4 19:46:36.996153 kernel: rcu: Hierarchical SRCU implementation. Sep 4 19:46:36.996158 kernel: rcu: Max phase no-delay instances is 400. Sep 4 19:46:36.996163 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Sep 4 19:46:36.996168 kernel: smp: Bringing up secondary CPUs ... Sep 4 19:46:36.996174 kernel: smpboot: x86: Booting SMP configuration: Sep 4 19:46:36.996179 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Sep 4 19:46:36.996186 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Sep 4 19:46:36.996191 kernel: smp: Brought up 1 node, 16 CPUs Sep 4 19:46:36.996197 kernel: smpboot: Max logical packages: 1 Sep 4 19:46:36.996204 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Sep 4 19:46:36.996209 kernel: devtmpfs: initialized Sep 4 19:46:36.996214 kernel: x86/mm: Memory block size: 128MB Sep 4 19:46:36.996242 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819cf000-0x819cffff] (4096 bytes) Sep 4 19:46:36.996247 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Sep 4 19:46:36.996269 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 19:46:36.996274 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Sep 4 19:46:36.996279 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 19:46:36.996284 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 19:46:36.996289 kernel: audit: initializing netlink subsys (disabled) Sep 4 19:46:36.996295 kernel: audit: type=2000 audit(1725479191.039:1): state=initialized audit_enabled=0 res=1 Sep 4 19:46:36.996300 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 19:46:36.996305 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 19:46:36.996310 kernel: cpuidle: using governor menu Sep 4 19:46:36.996316 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 19:46:36.996322 kernel: dca service started, version 1.12.1 Sep 4 19:46:36.996327 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Sep 4 19:46:36.996332 kernel: PCI: Using configuration type 1 for base access Sep 4 19:46:36.996338 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Sep 4 19:46:36.996343 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 4 19:46:36.996348 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 19:46:36.996353 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 19:46:36.996358 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 19:46:36.996365 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 19:46:36.996370 kernel: ACPI: Added _OSI(Module Device) Sep 4 19:46:36.996375 kernel: ACPI: Added _OSI(Processor Device) Sep 4 19:46:36.996381 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 19:46:36.996386 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 19:46:36.996391 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Sep 4 19:46:36.996396 kernel: ACPI: Dynamic OEM Table Load: Sep 4 19:46:36.996402 kernel: ACPI: SSDT 0xFFFF88BB41ECD800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Sep 4 19:46:36.996407 kernel: ACPI: Dynamic OEM Table Load: Sep 4 19:46:36.996413 kernel: ACPI: SSDT 0xFFFF88BB41EC6000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Sep 4 19:46:36.996418 kernel: ACPI: Dynamic OEM Table Load: Sep 4 19:46:36.996424 kernel: ACPI: SSDT 0xFFFF88BB41537400 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Sep 4 19:46:36.996429 kernel: ACPI: Dynamic OEM Table Load: Sep 4 19:46:36.996434 kernel: ACPI: SSDT 0xFFFF88BB41EC4800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Sep 4 19:46:36.996439 kernel: ACPI: Dynamic OEM Table Load: Sep 4 19:46:36.996444 kernel: ACPI: SSDT 0xFFFF88BB41ED1000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Sep 4 19:46:36.996449 kernel: ACPI: Dynamic OEM Table Load: Sep 4 19:46:36.996455 kernel: ACPI: SSDT 0xFFFF88BB41ECDC00 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Sep 4 19:46:36.996460 kernel: ACPI: _OSC evaluated successfully for all CPUs Sep 4 19:46:36.996466 kernel: ACPI: Interpreter enabled Sep 4 19:46:36.996471 kernel: ACPI: PM: (supports S0 S5) Sep 4 19:46:36.996476 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 19:46:36.996482 kernel: HEST: Enabling Firmware First mode for corrected errors. Sep 4 19:46:36.996487 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Sep 4 19:46:36.996492 kernel: HEST: Table parsing has been initialized. Sep 4 19:46:36.996497 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Sep 4 19:46:36.996503 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 19:46:36.996508 kernel: PCI: Using E820 reservations for host bridge windows Sep 4 19:46:36.996514 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Sep 4 19:46:36.996519 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Sep 4 19:46:36.996525 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Sep 4 19:46:36.996530 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Sep 4 19:46:36.996535 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Sep 4 19:46:36.996541 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Sep 4 19:46:36.996546 kernel: ACPI: \_TZ_.FN00: New power resource Sep 4 19:46:36.996551 kernel: ACPI: \_TZ_.FN01: New power resource Sep 4 19:46:36.996556 kernel: ACPI: \_TZ_.FN02: New power resource Sep 4 19:46:36.996562 kernel: ACPI: \_TZ_.FN03: New power resource Sep 4 19:46:36.996568 kernel: ACPI: \_TZ_.FN04: New power resource Sep 4 19:46:36.996573 kernel: ACPI: \PIN_: New power resource Sep 4 19:46:36.996578 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Sep 4 19:46:36.996654 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 4 19:46:36.996707 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Sep 4 19:46:36.996753 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Sep 4 19:46:36.996763 kernel: PCI host bridge to bus 0000:00 Sep 4 19:46:36.996816 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 4 19:46:36.996858 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 4 19:46:36.996901 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 4 19:46:36.996941 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Sep 4 19:46:36.996982 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Sep 4 19:46:36.997022 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Sep 4 19:46:36.997081 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Sep 4 19:46:36.997137 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Sep 4 19:46:36.997188 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Sep 4 19:46:36.997278 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Sep 4 19:46:36.997326 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Sep 4 19:46:36.997377 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Sep 4 19:46:36.997428 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Sep 4 19:46:36.997479 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Sep 4 19:46:36.997526 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Sep 4 19:46:36.997572 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Sep 4 19:46:36.997623 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Sep 4 19:46:36.997671 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Sep 4 19:46:36.997720 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Sep 4 19:46:36.997771 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Sep 4 19:46:36.997817 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Sep 4 19:46:36.997871 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Sep 4 19:46:36.997917 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] 
Sep 4 19:46:36.997968 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Sep 4 19:46:36.998017 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Sep 4 19:46:36.998066 kernel: pci 0000:00:16.0: PME# supported from D3hot Sep 4 19:46:36.998123 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Sep 4 19:46:36.998173 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Sep 4 19:46:36.998257 kernel: pci 0000:00:16.1: PME# supported from D3hot Sep 4 19:46:36.998308 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Sep 4 19:46:36.998354 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Sep 4 19:46:36.998403 kernel: pci 0000:00:16.4: PME# supported from D3hot Sep 4 19:46:36.998456 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Sep 4 19:46:36.998503 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Sep 4 19:46:36.998550 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Sep 4 19:46:36.998595 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Sep 4 19:46:36.998642 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Sep 4 19:46:36.998688 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Sep 4 19:46:36.998738 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Sep 4 19:46:36.998785 kernel: pci 0000:00:17.0: PME# supported from D3hot Sep 4 19:46:36.998838 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Sep 4 19:46:36.998886 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Sep 4 19:46:36.998942 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Sep 4 19:46:36.998992 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Sep 4 19:46:36.999045 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Sep 4 19:46:36.999092 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Sep 4 19:46:36.999144 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Sep 4 19:46:36.999192 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Sep 4 19:46:36.999284 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Sep 4 19:46:36.999333 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Sep 4 19:46:36.999385 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Sep 4 19:46:36.999433 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Sep 4 19:46:36.999483 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Sep 4 19:46:36.999535 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Sep 4 19:46:36.999585 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Sep 4 19:46:36.999633 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Sep 4 19:46:36.999685 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Sep 4 19:46:36.999733 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Sep 4 19:46:36.999788 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Sep 4 19:46:36.999837 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Sep 4 19:46:36.999886 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Sep 4 19:46:36.999937 kernel: pci 0000:01:00.0: PME# supported from D3cold Sep 4 19:46:36.999986 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Sep 4 19:46:37.000034 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Sep 4 19:46:37.000087 kernel: pci 0000:01:00.1: [15b3:1015] type 00 
class 0x020000 Sep 4 19:46:37.000135 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Sep 4 19:46:37.000184 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Sep 4 19:46:37.000276 kernel: pci 0000:01:00.1: PME# supported from D3cold Sep 4 19:46:37.000324 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Sep 4 19:46:37.000373 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Sep 4 19:46:37.000422 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 4 19:46:37.000470 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Sep 4 19:46:37.000517 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 4 19:46:37.000565 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Sep 4 19:46:37.000617 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Sep 4 19:46:37.000669 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Sep 4 19:46:37.000718 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Sep 4 19:46:37.000767 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Sep 4 19:46:37.000814 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Sep 4 19:46:37.000863 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 4 19:46:37.000911 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Sep 4 19:46:37.000958 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 4 19:46:37.001009 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Sep 4 19:46:37.001062 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Sep 4 19:46:37.001111 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Sep 4 19:46:37.001159 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Sep 4 19:46:37.001210 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Sep 4 19:46:37.001293 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Sep 4 19:46:37.001342 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Sep 4 19:46:37.001391 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Sep 4 19:46:37.001442 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 4 19:46:37.001490 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Sep 4 19:46:37.001536 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Sep 4 19:46:37.001589 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Sep 4 19:46:37.001637 kernel: pci 0000:06:00.0: enabling Extended Tags Sep 4 19:46:37.001686 kernel: pci 0000:06:00.0: supports D1 D2 Sep 4 19:46:37.001734 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 4 19:46:37.001785 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Sep 4 19:46:37.001832 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Sep 4 19:46:37.001880 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Sep 4 19:46:37.001935 kernel: pci_bus 0000:07: extended config space not accessible Sep 4 19:46:37.001989 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Sep 4 19:46:37.002040 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Sep 4 19:46:37.002089 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Sep 4 19:46:37.002142 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Sep 4 19:46:37.002192 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 4 19:46:37.002285 kernel: pci 0000:07:00.0: supports D1 D2 Sep 4 
19:46:37.002336 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 4 19:46:37.002384 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Sep 4 19:46:37.002433 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Sep 4 19:46:37.002481 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 4 19:46:37.002490 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Sep 4 19:46:37.002497 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Sep 4 19:46:37.002503 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Sep 4 19:46:37.002509 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Sep 4 19:46:37.002514 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Sep 4 19:46:37.002520 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Sep 4 19:46:37.002525 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Sep 4 19:46:37.002531 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Sep 4 19:46:37.002536 kernel: iommu: Default domain type: Translated Sep 4 19:46:37.002542 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 19:46:37.002549 kernel: PCI: Using ACPI for IRQ routing Sep 4 19:46:37.002554 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 4 19:46:37.002560 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Sep 4 19:46:37.002565 kernel: e820: reserve RAM buffer [mem 0x819cf000-0x83ffffff] Sep 4 19:46:37.002572 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Sep 4 19:46:37.002577 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Sep 4 19:46:37.002583 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Sep 4 19:46:37.002588 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Sep 4 19:46:37.002638 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Sep 4 19:46:37.002691 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Sep 4 19:46:37.002741 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 4 19:46:37.002749 kernel: vgaarb: loaded Sep 4 19:46:37.002755 kernel: clocksource: Switched to clocksource tsc-early Sep 4 19:46:37.002761 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 19:46:37.002767 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 19:46:37.002772 kernel: pnp: PnP ACPI init Sep 4 19:46:37.002821 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Sep 4 19:46:37.002872 kernel: pnp 00:02: [dma 0 disabled] Sep 4 19:46:37.002920 kernel: pnp 00:03: [dma 0 disabled] Sep 4 19:46:37.002970 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Sep 4 19:46:37.003014 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Sep 4 19:46:37.003059 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Sep 4 19:46:37.003106 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Sep 4 19:46:37.003152 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Sep 4 19:46:37.003195 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Sep 4 19:46:37.003287 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Sep 4 19:46:37.003331 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Sep 4 19:46:37.003374 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Sep 4 19:46:37.003416 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Sep 4 19:46:37.003459 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Sep 4 
19:46:37.003508 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Sep 4 19:46:37.003552 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Sep 4 19:46:37.003597 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Sep 4 19:46:37.003642 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Sep 4 19:46:37.003684 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Sep 4 19:46:37.003727 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Sep 4 19:46:37.003770 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Sep 4 19:46:37.003819 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Sep 4 19:46:37.003828 kernel: pnp: PnP ACPI: found 10 devices Sep 4 19:46:37.003834 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 19:46:37.003839 kernel: NET: Registered PF_INET protocol family Sep 4 19:46:37.003845 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 19:46:37.003851 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 4 19:46:37.003856 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 19:46:37.003862 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 19:46:37.003869 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 4 19:46:37.003875 kernel: TCP: Hash tables configured (established 262144 bind 65536) Sep 4 19:46:37.003880 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 4 19:46:37.003886 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 4 19:46:37.003891 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 19:46:37.003897 kernel: NET: Registered PF_XDP protocol family Sep 4 19:46:37.003944 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Sep 4 19:46:37.003994 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Sep 4 19:46:37.004041 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Sep 4 19:46:37.004094 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 4 19:46:37.004144 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 4 19:46:37.004194 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 4 19:46:37.004292 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 4 19:46:37.004339 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 4 19:46:37.004388 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Sep 4 19:46:37.004435 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 4 19:46:37.004483 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Sep 4 19:46:37.004533 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Sep 4 19:46:37.004581 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 4 19:46:37.004629 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Sep 4 19:46:37.004677 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Sep 4 19:46:37.004727 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 4 19:46:37.004775 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Sep 4 19:46:37.004823 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Sep 4 19:46:37.004873 kernel: pci 0000:06:00.0: PCI 
bridge to [bus 07] Sep 4 19:46:37.004922 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Sep 4 19:46:37.004973 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 4 19:46:37.005021 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Sep 4 19:46:37.005069 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Sep 4 19:46:37.005117 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Sep 4 19:46:37.005163 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Sep 4 19:46:37.005231 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 4 19:46:37.005294 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 4 19:46:37.005336 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 4 19:46:37.005377 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Sep 4 19:46:37.005419 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Sep 4 19:46:37.005466 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Sep 4 19:46:37.005511 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Sep 4 19:46:37.005563 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Sep 4 19:46:37.005607 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Sep 4 19:46:37.005655 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Sep 4 19:46:37.005700 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Sep 4 19:46:37.005748 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Sep 4 19:46:37.005790 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Sep 4 19:46:37.005839 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Sep 4 19:46:37.005883 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Sep 4 19:46:37.005891 kernel: PCI: CLS 64 bytes, default 64 Sep 4 19:46:37.005897 kernel: DMAR: No ATSR found Sep 4 19:46:37.005903 kernel: DMAR: No SATC found Sep 4 19:46:37.005909 kernel: DMAR: dmar0: Using Queued invalidation Sep 4 19:46:37.005956 kernel: pci 0000:00:00.0: Adding to iommu group 0 Sep 4 19:46:37.006004 kernel: pci 0000:00:01.0: Adding to iommu group 1 Sep 4 19:46:37.006055 kernel: pci 0000:00:08.0: Adding to iommu group 2 Sep 4 19:46:37.006103 kernel: pci 0000:00:12.0: Adding to iommu group 3 Sep 4 19:46:37.006151 kernel: pci 0000:00:14.0: Adding to iommu group 4 Sep 4 19:46:37.006197 kernel: pci 0000:00:14.2: Adding to iommu group 4 Sep 4 19:46:37.006293 kernel: pci 0000:00:15.0: Adding to iommu group 5 Sep 4 19:46:37.006339 kernel: pci 0000:00:15.1: Adding to iommu group 5 Sep 4 19:46:37.006387 kernel: pci 0000:00:16.0: Adding to iommu group 6 Sep 4 19:46:37.006435 kernel: pci 0000:00:16.1: Adding to iommu group 6 Sep 4 19:46:37.006484 kernel: pci 0000:00:16.4: Adding to iommu group 6 Sep 4 19:46:37.006531 kernel: pci 0000:00:17.0: Adding to iommu group 7 Sep 4 19:46:37.006579 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Sep 4 19:46:37.006626 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Sep 4 19:46:37.006673 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Sep 4 19:46:37.006721 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Sep 4 19:46:37.006768 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Sep 4 19:46:37.006816 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Sep 4 19:46:37.006866 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Sep 4 19:46:37.006913 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Sep 4 19:46:37.006961 kernel: pci 
0000:00:1f.5: Adding to iommu group 14 Sep 4 19:46:37.007010 kernel: pci 0000:01:00.0: Adding to iommu group 1 Sep 4 19:46:37.007059 kernel: pci 0000:01:00.1: Adding to iommu group 1 Sep 4 19:46:37.007109 kernel: pci 0000:03:00.0: Adding to iommu group 15 Sep 4 19:46:37.007157 kernel: pci 0000:04:00.0: Adding to iommu group 16 Sep 4 19:46:37.007231 kernel: pci 0000:06:00.0: Adding to iommu group 17 Sep 4 19:46:37.007306 kernel: pci 0000:07:00.0: Adding to iommu group 17 Sep 4 19:46:37.007314 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Sep 4 19:46:37.007320 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 4 19:46:37.007326 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Sep 4 19:46:37.007332 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Sep 4 19:46:37.007337 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Sep 4 19:46:37.007343 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Sep 4 19:46:37.007349 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Sep 4 19:46:37.007399 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Sep 4 19:46:37.007409 kernel: Initialise system trusted keyrings Sep 4 19:46:37.007415 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Sep 4 19:46:37.007421 kernel: Key type asymmetric registered Sep 4 19:46:37.007426 kernel: Asymmetric key parser 'x509' registered Sep 4 19:46:37.007432 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 4 19:46:37.007438 kernel: io scheduler mq-deadline registered Sep 4 19:46:37.007443 kernel: io scheduler kyber registered Sep 4 19:46:37.007449 kernel: io scheduler bfq registered Sep 4 19:46:37.007497 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Sep 4 19:46:37.007545 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Sep 4 19:46:37.007594 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Sep 4 19:46:37.007641 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Sep 4 19:46:37.007689 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Sep 4 19:46:37.007736 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Sep 4 19:46:37.007789 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Sep 4 19:46:37.007797 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Sep 4 19:46:37.007805 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Sep 4 19:46:37.007810 kernel: pstore: Using crash dump compression: deflate Sep 4 19:46:37.007816 kernel: pstore: Registered erst as persistent store backend Sep 4 19:46:37.007822 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 19:46:37.007827 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 19:46:37.007833 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 19:46:37.007839 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 4 19:46:37.007845 kernel: hpet_acpi_add: no address or irqs in _CRS Sep 4 19:46:37.007895 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Sep 4 19:46:37.007903 kernel: i8042: PNP: No PS/2 controller found. 
Sep 4 19:46:37.007945 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Sep 4 19:46:37.007990 kernel: rtc_cmos rtc_cmos: registered as rtc0
Sep 4 19:46:37.008034 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-09-04T19:46:35 UTC (1725479195)
Sep 4 19:46:37.008077 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Sep 4 19:46:37.008085 kernel: intel_pstate: Intel P-state driver initializing
Sep 4 19:46:37.008091 kernel: intel_pstate: Disabling energy efficiency optimization
Sep 4 19:46:37.008098 kernel: intel_pstate: HWP enabled
Sep 4 19:46:37.008104 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Sep 4 19:46:37.008110 kernel: vesafb: scrolling: redraw
Sep 4 19:46:37.008115 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Sep 4 19:46:37.008121 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000ed0dd379, using 768k, total 768k
Sep 4 19:46:37.008127 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 19:46:37.008132 kernel: fb0: VESA VGA frame buffer device
Sep 4 19:46:37.008138 kernel: NET: Registered PF_INET6 protocol family
Sep 4 19:46:37.008143 kernel: Segment Routing with IPv6
Sep 4 19:46:37.008150 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 19:46:37.008155 kernel: NET: Registered PF_PACKET protocol family
Sep 4 19:46:37.008161 kernel: Key type dns_resolver registered
Sep 4 19:46:37.008166 kernel: microcode: Microcode Update Driver: v2.2.
Sep 4 19:46:37.008172 kernel: IPI shorthand broadcast: enabled
Sep 4 19:46:37.008178 kernel: sched_clock: Marking stable (2475000698, 1379895731)->(4393541428, -538644999)
Sep 4 19:46:37.008183 kernel: registered taskstats version 1
Sep 4 19:46:37.008189 kernel: Loading compiled-in X.509 certificates
Sep 4 19:46:37.008194 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 8669771ab5e11f458b79e6634fe685dacc266b18'
Sep 4 19:46:37.008203 kernel: Key type .fscrypt registered
Sep 4 19:46:37.008209 kernel: Key type fscrypt-provisioning registered
Sep 4 19:46:37.008240 kernel: ima: Allocated hash algorithm: sha1
Sep 4 19:46:37.008245 kernel: ima: No architecture policies found
Sep 4 19:46:37.008251 kernel: clk: Disabling unused clocks
Sep 4 19:46:37.008257 kernel: Freeing unused kernel image (initmem) memory: 42704K
Sep 4 19:46:37.008282 kernel: Write protecting the kernel read-only data: 36864k
Sep 4 19:46:37.008288 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K
Sep 4 19:46:37.008293 kernel: Run /init as init process
Sep 4 19:46:37.008300 kernel: with arguments:
Sep 4 19:46:37.008305 kernel: /init
Sep 4 19:46:37.008311 kernel: with environment:
Sep 4 19:46:37.008316 kernel: HOME=/
Sep 4 19:46:37.008322 kernel: TERM=linux
Sep 4 19:46:37.008327 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 19:46:37.008334 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 19:46:37.008341 systemd[1]: Detected architecture x86-64.
Sep 4 19:46:37.008348 systemd[1]: Running in initrd.
Sep 4 19:46:37.008354 systemd[1]: No hostname configured, using default hostname.
Sep 4 19:46:37.008360 systemd[1]: Hostname set to .
Sep 4 19:46:37.008365 systemd[1]: Initializing machine ID from random generator.
Sep 4 19:46:37.008371 systemd[1]: Queued start job for default target initrd.target.
Sep 4 19:46:37.008377 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 19:46:37.008383 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 19:46:37.008390 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 19:46:37.008396 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 19:46:37.008402 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-ROOT.device - /dev/disk/by-partlabel/ROOT...
Sep 4 19:46:37.008408 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 19:46:37.008414 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 19:46:37.008421 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 19:46:37.008426 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz
Sep 4 19:46:37.008433 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns
Sep 4 19:46:37.008439 kernel: clocksource: Switched to clocksource tsc
Sep 4 19:46:37.008444 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 19:46:37.008450 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 19:46:37.008456 systemd[1]: Reached target ignition-diskful-subsequent.target - Ignition Subsequent Boot Disk Setup.
Sep 4 19:46:37.008462 systemd[1]: Reached target paths.target - Path Units.
Sep 4 19:46:37.008468 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 19:46:37.008474 systemd[1]: Reached target swap.target - Swaps.
Sep 4 19:46:37.008480 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 19:46:37.008486 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 19:46:37.008492 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 19:46:37.008498 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 19:46:37.008504 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 19:46:37.008510 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 19:46:37.008516 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 19:46:37.008522 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 19:46:37.008527 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 19:46:37.008534 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 19:46:37.008540 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 19:46:37.008546 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 19:46:37.008562 systemd-journald[260]: Collecting audit messages is disabled.
Sep 4 19:46:37.008577 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 19:46:37.008584 systemd-journald[260]: Journal started
Sep 4 19:46:37.008597 systemd-journald[260]: Runtime Journal (/run/log/journal/aa342a20d832441c971b8d2ad26c8d4e) is 8.0M, max 639.9M, 631.9M free.
Sep 4 19:46:37.022715 systemd-modules-load[262]: Inserted module 'overlay'
Sep 4 19:46:37.045207 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 19:46:37.074017 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 19:46:37.116393 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 19:46:37.116406 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 19:46:37.135122 systemd-modules-load[262]: Inserted module 'br_netfilter'
Sep 4 19:46:37.147510 kernel: Bridge firewalling registered
Sep 4 19:46:37.138590 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 19:46:37.158547 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 19:46:37.177586 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 19:46:37.194612 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 19:46:37.231442 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 19:46:37.242866 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 19:46:37.255023 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 19:46:37.255721 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 19:46:37.260331 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 19:46:37.261409 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 19:46:37.261976 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 19:46:37.262802 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 19:46:37.266486 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 19:46:37.290565 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 19:46:37.311355 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 19:46:37.368063 dracut-cmdline[299]: dracut-dracut-053
Sep 4 19:46:37.375431 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 4 19:46:37.576248 kernel: SCSI subsystem initialized
Sep 4 19:46:37.599230 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 19:46:37.623279 kernel: iscsi: registered transport (tcp)
Sep 4 19:46:37.654971 kernel: iscsi: registered transport (qla4xxx)
Sep 4 19:46:37.654988 kernel: QLogic iSCSI HBA Driver
Sep 4 19:46:37.688326 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 19:46:37.701326 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 19:46:37.787999 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 19:46:37.788022 kernel: device-mapper: uevent: version 1.0.3
Sep 4 19:46:37.807788 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 19:46:37.867268 kernel: raid6: avx2x4 gen() 53475 MB/s
Sep 4 19:46:37.898274 kernel: raid6: avx2x2 gen() 53189 MB/s
Sep 4 19:46:37.934938 kernel: raid6: avx2x1 gen() 45272 MB/s
Sep 4 19:46:37.934954 kernel: raid6: using algorithm avx2x4 gen() 53475 MB/s
Sep 4 19:46:37.982828 kernel: raid6: .... xor() 12849 MB/s, rmw enabled
Sep 4 19:46:37.982845 kernel: raid6: using avx2x2 recovery algorithm
Sep 4 19:46:38.024251 kernel: xor: automatically using best checksumming function avx
Sep 4 19:46:38.141266 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 19:46:38.147339 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 19:46:38.167576 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 19:46:38.208853 systemd-udevd[485]: Using default interface naming scheme 'v255'.
Sep 4 19:46:38.211316 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 19:46:38.231315 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 19:46:38.274424 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Sep 4 19:46:38.291777 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 19:46:38.314491 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 19:46:38.398704 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 19:46:38.442633 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 4 19:46:38.442678 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 4 19:46:38.423896 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 19:46:38.459136 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 19:46:38.423932 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 19:46:38.526162 kernel: ACPI: bus type USB registered
Sep 4 19:46:38.526173 kernel: usbcore: registered new interface driver usbfs
Sep 4 19:46:38.526184 kernel: usbcore: registered new interface driver hub
Sep 4 19:46:38.526192 kernel: usbcore: registered new device driver usb
Sep 4 19:46:38.501593 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 19:46:38.721320 kernel: PTP clock support registered
Sep 4 19:46:38.721337 kernel: libata version 3.00 loaded.
Sep 4 19:46:38.721345 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 4 19:46:38.721353 kernel: AES CTR mode by8 optimization enabled Sep 4 19:46:38.721360 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 4 19:46:38.721457 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Sep 4 19:46:38.721525 kernel: ahci 0000:00:17.0: version 3.0 Sep 4 19:46:38.721589 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Sep 4 19:46:38.721650 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Sep 4 19:46:38.721710 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 4 19:46:38.721769 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Sep 4 19:46:38.721828 kernel: scsi host0: ahci Sep 4 19:46:38.721891 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Sep 4 19:46:38.721952 kernel: scsi host1: ahci Sep 4 19:46:38.722011 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Sep 4 19:46:38.722071 kernel: scsi host2: ahci Sep 4 19:46:38.722132 kernel: hub 1-0:1.0: USB hub found Sep 4 19:46:38.722197 kernel: scsi host3: ahci Sep 4 19:46:38.722259 kernel: hub 1-0:1.0: 16 ports detected Sep 4 19:46:38.722319 kernel: scsi host4: ahci Sep 4 19:46:38.722376 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Sep 4 19:46:38.722385 kernel: scsi host5: ahci Sep 4 19:46:38.722399 kernel: hub 2-0:1.0: USB hub found Sep 4 19:46:38.722460 kernel: hub 2-0:1.0: 10 ports detected Sep 4 19:46:38.722515 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Sep 4 19:46:38.560858 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 19:46:39.387182 kernel: pps pps0: new PPS source ptp0 Sep 4 19:46:39.387291 kernel: scsi host6: ahci Sep 4 19:46:39.387393 kernel: igb 0000:03:00.0: added PHC on eth0 Sep 4 19:46:39.387502 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Sep 4 19:46:39.387516 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 4 19:46:39.387620 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Sep 4 19:46:39.387634 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d8:54 Sep 4 19:46:39.387701 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Sep 4 19:46:39.387710 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Sep 4 19:46:39.387773 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Sep 4 19:46:39.387781 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Sep 4 19:46:39.387842 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Sep 4 19:46:39.387850 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Sep 4 19:46:39.387858 kernel: pps pps1: new PPS source ptp1 Sep 4 19:46:39.387919 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Sep 4 19:46:39.387927 kernel: igb 0000:04:00.0: added PHC on eth1 Sep 4 19:46:39.387993 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Sep 4 19:46:39.388090 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 4 19:46:39.388155 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d8:55 Sep 4 19:46:39.388232 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Sep 4 19:46:39.388295 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 4 19:46:39.388303 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Sep 4 19:46:39.388383 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 4 19:46:39.388392 kernel: hub 1-14:1.0: USB hub found Sep 4 19:46:39.388466 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 4 19:46:39.388475 kernel: hub 1-14:1.0: 4 ports detected Sep 4 19:46:39.388542 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 4 19:46:39.388551 kernel: ata7: SATA link down (SStatus 0 SControl 300) Sep 4 19:46:39.388558 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Sep 4 19:46:39.388567 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 4 19:46:39.388574 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Sep 4 19:46:39.388581 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 4 19:46:38.560949 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 19:46:39.535901 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 4 19:46:39.535913 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014 Sep 4 19:46:39.535995 kernel: ata2.00: Features: NCQ-prio Sep 4 19:46:39.536004 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 4 19:46:39.536014 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 4 19:46:39.536082 kernel: ata1.00: Features: NCQ-prio Sep 4 19:46:39.536090 kernel: ata2.00: configured for UDMA/133 Sep 4 19:46:39.536097 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Sep 4 19:46:39.536113 kernel: ata1.00: configured for UDMA/133 Sep 4 19:46:39.536121 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Sep 4 19:46:38.575172 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 19:46:39.487334 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 19:46:39.555264 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Sep 4 19:46:39.593972 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 4 19:46:39.594204 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Sep 4 19:46:39.617205 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Sep 4 19:46:39.617328 kernel: ata1.00: Enabling discard_zeroes_data Sep 4 19:46:39.637049 kernel: ata2.00: Enabling discard_zeroes_data Sep 4 19:46:39.637065 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 4 19:46:39.637187 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 4 19:46:39.652014 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 4 19:46:39.652127 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Sep 4 19:46:39.658209 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 4 19:46:39.663205 kernel: sd 1:0:0:0: [sdb] Write Protect is off Sep 4 19:46:39.668265 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Sep 4 19:46:39.668358 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 19:46:39.672183 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Sep 4 19:46:39.682652 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 4 19:46:39.685204 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Sep 4 19:46:39.687808 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 4 19:46:39.689301 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Sep 4 19:46:39.707230 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Sep 4 19:46:39.707325 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Sep 4 19:46:39.716394 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 19:46:39.951166 kernel: ata1.00: Enabling discard_zeroes_data Sep 4 19:46:39.951183 kernel: ata2.00: Enabling discard_zeroes_data Sep 4 19:46:39.951191 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 4 19:46:39.951279 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 4 19:46:39.951288 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Sep 4 19:46:39.951359 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 4 19:46:39.960211 kernel: usbcore: registered new interface driver usbhid Sep 4 19:46:39.960230 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014 Sep 4 19:46:39.960315 kernel: usbhid: USB HID core driver Sep 4 19:46:39.979250 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sdb6 scanned by (udev-worker) (548) Sep 4 19:46:39.979271 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 4 19:46:39.979705 systemd[1]: Found device dev-disk-by\x2dpartlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT. Sep 4 19:46:40.100018 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/sdb3 scanned by (udev-worker) (650) Sep 4 19:46:40.100031 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Sep 4 19:46:40.120316 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT. Sep 4 19:46:40.133610 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. 
Sep 4 19:46:40.198660 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Sep 4 19:46:40.198755 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Sep 4 19:46:40.144402 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 19:46:40.278290 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Sep 4 19:46:40.270380 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5200_MTFDDAK480TDN USR-A. Sep 4 19:46:40.337093 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Sep 4 19:46:40.337183 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Sep 4 19:46:40.289280 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5200_MTFDDAK480TDN USR-A. Sep 4 19:46:40.289336 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 19:46:40.355471 systemd[1]: Starting decrypt-root.service - Generate and execute a systemd-cryptsetup service to decrypt the ROOT partition... Sep 4 19:46:40.385668 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 19:46:40.411522 systemd[1]: Finished decrypt-root.service - Generate and execute a systemd-cryptsetup service to decrypt the ROOT partition. Sep 4 19:46:40.434491 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 19:46:40.434564 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 19:46:40.460444 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 19:46:40.478416 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 19:46:40.499424 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 19:46:40.521488 systemd[1]: Reached target basic.target - Basic System. Sep 4 19:46:40.555455 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 19:46:40.559851 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 19:46:40.586544 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 19:46:40.643327 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 4 19:46:40.643415 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Sep 4 19:46:40.620124 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 19:46:40.704285 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Sep 4 19:46:40.704424 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 4 19:46:40.661461 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 19:46:40.704718 sh[711]: Success Sep 4 19:46:40.724271 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 19:46:40.748487 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 19:46:40.760517 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 19:46:40.770001 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 19:46:40.801563 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Sep 4 19:46:40.811524 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 19:46:40.944332 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772 Sep 4 19:46:40.944346 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 19:46:40.944354 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 19:46:40.944361 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 19:46:40.944368 kernel: BTRFS info (device dm-0): using free space tree Sep 4 19:46:40.944375 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 4 19:46:40.940622 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 19:46:40.955724 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 19:46:40.980645 systemd-fsck[758]: ROOT: clean, 85/553520 files, 83083/553472 blocks Sep 4 19:46:40.990866 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 19:46:41.015332 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 19:46:41.114204 kernel: EXT4-fs (sdb9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none. Sep 4 19:46:41.114540 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 19:46:41.123610 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 19:46:41.163429 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 19:46:41.172492 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 19:46:41.277225 kernel: BTRFS info (device sdb6): first mount of filesystem 8660995c-3c86-4382-a83f-9cda48a1d7fd Sep 4 19:46:41.277242 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Sep 4 19:46:41.277250 kernel: BTRFS info (device sdb6): using free space tree Sep 4 19:46:41.277256 kernel: BTRFS info (device sdb6): enabling ssd optimizations Sep 4 19:46:41.277263 kernel: BTRFS info (device sdb6): auto enabling async discard Sep 4 19:46:41.277924 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 19:46:41.293956 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 19:46:41.308564 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 19:46:41.397401 initrd-setup-root[791]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 19:46:41.407354 initrd-setup-root[798]: cut: /sysroot/etc/group: No such file or directory Sep 4 19:46:41.418325 initrd-setup-root[805]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 19:46:41.429305 initrd-setup-root[812]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 19:46:41.505078 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 19:46:41.535517 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 19:46:41.547659 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 19:46:41.581561 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 19:46:41.581561 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 19:46:41.571345 systemd[1]: Reached target ignition-subsequent.target - Subsequent (Not Ignition) boot complete. 
Sep 4 19:46:41.643373 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 19:46:41.606498 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 19:46:41.671627 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 19:46:41.671684 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 19:46:41.704311 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 19:46:41.704534 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 19:46:41.731560 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 19:46:41.739542 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 19:46:41.814012 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 19:46:41.844630 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 19:46:41.868948 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 19:46:41.880817 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 19:46:41.900757 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 19:46:41.901118 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 19:46:41.929881 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 19:46:41.950784 systemd[1]: Stopped target basic.target - Basic System. Sep 4 19:46:41.969793 systemd[1]: Stopped target ignition-subsequent.target - Subsequent (Not Ignition) boot complete. Sep 4 19:46:41.989780 systemd[1]: Stopped target ignition-diskful-subsequent.target - Ignition Subsequent Boot Disk Setup. Sep 4 19:46:42.013789 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 19:46:42.037778 systemd[1]: Stopped target paths.target - Path Units. Sep 4 19:46:42.057892 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 19:46:42.076784 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 19:46:42.097797 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 19:46:42.118766 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 19:46:42.137893 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 19:46:42.156798 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 19:46:42.176798 systemd[1]: Stopped target swap.target - Swaps. Sep 4 19:46:42.194760 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 19:46:42.195046 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 19:46:42.211859 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 19:46:42.212150 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 19:46:42.229820 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 19:46:42.230195 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 19:46:42.258013 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 19:46:42.277706 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 19:46:42.278137 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 19:46:42.296773 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 4 19:46:42.317702 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 19:46:42.318079 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 19:46:42.338702 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 19:46:42.339059 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 19:46:42.371707 systemd[1]: decrypt-root.service: Deactivated successfully. Sep 4 19:46:42.372144 systemd[1]: Stopped decrypt-root.service - Generate and execute a systemd-cryptsetup service to decrypt the ROOT partition. Sep 4 19:46:42.394894 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 19:46:42.395302 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 19:46:42.413886 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 19:46:42.414300 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 19:46:42.437899 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 19:46:42.438300 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 19:46:42.456873 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 19:46:42.457271 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 19:46:42.477875 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 19:46:42.478264 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 19:46:42.495847 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 19:46:42.496191 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 19:46:42.519880 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 19:46:42.520279 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 19:46:42.540890 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 19:46:42.541292 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 19:46:42.561904 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 19:46:42.562307 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 19:46:42.595326 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 19:46:42.626091 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 19:46:42.626185 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 19:46:42.634070 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 19:46:42.634170 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 19:46:42.662768 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 19:46:42.662860 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 19:46:42.680543 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 19:46:42.680681 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 19:46:42.710847 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 19:46:42.711016 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 19:46:42.737803 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 19:46:43.014281 systemd-journald[260]: Received SIGTERM from PID 1 (systemd). 
Sep 4 19:46:42.737984 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 19:46:42.789460 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 19:46:42.821393 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 19:46:42.821563 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 19:46:42.840604 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 19:46:42.840730 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 19:46:42.863625 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 19:46:42.863965 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 19:46:42.885424 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 19:46:42.885672 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 19:46:42.910223 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 19:46:42.940536 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 19:46:42.961694 systemd[1]: Switching root. Sep 4 19:46:43.014575 systemd-journald[260]: Journal stopped
Sep 4 19:46:36.997968 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Sep 4 19:46:36.998017 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Sep 4 19:46:36.998066 kernel: pci 0000:00:16.0: PME# supported from D3hot Sep 4 19:46:36.998123 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Sep 4 19:46:36.998173 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Sep 4 19:46:36.998257 kernel: pci 0000:00:16.1: PME# supported from D3hot Sep 4 19:46:36.998308 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Sep 4 19:46:36.998354 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Sep 4 19:46:36.998403 kernel: pci 0000:00:16.4: PME# supported from D3hot Sep 4 19:46:36.998456 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Sep 4 19:46:36.998503 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Sep 4 19:46:36.998550 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Sep 4 19:46:36.998595 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Sep 4 19:46:36.998642 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Sep 4 19:46:36.998688 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Sep 4 19:46:36.998738 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Sep 4 19:46:36.998785 kernel: pci 0000:00:17.0: PME# supported from D3hot Sep 4 19:46:36.998838 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Sep 4 19:46:36.998886 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Sep 4 19:46:36.998942 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Sep 4 19:46:36.998992 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Sep 4 19:46:36.999045 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Sep 4 19:46:36.999092 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Sep 4 19:46:36.999144 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Sep 4 19:46:36.999192 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Sep 4 19:46:36.999284 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Sep 4 19:46:36.999333 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Sep 4 19:46:36.999385 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Sep 4 19:46:36.999433 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Sep 4 19:46:36.999483 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Sep 4 19:46:36.999535 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Sep 4 19:46:36.999585 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Sep 4 19:46:36.999633 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Sep 4 19:46:36.999685 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Sep 4 19:46:36.999733 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Sep 4 19:46:36.999788 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Sep 4 19:46:36.999837 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Sep 4 19:46:36.999886 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Sep 4 19:46:36.999937 kernel: pci 0000:01:00.0: PME# supported from D3cold Sep 4 19:46:36.999986 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Sep 4 19:46:37.000034 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Sep 4 19:46:37.000087 kernel: pci 0000:01:00.1: [15b3:1015] type 00 
class 0x020000 Sep 4 19:46:37.000135 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Sep 4 19:46:37.000184 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Sep 4 19:46:37.000276 kernel: pci 0000:01:00.1: PME# supported from D3cold Sep 4 19:46:37.000324 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Sep 4 19:46:37.000373 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Sep 4 19:46:37.000422 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 4 19:46:37.000470 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Sep 4 19:46:37.000517 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 4 19:46:37.000565 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Sep 4 19:46:37.000617 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Sep 4 19:46:37.000669 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Sep 4 19:46:37.000718 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Sep 4 19:46:37.000767 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Sep 4 19:46:37.000814 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Sep 4 19:46:37.000863 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 4 19:46:37.000911 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Sep 4 19:46:37.000958 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 4 19:46:37.001009 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Sep 4 19:46:37.001062 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Sep 4 19:46:37.001111 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Sep 4 19:46:37.001159 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Sep 4 19:46:37.001210 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Sep 4 19:46:37.001293 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Sep 4 19:46:37.001342 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Sep 4 19:46:37.001391 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Sep 4 19:46:37.001442 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 4 19:46:37.001490 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Sep 4 19:46:37.001536 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Sep 4 19:46:37.001589 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Sep 4 19:46:37.001637 kernel: pci 0000:06:00.0: enabling Extended Tags Sep 4 19:46:37.001686 kernel: pci 0000:06:00.0: supports D1 D2 Sep 4 19:46:37.001734 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 4 19:46:37.001785 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Sep 4 19:46:37.001832 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Sep 4 19:46:37.001880 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Sep 4 19:46:37.001935 kernel: pci_bus 0000:07: extended config space not accessible Sep 4 19:46:37.001989 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Sep 4 19:46:37.002040 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Sep 4 19:46:37.002089 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Sep 4 19:46:37.002142 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Sep 4 19:46:37.002192 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 4 19:46:37.002285 kernel: pci 0000:07:00.0: supports D1 D2 Sep 4 
19:46:37.002336 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 4 19:46:37.002384 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Sep 4 19:46:37.002433 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Sep 4 19:46:37.002481 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 4 19:46:37.002490 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Sep 4 19:46:37.002497 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Sep 4 19:46:37.002503 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Sep 4 19:46:37.002509 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Sep 4 19:46:37.002514 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Sep 4 19:46:37.002520 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Sep 4 19:46:37.002525 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Sep 4 19:46:37.002531 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Sep 4 19:46:37.002536 kernel: iommu: Default domain type: Translated Sep 4 19:46:37.002542 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 19:46:37.002549 kernel: PCI: Using ACPI for IRQ routing Sep 4 19:46:37.002554 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 4 19:46:37.002560 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Sep 4 19:46:37.002565 kernel: e820: reserve RAM buffer [mem 0x819cf000-0x83ffffff] Sep 4 19:46:37.002572 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Sep 4 19:46:37.002577 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Sep 4 19:46:37.002583 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Sep 4 19:46:37.002588 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Sep 4 19:46:37.002638 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Sep 4 19:46:37.002691 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Sep 4 19:46:37.002741 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 4 19:46:37.002749 kernel: vgaarb: loaded Sep 4 19:46:37.002755 kernel: clocksource: Switched to clocksource tsc-early Sep 4 19:46:37.002761 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 19:46:37.002767 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 19:46:37.002772 kernel: pnp: PnP ACPI init Sep 4 19:46:37.002821 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Sep 4 19:46:37.002872 kernel: pnp 00:02: [dma 0 disabled] Sep 4 19:46:37.002920 kernel: pnp 00:03: [dma 0 disabled] Sep 4 19:46:37.002970 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Sep 4 19:46:37.003014 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Sep 4 19:46:37.003059 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Sep 4 19:46:37.003106 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Sep 4 19:46:37.003152 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Sep 4 19:46:37.003195 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Sep 4 19:46:37.003287 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Sep 4 19:46:37.003331 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Sep 4 19:46:37.003374 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Sep 4 19:46:37.003416 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Sep 4 19:46:37.003459 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Sep 4 
19:46:37.003508 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Sep 4 19:46:37.003552 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Sep 4 19:46:37.003597 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Sep 4 19:46:37.003642 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Sep 4 19:46:37.003684 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Sep 4 19:46:37.003727 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Sep 4 19:46:37.003770 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Sep 4 19:46:37.003819 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Sep 4 19:46:37.003828 kernel: pnp: PnP ACPI: found 10 devices Sep 4 19:46:37.003834 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 19:46:37.003839 kernel: NET: Registered PF_INET protocol family Sep 4 19:46:37.003845 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 19:46:37.003851 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 4 19:46:37.003856 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 19:46:37.003862 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 19:46:37.003869 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 4 19:46:37.003875 kernel: TCP: Hash tables configured (established 262144 bind 65536) Sep 4 19:46:37.003880 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 4 19:46:37.003886 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 4 19:46:37.003891 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 19:46:37.003897 kernel: NET: Registered PF_XDP protocol family Sep 4 19:46:37.003944 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Sep 4 19:46:37.003994 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Sep 4 19:46:37.004041 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Sep 4 19:46:37.004094 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 4 19:46:37.004144 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 4 19:46:37.004194 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 4 19:46:37.004292 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 4 19:46:37.004339 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 4 19:46:37.004388 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Sep 4 19:46:37.004435 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 4 19:46:37.004483 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Sep 4 19:46:37.004533 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Sep 4 19:46:37.004581 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 4 19:46:37.004629 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Sep 4 19:46:37.004677 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Sep 4 19:46:37.004727 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 4 19:46:37.004775 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Sep 4 19:46:37.004823 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Sep 4 19:46:37.004873 kernel: pci 0000:06:00.0: PCI 
bridge to [bus 07] Sep 4 19:46:37.004922 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Sep 4 19:46:37.004973 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 4 19:46:37.005021 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Sep 4 19:46:37.005069 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Sep 4 19:46:37.005117 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Sep 4 19:46:37.005163 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Sep 4 19:46:37.005231 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 4 19:46:37.005294 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 4 19:46:37.005336 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 4 19:46:37.005377 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Sep 4 19:46:37.005419 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Sep 4 19:46:37.005466 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Sep 4 19:46:37.005511 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Sep 4 19:46:37.005563 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Sep 4 19:46:37.005607 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Sep 4 19:46:37.005655 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Sep 4 19:46:37.005700 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Sep 4 19:46:37.005748 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Sep 4 19:46:37.005790 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Sep 4 19:46:37.005839 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Sep 4 19:46:37.005883 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Sep 4 19:46:37.005891 kernel: PCI: CLS 64 bytes, default 64 Sep 4 19:46:37.005897 kernel: DMAR: No ATSR found Sep 4 19:46:37.005903 kernel: DMAR: No SATC found Sep 4 19:46:37.005909 kernel: DMAR: dmar0: Using Queued invalidation Sep 4 19:46:37.005956 kernel: pci 0000:00:00.0: Adding to iommu group 0 Sep 4 19:46:37.006004 kernel: pci 0000:00:01.0: Adding to iommu group 1 Sep 4 19:46:37.006055 kernel: pci 0000:00:08.0: Adding to iommu group 2 Sep 4 19:46:37.006103 kernel: pci 0000:00:12.0: Adding to iommu group 3 Sep 4 19:46:37.006151 kernel: pci 0000:00:14.0: Adding to iommu group 4 Sep 4 19:46:37.006197 kernel: pci 0000:00:14.2: Adding to iommu group 4 Sep 4 19:46:37.006293 kernel: pci 0000:00:15.0: Adding to iommu group 5 Sep 4 19:46:37.006339 kernel: pci 0000:00:15.1: Adding to iommu group 5 Sep 4 19:46:37.006387 kernel: pci 0000:00:16.0: Adding to iommu group 6 Sep 4 19:46:37.006435 kernel: pci 0000:00:16.1: Adding to iommu group 6 Sep 4 19:46:37.006484 kernel: pci 0000:00:16.4: Adding to iommu group 6 Sep 4 19:46:37.006531 kernel: pci 0000:00:17.0: Adding to iommu group 7 Sep 4 19:46:37.006579 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Sep 4 19:46:37.006626 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Sep 4 19:46:37.006673 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Sep 4 19:46:37.006721 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Sep 4 19:46:37.006768 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Sep 4 19:46:37.006816 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Sep 4 19:46:37.006866 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Sep 4 19:46:37.006913 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Sep 4 19:46:37.006961 kernel: pci 
0000:00:1f.5: Adding to iommu group 14 Sep 4 19:46:37.007010 kernel: pci 0000:01:00.0: Adding to iommu group 1 Sep 4 19:46:37.007059 kernel: pci 0000:01:00.1: Adding to iommu group 1 Sep 4 19:46:37.007109 kernel: pci 0000:03:00.0: Adding to iommu group 15 Sep 4 19:46:37.007157 kernel: pci 0000:04:00.0: Adding to iommu group 16 Sep 4 19:46:37.007231 kernel: pci 0000:06:00.0: Adding to iommu group 17 Sep 4 19:46:37.007306 kernel: pci 0000:07:00.0: Adding to iommu group 17 Sep 4 19:46:37.007314 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Sep 4 19:46:37.007320 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 4 19:46:37.007326 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Sep 4 19:46:37.007332 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Sep 4 19:46:37.007337 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Sep 4 19:46:37.007343 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Sep 4 19:46:37.007349 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Sep 4 19:46:37.007399 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Sep 4 19:46:37.007409 kernel: Initialise system trusted keyrings Sep 4 19:46:37.007415 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Sep 4 19:46:37.007421 kernel: Key type asymmetric registered Sep 4 19:46:37.007426 kernel: Asymmetric key parser 'x509' registered Sep 4 19:46:37.007432 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 4 19:46:37.007438 kernel: io scheduler mq-deadline registered Sep 4 19:46:37.007443 kernel: io scheduler kyber registered Sep 4 19:46:37.007449 kernel: io scheduler bfq registered Sep 4 19:46:37.007497 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Sep 4 19:46:37.007545 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Sep 4 19:46:37.007594 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Sep 4 19:46:37.007641 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Sep 4 19:46:37.007689 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Sep 4 19:46:37.007736 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Sep 4 19:46:37.007789 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Sep 4 19:46:37.007797 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Sep 4 19:46:37.007805 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Sep 4 19:46:37.007810 kernel: pstore: Using crash dump compression: deflate Sep 4 19:46:37.007816 kernel: pstore: Registered erst as persistent store backend Sep 4 19:46:37.007822 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 19:46:37.007827 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 19:46:37.007833 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 19:46:37.007839 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 4 19:46:37.007845 kernel: hpet_acpi_add: no address or irqs in _CRS Sep 4 19:46:37.007895 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Sep 4 19:46:37.007903 kernel: i8042: PNP: No PS/2 controller found. 
Sep 4 19:46:37.007945 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Sep 4 19:46:37.007990 kernel: rtc_cmos rtc_cmos: registered as rtc0 Sep 4 19:46:37.008034 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-09-04T19:46:35 UTC (1725479195) Sep 4 19:46:37.008077 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Sep 4 19:46:37.008085 kernel: intel_pstate: Intel P-state driver initializing Sep 4 19:46:37.008091 kernel: intel_pstate: Disabling energy efficiency optimization Sep 4 19:46:37.008098 kernel: intel_pstate: HWP enabled Sep 4 19:46:37.008104 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Sep 4 19:46:37.008110 kernel: vesafb: scrolling: redraw Sep 4 19:46:37.008115 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Sep 4 19:46:37.008121 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000ed0dd379, using 768k, total 768k Sep 4 19:46:37.008127 kernel: Console: switching to colour frame buffer device 128x48 Sep 4 19:46:37.008132 kernel: fb0: VESA VGA frame buffer device Sep 4 19:46:37.008138 kernel: NET: Registered PF_INET6 protocol family Sep 4 19:46:37.008143 kernel: Segment Routing with IPv6 Sep 4 19:46:37.008150 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 19:46:37.008155 kernel: NET: Registered PF_PACKET protocol family Sep 4 19:46:37.008161 kernel: Key type dns_resolver registered Sep 4 19:46:37.008166 kernel: microcode: Microcode Update Driver: v2.2. Sep 4 19:46:37.008172 kernel: IPI shorthand broadcast: enabled Sep 4 19:46:37.008178 kernel: sched_clock: Marking stable (2475000698, 1379895731)->(4393541428, -538644999) Sep 4 19:46:37.008183 kernel: registered taskstats version 1 Sep 4 19:46:37.008189 kernel: Loading compiled-in X.509 certificates Sep 4 19:46:37.008194 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 8669771ab5e11f458b79e6634fe685dacc266b18' Sep 4 19:46:37.008203 kernel: Key type .fscrypt registered Sep 4 19:46:37.008209 kernel: Key type fscrypt-provisioning registered Sep 4 19:46:37.008240 kernel: ima: Allocated hash algorithm: sha1 Sep 4 19:46:37.008245 kernel: ima: No architecture policies found Sep 4 19:46:37.008251 kernel: clk: Disabling unused clocks Sep 4 19:46:37.008257 kernel: Freeing unused kernel image (initmem) memory: 42704K Sep 4 19:46:37.008282 kernel: Write protecting the kernel read-only data: 36864k Sep 4 19:46:37.008288 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K Sep 4 19:46:37.008293 kernel: Run /init as init process Sep 4 19:46:37.008300 kernel: with arguments: Sep 4 19:46:37.008305 kernel: /init Sep 4 19:46:37.008311 kernel: with environment: Sep 4 19:46:37.008316 kernel: HOME=/ Sep 4 19:46:37.008322 kernel: TERM=linux Sep 4 19:46:37.008327 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 19:46:37.008334 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 19:46:37.008341 systemd[1]: Detected architecture x86-64. Sep 4 19:46:37.008348 systemd[1]: Running in initrd. Sep 4 19:46:37.008354 systemd[1]: No hostname configured, using default hostname. Sep 4 19:46:37.008360 systemd[1]: Hostname set to . Sep 4 19:46:37.008365 systemd[1]: Initializing machine ID from random generator. 
Sep 4 19:46:37.008371 systemd[1]: Queued start job for default target initrd.target. Sep 4 19:46:37.008377 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 19:46:37.008383 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 19:46:37.008390 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 19:46:37.008396 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 19:46:37.008402 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-ROOT.device - /dev/disk/by-partlabel/ROOT... Sep 4 19:46:37.008408 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 19:46:37.008414 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 19:46:37.008421 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 19:46:37.008426 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Sep 4 19:46:37.008433 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Sep 4 19:46:37.008439 kernel: clocksource: Switched to clocksource tsc Sep 4 19:46:37.008444 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 19:46:37.008450 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 19:46:37.008456 systemd[1]: Reached target ignition-diskful-subsequent.target - Ignition Subsequent Boot Disk Setup. Sep 4 19:46:37.008462 systemd[1]: Reached target paths.target - Path Units. Sep 4 19:46:37.008468 systemd[1]: Reached target slices.target - Slice Units. Sep 4 19:46:37.008474 systemd[1]: Reached target swap.target - Swaps. Sep 4 19:46:37.008480 systemd[1]: Reached target timers.target - Timer Units. Sep 4 19:46:37.008486 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 19:46:37.008492 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 19:46:37.008498 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 19:46:37.008504 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 19:46:37.008510 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 19:46:37.008516 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 19:46:37.008522 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 19:46:37.008527 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 19:46:37.008534 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 19:46:37.008540 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 19:46:37.008546 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 19:46:37.008562 systemd-journald[260]: Collecting audit messages is disabled. Sep 4 19:46:37.008577 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 19:46:37.008584 systemd-journald[260]: Journal started Sep 4 19:46:37.008597 systemd-journald[260]: Runtime Journal (/run/log/journal/aa342a20d832441c971b8d2ad26c8d4e) is 8.0M, max 639.9M, 631.9M free. 
Sep 4 19:46:37.022715 systemd-modules-load[262]: Inserted module 'overlay' Sep 4 19:46:37.045207 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 19:46:37.074017 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 19:46:37.116393 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 19:46:37.116406 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 19:46:37.135122 systemd-modules-load[262]: Inserted module 'br_netfilter' Sep 4 19:46:37.147510 kernel: Bridge firewalling registered Sep 4 19:46:37.138590 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 19:46:37.158547 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 19:46:37.177586 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 19:46:37.194612 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 19:46:37.231442 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 19:46:37.242866 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 19:46:37.255023 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 19:46:37.255721 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 19:46:37.260331 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 19:46:37.261409 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 19:46:37.261976 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 19:46:37.262802 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 19:46:37.266486 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 19:46:37.290565 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 19:46:37.311355 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 19:46:37.368063 dracut-cmdline[299]: dracut-dracut-053 Sep 4 19:46:37.375431 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 19:46:37.576248 kernel: SCSI subsystem initialized Sep 4 19:46:37.599230 kernel: Loading iSCSI transport class v2.0-870. Sep 4 19:46:37.623279 kernel: iscsi: registered transport (tcp) Sep 4 19:46:37.654971 kernel: iscsi: registered transport (qla4xxx) Sep 4 19:46:37.654988 kernel: QLogic iSCSI HBA Driver Sep 4 19:46:37.688326 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 19:46:37.701326 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 19:46:37.787999 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 4 19:46:37.788022 kernel: device-mapper: uevent: version 1.0.3 Sep 4 19:46:37.807788 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 19:46:37.867268 kernel: raid6: avx2x4 gen() 53475 MB/s Sep 4 19:46:37.898274 kernel: raid6: avx2x2 gen() 53189 MB/s Sep 4 19:46:37.934938 kernel: raid6: avx2x1 gen() 45272 MB/s Sep 4 19:46:37.934954 kernel: raid6: using algorithm avx2x4 gen() 53475 MB/s Sep 4 19:46:37.982828 kernel: raid6: .... xor() 12849 MB/s, rmw enabled Sep 4 19:46:37.982845 kernel: raid6: using avx2x2 recovery algorithm Sep 4 19:46:38.024251 kernel: xor: automatically using best checksumming function avx Sep 4 19:46:38.141266 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 19:46:38.147339 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 19:46:38.167576 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 19:46:38.208853 systemd-udevd[485]: Using default interface naming scheme 'v255'. Sep 4 19:46:38.211316 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 19:46:38.231315 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 19:46:38.274424 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation Sep 4 19:46:38.291777 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 19:46:38.314491 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 19:46:38.398704 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 19:46:38.442633 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 4 19:46:38.442678 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 4 19:46:38.423896 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 19:46:38.459136 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 19:46:38.423932 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 19:46:38.526162 kernel: ACPI: bus type USB registered Sep 4 19:46:38.526173 kernel: usbcore: registered new interface driver usbfs Sep 4 19:46:38.526184 kernel: usbcore: registered new interface driver hub Sep 4 19:46:38.526192 kernel: usbcore: registered new device driver usb Sep 4 19:46:38.501593 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 19:46:38.721320 kernel: PTP clock support registered Sep 4 19:46:38.721337 kernel: libata version 3.00 loaded. Sep 4 19:46:38.721345 kernel: AVX2 version of gcm_enc/dec engaged. 
Sep 4 19:46:38.721353 kernel: AES CTR mode by8 optimization enabled Sep 4 19:46:38.721360 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 4 19:46:38.721457 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Sep 4 19:46:38.721525 kernel: ahci 0000:00:17.0: version 3.0 Sep 4 19:46:38.721589 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Sep 4 19:46:38.721650 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Sep 4 19:46:38.721710 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 4 19:46:38.721769 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Sep 4 19:46:38.721828 kernel: scsi host0: ahci Sep 4 19:46:38.721891 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Sep 4 19:46:38.721952 kernel: scsi host1: ahci Sep 4 19:46:38.722011 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Sep 4 19:46:38.722071 kernel: scsi host2: ahci Sep 4 19:46:38.722132 kernel: hub 1-0:1.0: USB hub found Sep 4 19:46:38.722197 kernel: scsi host3: ahci Sep 4 19:46:38.722259 kernel: hub 1-0:1.0: 16 ports detected Sep 4 19:46:38.722319 kernel: scsi host4: ahci Sep 4 19:46:38.722376 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Sep 4 19:46:38.722385 kernel: scsi host5: ahci Sep 4 19:46:38.722399 kernel: hub 2-0:1.0: USB hub found Sep 4 19:46:38.722460 kernel: hub 2-0:1.0: 10 ports detected Sep 4 19:46:38.722515 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Sep 4 19:46:38.560858 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 19:46:39.387182 kernel: pps pps0: new PPS source ptp0 Sep 4 19:46:39.387291 kernel: scsi host6: ahci Sep 4 19:46:39.387393 kernel: igb 0000:03:00.0: added PHC on eth0 Sep 4 19:46:39.387502 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Sep 4 19:46:39.387516 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 4 19:46:39.387620 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Sep 4 19:46:39.387634 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d8:54 Sep 4 19:46:39.387701 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Sep 4 19:46:39.387710 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Sep 4 19:46:39.387773 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Sep 4 19:46:39.387781 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Sep 4 19:46:39.387842 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Sep 4 19:46:39.387850 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Sep 4 19:46:39.387858 kernel: pps pps1: new PPS source ptp1 Sep 4 19:46:39.387919 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Sep 4 19:46:39.387927 kernel: igb 0000:04:00.0: added PHC on eth1 Sep 4 19:46:39.387993 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Sep 4 19:46:39.388090 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 4 19:46:39.388155 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d8:55 Sep 4 19:46:39.388232 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Sep 4 19:46:39.388295 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 4 19:46:39.388303 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Sep 4 19:46:39.388383 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 4 19:46:39.388392 kernel: hub 1-14:1.0: USB hub found Sep 4 19:46:39.388466 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 4 19:46:39.388475 kernel: hub 1-14:1.0: 4 ports detected Sep 4 19:46:39.388542 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 4 19:46:39.388551 kernel: ata7: SATA link down (SStatus 0 SControl 300) Sep 4 19:46:39.388558 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Sep 4 19:46:39.388567 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 4 19:46:39.388574 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Sep 4 19:46:39.388581 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 4 19:46:38.560949 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 19:46:39.535901 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 4 19:46:39.535913 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014 Sep 4 19:46:39.535995 kernel: ata2.00: Features: NCQ-prio Sep 4 19:46:39.536004 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 4 19:46:39.536014 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 4 19:46:39.536082 kernel: ata1.00: Features: NCQ-prio Sep 4 19:46:39.536090 kernel: ata2.00: configured for UDMA/133 Sep 4 19:46:39.536097 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Sep 4 19:46:39.536113 kernel: ata1.00: configured for UDMA/133 Sep 4 19:46:39.536121 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Sep 4 19:46:38.575172 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 19:46:39.487334 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 19:46:39.555264 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Sep 4 19:46:39.593972 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 4 19:46:39.594204 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Sep 4 19:46:39.617205 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Sep 4 19:46:39.617328 kernel: ata1.00: Enabling discard_zeroes_data Sep 4 19:46:39.637049 kernel: ata2.00: Enabling discard_zeroes_data Sep 4 19:46:39.637065 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 4 19:46:39.637187 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 4 19:46:39.652014 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 4 19:46:39.652127 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Sep 4 19:46:39.658209 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 4 19:46:39.663205 kernel: sd 1:0:0:0: [sdb] Write Protect is off Sep 4 19:46:39.668265 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Sep 4 19:46:39.668358 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 19:46:39.672183 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Sep 4 19:46:39.682652 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 4 19:46:39.685204 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Sep 4 19:46:39.687808 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 4 19:46:39.689301 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Sep 4 19:46:39.707230 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Sep 4 19:46:39.707325 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Sep 4 19:46:39.716394 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 19:46:39.951166 kernel: ata1.00: Enabling discard_zeroes_data Sep 4 19:46:39.951183 kernel: ata2.00: Enabling discard_zeroes_data Sep 4 19:46:39.951191 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 4 19:46:39.951279 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 4 19:46:39.951288 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Sep 4 19:46:39.951359 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 4 19:46:39.960211 kernel: usbcore: registered new interface driver usbhid Sep 4 19:46:39.960230 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014 Sep 4 19:46:39.960315 kernel: usbhid: USB HID core driver Sep 4 19:46:39.979250 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sdb6 scanned by (udev-worker) (548) Sep 4 19:46:39.979271 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 4 19:46:39.979705 systemd[1]: Found device dev-disk-by\x2dpartlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT. Sep 4 19:46:40.100018 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/sdb3 scanned by (udev-worker) (650) Sep 4 19:46:40.100031 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Sep 4 19:46:40.120316 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT. Sep 4 19:46:40.133610 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. 
Sep 4 19:46:40.198660 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Sep 4 19:46:40.198755 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Sep 4 19:46:40.144402 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 19:46:40.278290 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Sep 4 19:46:40.270380 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5200_MTFDDAK480TDN USR-A. Sep 4 19:46:40.337093 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Sep 4 19:46:40.337183 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Sep 4 19:46:40.289280 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5200_MTFDDAK480TDN USR-A. Sep 4 19:46:40.289336 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 19:46:40.355471 systemd[1]: Starting decrypt-root.service - Generate and execute a systemd-cryptsetup service to decrypt the ROOT partition... Sep 4 19:46:40.385668 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 19:46:40.411522 systemd[1]: Finished decrypt-root.service - Generate and execute a systemd-cryptsetup service to decrypt the ROOT partition. Sep 4 19:46:40.434491 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 19:46:40.434564 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 19:46:40.460444 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 19:46:40.478416 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 19:46:40.499424 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 19:46:40.521488 systemd[1]: Reached target basic.target - Basic System. Sep 4 19:46:40.555455 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 19:46:40.559851 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 19:46:40.586544 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 19:46:40.643327 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 4 19:46:40.643415 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Sep 4 19:46:40.620124 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 19:46:40.704285 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Sep 4 19:46:40.704424 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 4 19:46:40.661461 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 19:46:40.704718 sh[711]: Success Sep 4 19:46:40.724271 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 19:46:40.748487 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 19:46:40.760517 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 19:46:40.770001 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 19:46:40.801563 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Sep 4 19:46:40.811524 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 19:46:40.944332 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772 Sep 4 19:46:40.944346 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 19:46:40.944354 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 19:46:40.944361 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 19:46:40.944368 kernel: BTRFS info (device dm-0): using free space tree Sep 4 19:46:40.944375 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 4 19:46:40.940622 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 19:46:40.955724 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 19:46:40.980645 systemd-fsck[758]: ROOT: clean, 85/553520 files, 83083/553472 blocks Sep 4 19:46:40.990866 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 19:46:41.015332 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 19:46:41.114204 kernel: EXT4-fs (sdb9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none. Sep 4 19:46:41.114540 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 19:46:41.123610 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 19:46:41.163429 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 19:46:41.172492 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 19:46:41.277225 kernel: BTRFS info (device sdb6): first mount of filesystem 8660995c-3c86-4382-a83f-9cda48a1d7fd Sep 4 19:46:41.277242 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Sep 4 19:46:41.277250 kernel: BTRFS info (device sdb6): using free space tree Sep 4 19:46:41.277256 kernel: BTRFS info (device sdb6): enabling ssd optimizations Sep 4 19:46:41.277263 kernel: BTRFS info (device sdb6): auto enabling async discard Sep 4 19:46:41.277924 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 19:46:41.293956 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 19:46:41.308564 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 19:46:41.397401 initrd-setup-root[791]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 19:46:41.407354 initrd-setup-root[798]: cut: /sysroot/etc/group: No such file or directory Sep 4 19:46:41.418325 initrd-setup-root[805]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 19:46:41.429305 initrd-setup-root[812]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 19:46:41.505078 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 19:46:41.535517 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 19:46:41.547659 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 19:46:41.581561 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 19:46:41.581561 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 19:46:41.571345 systemd[1]: Reached target ignition-subsequent.target - Subsequent (Not Ignition) boot complete. 
Sep 4 19:46:41.643373 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 19:46:41.606498 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 19:46:41.671627 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 19:46:41.671684 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 19:46:41.704311 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 19:46:41.704534 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 19:46:41.731560 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 19:46:41.739542 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 19:46:41.814012 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 19:46:41.844630 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 19:46:41.868948 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 19:46:41.880817 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 19:46:41.900757 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 19:46:41.901118 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 19:46:41.929881 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 19:46:41.950784 systemd[1]: Stopped target basic.target - Basic System. Sep 4 19:46:41.969793 systemd[1]: Stopped target ignition-subsequent.target - Subsequent (Not Ignition) boot complete. Sep 4 19:46:41.989780 systemd[1]: Stopped target ignition-diskful-subsequent.target - Ignition Subsequent Boot Disk Setup. Sep 4 19:46:42.013789 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 19:46:42.037778 systemd[1]: Stopped target paths.target - Path Units. Sep 4 19:46:42.057892 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 19:46:42.076784 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 19:46:42.097797 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 19:46:42.118766 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 19:46:42.137893 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 19:46:42.156798 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 19:46:42.176798 systemd[1]: Stopped target swap.target - Swaps. Sep 4 19:46:42.194760 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 19:46:42.195046 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 19:46:42.211859 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 19:46:42.212150 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 19:46:42.229820 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 19:46:42.230195 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 19:46:42.258013 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 19:46:42.277706 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 19:46:42.278137 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 19:46:42.296773 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 4 19:46:42.317702 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 19:46:42.318079 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 19:46:42.338702 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 19:46:42.339059 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 19:46:42.371707 systemd[1]: decrypt-root.service: Deactivated successfully. Sep 4 19:46:42.372144 systemd[1]: Stopped decrypt-root.service - Generate and execute a systemd-cryptsetup service to decrypt the ROOT partition. Sep 4 19:46:42.394894 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 19:46:42.395302 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 19:46:42.413886 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 19:46:42.414300 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 19:46:42.437899 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 19:46:42.438300 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 19:46:42.456873 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 19:46:42.457271 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 19:46:42.477875 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 19:46:42.478264 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 19:46:42.495847 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 19:46:42.496191 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 19:46:42.519880 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 19:46:42.520279 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 19:46:42.540890 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 19:46:42.541292 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 19:46:42.561904 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 19:46:42.562307 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 19:46:42.595326 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 19:46:42.626091 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 19:46:42.626185 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 19:46:42.634070 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 19:46:42.634170 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 19:46:42.662768 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 19:46:42.662860 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 19:46:42.680543 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 19:46:42.680681 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 19:46:42.710847 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 19:46:42.711016 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 19:46:42.737803 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 19:46:43.014281 systemd-journald[260]: Received SIGTERM from PID 1 (systemd). 
Sep 4 19:46:42.737984 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 19:46:42.789460 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 19:46:42.821393 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 19:46:42.821563 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 19:46:42.840604 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 19:46:42.840730 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 19:46:42.863625 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 19:46:42.863965 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 19:46:42.885424 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 19:46:42.885672 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 19:46:42.910223 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 19:46:42.940536 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 19:46:42.961694 systemd[1]: Switching root. Sep 4 19:46:43.014575 systemd-journald[260]: Journal stopped Sep 4 19:46:45.686907 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 19:46:45.686922 kernel: SELinux: policy capability open_perms=1 Sep 4 19:46:45.686929 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 19:46:45.686935 kernel: SELinux: policy capability always_check_network=0 Sep 4 19:46:45.686941 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 19:46:45.686946 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 19:46:45.686952 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 19:46:45.686957 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 19:46:45.686963 kernel: audit: type=1403 audit(1725479203.228:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 19:46:45.686970 systemd[1]: Successfully loaded SELinux policy in 175.018ms. Sep 4 19:46:45.686978 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.814ms. Sep 4 19:46:45.686984 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 19:46:45.686990 systemd[1]: Detected architecture x86-64. Sep 4 19:46:45.686996 systemd[1]: Detected first boot. Sep 4 19:46:45.687003 systemd[1]: Hostname set to <ci-4054.1.0-a-2707fc1066>. Sep 4 19:46:45.687011 systemd[1]: Initializing machine ID from random generator. Sep 4 19:46:45.687017 zram_generator::config[1001]: No configuration found. Sep 4 19:46:45.687024 systemd[1]: Populated /etc with preset unit settings. Sep 4 19:46:45.687030 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 19:46:45.687036 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 19:46:45.687042 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 19:46:45.687049 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 19:46:45.687057 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Sep 4 19:46:45.687064 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 19:46:45.687070 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 19:46:45.687077 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 19:46:45.687083 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 19:46:45.687090 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 19:46:45.687096 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 19:46:45.687104 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 19:46:45.687110 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 19:46:45.687117 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 19:46:45.687123 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 19:46:45.687130 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 19:46:45.687136 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 19:46:45.687143 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Sep 4 19:46:45.687149 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 19:46:45.687156 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 19:46:45.687163 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 19:46:45.687169 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 19:46:45.687178 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 19:46:45.687184 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 19:46:45.687191 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 19:46:45.687198 systemd[1]: Reached target slices.target - Slice Units. Sep 4 19:46:45.687208 systemd[1]: Reached target swap.target - Swaps. Sep 4 19:46:45.687215 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 19:46:45.687221 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 19:46:45.687228 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 19:46:45.687235 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 19:46:45.687241 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 19:46:45.687250 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 19:46:45.687256 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 19:46:45.687263 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 19:46:45.687270 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 19:46:45.687277 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 19:46:45.687283 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 19:46:45.687290 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 19:46:45.687298 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Sep 4 19:46:45.687305 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 19:46:45.687312 systemd[1]: Reached target machines.target - Containers. Sep 4 19:46:45.687319 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 19:46:45.687325 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 19:46:45.687332 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 19:46:45.687339 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 19:46:45.687347 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 19:46:45.687354 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 19:46:45.687361 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 19:46:45.687368 kernel: ACPI: bus type drm_connector registered Sep 4 19:46:45.687374 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 19:46:45.687381 kernel: fuse: init (API version 7.39) Sep 4 19:46:45.687387 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 19:46:45.687394 kernel: loop: module loaded Sep 4 19:46:45.687400 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 19:46:45.687407 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 19:46:45.687415 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 19:46:45.687422 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 19:46:45.687428 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 19:46:45.687435 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 19:46:45.687450 systemd-journald[1101]: Collecting audit messages is disabled. Sep 4 19:46:45.687465 systemd-journald[1101]: Journal started Sep 4 19:46:45.687479 systemd-journald[1101]: Runtime Journal (/run/log/journal/75259b436d4344e2b9d7ec31e8eb86d9) is 8.0M, max 639.9M, 631.9M free. Sep 4 19:46:43.769615 systemd[1]: Queued start job for default target multi-user.target. Sep 4 19:46:43.789026 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6. Sep 4 19:46:43.789259 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 19:46:45.715227 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 19:46:45.749334 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 19:46:45.783255 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 19:46:45.816247 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 19:46:45.845256 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 19:46:45.845285 systemd[1]: Stopped verity-setup.service. Sep 4 19:46:45.912250 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 19:46:45.933404 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 19:46:45.942897 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Sep 4 19:46:45.953487 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 19:46:45.963492 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 19:46:45.973480 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 19:46:45.983460 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 19:46:45.993461 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 19:46:46.003571 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 19:46:46.014613 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 19:46:46.025813 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 19:46:46.026034 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 19:46:46.038091 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 19:46:46.038469 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 19:46:46.050378 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 19:46:46.050792 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 19:46:46.061150 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 19:46:46.061579 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 19:46:46.074140 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 19:46:46.074551 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 19:46:46.086127 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 19:46:46.086533 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 19:46:46.098130 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 19:46:46.110104 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 19:46:46.122090 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 19:46:46.134092 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 19:46:46.154934 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 19:46:46.173405 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 19:46:46.184112 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 19:46:46.193384 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 19:46:46.193411 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 19:46:46.194330 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 19:46:46.215991 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 19:46:46.230224 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 19:46:46.241708 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 19:46:46.244096 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 19:46:46.253974 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Sep 4 19:46:46.265343 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 19:46:46.266059 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 19:46:46.268829 systemd-journald[1101]: Time spent on flushing to /var/log/journal/75259b436d4344e2b9d7ec31e8eb86d9 is 10.734ms for 1137 entries. Sep 4 19:46:46.268829 systemd-journald[1101]: System Journal (/var/log/journal/75259b436d4344e2b9d7ec31e8eb86d9) is 8.0M, max 195.6M, 187.6M free. Sep 4 19:46:46.304469 systemd-journald[1101]: Received client request to flush runtime journal. Sep 4 19:46:46.297348 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 19:46:46.307702 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 19:46:46.317900 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 19:46:46.328932 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 19:46:46.345868 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 19:46:46.354205 kernel: loop0: detected capacity change from 0 to 89336 Sep 4 19:46:46.354809 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 19:46:46.375410 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 19:46:46.392405 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 19:46:46.399206 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 19:46:46.409625 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 19:46:46.421366 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 19:46:46.439385 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 19:46:46.449204 kernel: loop1: detected capacity change from 0 to 140728 Sep 4 19:46:46.458433 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 19:46:46.472290 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 19:46:46.494426 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 19:46:46.506866 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 19:46:46.517815 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 19:46:46.518266 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 19:46:46.537208 kernel: loop2: detected capacity change from 0 to 211296 Sep 4 19:46:46.546669 systemd-tmpfiles[1152]: ACLs are not supported, ignoring. Sep 4 19:46:46.546679 systemd-tmpfiles[1152]: ACLs are not supported, ignoring. Sep 4 19:46:46.547805 udevadm[1137]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 4 19:46:46.549178 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 19:46:46.605245 kernel: loop3: detected capacity change from 0 to 8 Sep 4 19:46:46.610473 ldconfig[1127]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Sep 4 19:46:46.612038 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 19:46:46.659368 kernel: loop4: detected capacity change from 0 to 89336 Sep 4 19:46:46.684994 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 19:46:46.689257 kernel: loop5: detected capacity change from 0 to 140728 Sep 4 19:46:46.719206 kernel: loop6: detected capacity change from 0 to 211296 Sep 4 19:46:46.720390 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 19:46:46.732357 systemd-udevd[1161]: Using default interface naming scheme 'v255'. Sep 4 19:46:46.753261 kernel: loop7: detected capacity change from 0 to 8 Sep 4 19:46:46.753677 (sd-merge)[1159]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Sep 4 19:46:46.753907 (sd-merge)[1159]: Merged extensions into '/usr'. Sep 4 19:46:46.756050 systemd[1]: Reloading requested from client PID 1133 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 19:46:46.756056 systemd[1]: Reloading... Sep 4 19:46:46.792210 zram_generator::config[1197]: No configuration found. Sep 4 19:46:46.792278 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (1166) Sep 4 19:46:46.813214 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1172) Sep 4 19:46:46.824282 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Sep 4 19:46:46.824354 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1172) Sep 4 19:46:46.841769 kernel: ACPI: button: Sleep Button [SLPB] Sep 4 19:46:46.902733 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 4 19:46:46.920244 kernel: IPMI message handler: version 39.2 Sep 4 19:46:46.920292 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 19:46:46.938305 kernel: ACPI: button: Power Button [PWRF] Sep 4 19:46:46.984478 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 19:46:46.998010 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Sep 4 19:46:46.998183 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Sep 4 19:46:47.010205 kernel: ipmi device interface Sep 4 19:46:47.013203 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Sep 4 19:46:47.037688 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. Sep 4 19:46:47.058418 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Sep 4 19:46:47.058596 systemd[1]: Reloading finished in 302 ms. 
Sep 4 19:46:47.065205 kernel: iTCO_vendor_support: vendor-support=0 Sep 4 19:46:47.066208 kernel: ipmi_si: IPMI System Interface driver Sep 4 19:46:47.066247 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Sep 4 19:46:47.066490 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Sep 4 19:46:47.076823 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Sep 4 19:46:47.141674 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Sep 4 19:46:47.158524 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Sep 4 19:46:47.175293 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Sep 4 19:46:47.193775 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Sep 4 19:46:47.213171 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Sep 4 19:46:47.229404 kernel: ipmi_si: Adding ACPI-specified kcs state machine Sep 4 19:46:47.249738 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Sep 4 19:46:47.287893 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Sep 4 19:46:47.288028 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Sep 4 19:46:47.299208 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Sep 4 19:46:47.375792 kernel: intel_rapl_common: Found RAPL domain package Sep 4 19:46:47.375826 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Sep 4 19:46:47.375930 kernel: intel_rapl_common: Found RAPL domain core Sep 4 19:46:47.375942 kernel: intel_rapl_common: Found RAPL domain dram Sep 4 19:46:47.425309 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 19:46:47.436405 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 19:46:47.465208 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Sep 4 19:46:47.482206 kernel: ipmi_ssif: IPMI SSIF Interface driver Sep 4 19:46:47.487513 systemd[1]: Starting ensure-sysext.service... Sep 4 19:46:47.495854 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 19:46:47.511839 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 19:46:47.523852 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 19:46:47.526496 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 19:46:47.532928 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 19:46:47.555061 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 19:46:47.566637 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 19:46:47.566840 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 19:46:47.567346 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 19:46:47.567513 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Sep 4 19:46:47.567548 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. 
Sep 4 19:46:47.568108 systemd[1]: Reloading requested from client PID 1332 ('systemctl') (unit ensure-sysext.service)... Sep 4 19:46:47.568121 systemd[1]: Reloading... Sep 4 19:46:47.571253 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 19:46:47.571257 systemd-tmpfiles[1337]: Skipping /boot Sep 4 19:46:47.575709 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 19:46:47.575713 systemd-tmpfiles[1337]: Skipping /boot Sep 4 19:46:47.600210 zram_generator::config[1368]: No configuration found. Sep 4 19:46:47.650864 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 19:46:47.705236 systemd[1]: Reloading finished in 136 ms. Sep 4 19:46:47.741456 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 19:46:47.752492 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 19:46:47.778374 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 19:46:47.789231 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 19:46:47.797579 augenrules[1442]: No rules Sep 4 19:46:47.801969 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 19:46:47.813966 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 19:46:47.820823 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 19:46:47.827328 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 19:46:47.851375 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 19:46:47.864168 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 19:46:47.874863 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 19:46:47.884475 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 19:46:47.894580 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 19:46:47.906464 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 19:46:47.917591 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 19:46:47.923248 systemd-networkd[1335]: lo: Link UP Sep 4 19:46:47.923252 systemd-networkd[1335]: lo: Gained carrier Sep 4 19:46:47.925774 systemd-networkd[1335]: bond0: netdev ready Sep 4 19:46:47.926162 systemd-networkd[1335]: Enumeration completed Sep 4 19:46:47.928404 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 19:46:47.935269 systemd-networkd[1335]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:97:f6:50.network. Sep 4 19:46:47.942543 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 19:46:47.953316 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 19:46:47.953457 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 19:46:47.954252 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Sep 4 19:46:47.966870 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 19:46:47.968745 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 19:46:47.977969 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 19:46:47.978314 systemd-resolved[1449]: Positive Trust Anchors: Sep 4 19:46:47.978320 systemd-resolved[1449]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 19:46:47.978344 systemd-resolved[1449]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 19:46:47.981021 systemd-resolved[1449]: Using system hostname 'ci-4054.1.0-a-2707fc1066'. Sep 4 19:46:47.990847 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 19:46:48.001277 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 19:46:48.002038 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 19:46:48.014918 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 19:46:48.024293 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 19:46:48.024394 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 19:46:48.025664 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 19:46:48.037578 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 19:46:48.037654 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 19:46:48.049519 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 19:46:48.049591 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 19:46:48.061520 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 19:46:48.061592 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 19:46:48.072498 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 19:46:48.084020 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 19:46:48.098919 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 19:46:48.099032 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 19:46:48.111463 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 19:46:48.121916 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 19:46:48.140438 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 4 19:46:48.150313 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 19:46:48.150386 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 19:46:48.150434 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 19:46:48.150962 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 19:46:48.151035 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 19:46:48.162501 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 19:46:48.162569 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 19:46:48.173478 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 19:46:48.173545 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 19:46:48.185494 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 19:46:48.185616 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 19:46:48.196392 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 19:46:48.206858 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 19:46:48.216852 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 19:46:48.227850 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 19:46:48.237351 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 19:46:48.237448 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 19:46:48.237517 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 19:46:48.238240 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 19:46:48.238325 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 19:46:48.249546 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 19:46:48.249613 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 19:46:48.259532 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 19:46:48.259599 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 19:46:48.270488 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 19:46:48.270553 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 19:46:48.281355 systemd[1]: Finished ensure-sysext.service. Sep 4 19:46:48.290682 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 19:46:48.290713 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Sep 4 19:46:48.300360 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 19:46:48.339956 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 19:46:48.351336 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 19:46:48.682240 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Sep 4 19:46:48.705257 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Sep 4 19:46:48.705876 systemd-networkd[1335]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:97:f6:51.network. Sep 4 19:46:48.706782 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 19:46:48.717481 systemd[1]: Reached target network.target - Network. Sep 4 19:46:48.727417 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 19:46:48.738390 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 19:46:48.748489 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 19:46:48.759398 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 19:46:48.770486 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 19:46:48.780503 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 19:46:48.791407 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 19:46:48.802400 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 19:46:48.802471 systemd[1]: Reached target paths.target - Path Units. Sep 4 19:46:48.810431 systemd[1]: Reached target timers.target - Timer Units. Sep 4 19:46:48.819734 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 19:46:48.832467 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 19:46:48.853555 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 19:46:48.864877 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 19:46:48.874705 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 19:46:48.884452 systemd[1]: Reached target basic.target - Basic System. Sep 4 19:46:48.893552 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 19:46:48.893631 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 19:46:48.902271 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 19:46:48.914037 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 19:46:48.924058 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 19:46:48.934068 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 19:46:48.943985 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Sep 4 19:46:48.946774 jq[1500]: false Sep 4 19:46:48.949730 dbus-daemon[1499]: [system] SELinux support is enabled Sep 4 19:46:48.950940 coreos-metadata[1498]: Sep 04 19:46:48.950 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 4 19:46:48.951906 coreos-metadata[1498]: Sep 04 19:46:48.951 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Sep 4 19:46:48.953473 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 19:46:48.954088 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 19:46:48.961584 extend-filesystems[1503]: Found loop4 Sep 4 19:46:48.961584 extend-filesystems[1503]: Found loop5 Sep 4 19:46:48.961584 extend-filesystems[1503]: Found loop6 Sep 4 19:46:48.961584 extend-filesystems[1503]: Found loop7 Sep 4 19:46:49.080363 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Sep 4 19:46:49.080517 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Sep 4 19:46:49.080529 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (1172) Sep 4 19:46:49.080539 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Sep 4 19:46:49.080547 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Sep 4 19:46:49.033633 systemd-networkd[1335]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Sep 4 19:46:49.080658 extend-filesystems[1503]: Found sda Sep 4 19:46:49.080658 extend-filesystems[1503]: Found sdb Sep 4 19:46:49.080658 extend-filesystems[1503]: Found sdb1 Sep 4 19:46:49.080658 extend-filesystems[1503]: Found sdb2 Sep 4 19:46:49.080658 extend-filesystems[1503]: Found sdb3 Sep 4 19:46:49.080658 extend-filesystems[1503]: Found usr Sep 4 19:46:49.080658 extend-filesystems[1503]: Found sdb4 Sep 4 19:46:49.080658 extend-filesystems[1503]: Found sdb6 Sep 4 19:46:49.080658 extend-filesystems[1503]: Found sdb7 Sep 4 19:46:49.080658 extend-filesystems[1503]: Found sdb9 Sep 4 19:46:49.080658 extend-filesystems[1503]: Checking size of /dev/sdb9 Sep 4 19:46:49.080658 extend-filesystems[1503]: Resized partition /dev/sdb9 Sep 4 19:46:49.240243 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Sep 4 19:46:49.240261 kernel: bond0: active interface up! Sep 4 19:46:49.035692 systemd-networkd[1335]: enp1s0f0np0: Link UP Sep 4 19:46:49.240382 extend-filesystems[1511]: resize2fs 1.47.1 (20-May-2024) Sep 4 19:46:49.036894 systemd-networkd[1335]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:97:f6:50.network. Sep 4 19:46:49.037075 systemd-networkd[1335]: enp1s0f1np1: Link UP Sep 4 19:46:49.037253 systemd-networkd[1335]: enp1s0f0np0: Gained carrier Sep 4 19:46:49.259486 sshd_keygen[1529]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 19:46:49.061354 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Sep 4 19:46:49.259559 update_engine[1530]: I0904 19:46:49.220435 1530 main.cc:92] Flatcar Update Engine starting Sep 4 19:46:49.259559 update_engine[1530]: I0904 19:46:49.221197 1530 update_check_scheduler.cc:74] Next update check in 11m52s Sep 4 19:46:49.063346 systemd-networkd[1335]: enp1s0f1np1: Gained carrier Sep 4 19:46:49.259739 jq[1538]: true Sep 4 19:46:49.069327 systemd-networkd[1335]: bond0: Link UP Sep 4 19:46:49.069568 systemd-networkd[1335]: bond0: Gained carrier Sep 4 19:46:49.069681 systemd-timesyncd[1493]: Network configuration changed, trying to establish connection. Sep 4 19:46:49.073860 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 19:46:49.096810 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 19:46:49.104594 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 19:46:49.118117 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Sep 4 19:46:49.178458 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 19:46:49.178857 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 19:46:49.199203 systemd-logind[1525]: Watching system buttons on /dev/input/event3 (Power Button) Sep 4 19:46:49.199214 systemd-logind[1525]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 4 19:46:49.199224 systemd-logind[1525]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Sep 4 19:46:49.199550 systemd-logind[1525]: New seat seat0. Sep 4 19:46:49.213313 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 19:46:49.251602 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 19:46:49.276021 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 19:46:49.287203 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Sep 4 19:46:49.305463 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 19:46:49.305550 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 19:46:49.305718 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 19:46:49.305804 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 19:46:49.315686 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 19:46:49.315773 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 19:46:49.326433 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 19:46:49.339375 (ntainerd)[1543]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 19:46:49.340698 jq[1542]: false Sep 4 19:46:49.341690 systemd[1]: update-ssh-keys-after-ignition.service: Skipped due to 'exec-condition'. Sep 4 19:46:49.341785 systemd[1]: Condition check resulted in update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition being skipped. Sep 4 19:46:49.342419 dbus-daemon[1499]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 19:46:49.343917 tar[1540]: linux-amd64/helm Sep 4 19:46:49.349823 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Sep 4 19:46:49.349918 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. 
Sep 4 19:46:49.351805 systemd[1]: Started update-engine.service - Update Engine. Sep 4 19:46:49.375408 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 19:46:49.384201 systemd[1]: Starting sshkeys.service... Sep 4 19:46:49.391320 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 19:46:49.391420 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 19:46:49.402313 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 19:46:49.402391 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 19:46:49.432437 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 19:46:49.445185 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 19:46:49.445284 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 19:46:49.450738 locksmithd[1564]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 19:46:49.473538 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 19:46:49.483732 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 19:46:49.495418 containerd[1543]: time="2024-09-04T19:46:49.495322676Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20 Sep 4 19:46:49.496685 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 4 19:46:49.508076 containerd[1543]: time="2024-09-04T19:46:49.508027294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 19:46:49.508845 containerd[1543]: time="2024-09-04T19:46:49.508799295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 19:46:49.508845 containerd[1543]: time="2024-09-04T19:46:49.508816429Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 19:46:49.508845 containerd[1543]: time="2024-09-04T19:46:49.508826847Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 19:46:49.508959 containerd[1543]: time="2024-09-04T19:46:49.508927154Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 19:46:49.508959 containerd[1543]: time="2024-09-04T19:46:49.508937816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 19:46:49.509000 containerd[1543]: time="2024-09-04T19:46:49.508981390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 19:46:49.509000 containerd[1543]: time="2024-09-04T19:46:49.508989812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Sep 4 19:46:49.509105 containerd[1543]: time="2024-09-04T19:46:49.509095459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 19:46:49.509121 containerd[1543]: time="2024-09-04T19:46:49.509105344Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 19:46:49.509134 containerd[1543]: time="2024-09-04T19:46:49.509129007Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 19:46:49.509148 containerd[1543]: time="2024-09-04T19:46:49.509135772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 19:46:49.509194 containerd[1543]: time="2024-09-04T19:46:49.509186715Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 19:46:49.509488 containerd[1543]: time="2024-09-04T19:46:49.509449715Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 19:46:49.509519 containerd[1543]: time="2024-09-04T19:46:49.509510651Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 19:46:49.509544 containerd[1543]: time="2024-09-04T19:46:49.509519594Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 19:46:49.509609 containerd[1543]: time="2024-09-04T19:46:49.509567609Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 19:46:49.509609 containerd[1543]: time="2024-09-04T19:46:49.509602562Z" level=info msg="metadata content store policy set" policy=shared Sep 4 19:46:49.528205 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Sep 4 19:46:49.529420 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 4 19:46:49.539917 coreos-metadata[1587]: Sep 04 19:46:49.539 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 4 19:46:49.541958 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 19:46:49.548168 containerd[1543]: time="2024-09-04T19:46:49.548140837Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 19:46:49.548207 containerd[1543]: time="2024-09-04T19:46:49.548189982Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 19:46:49.548226 containerd[1543]: time="2024-09-04T19:46:49.548207287Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 19:46:49.548226 containerd[1543]: time="2024-09-04T19:46:49.548218549Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 19:46:49.548265 containerd[1543]: time="2024-09-04T19:46:49.548237751Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Sep 4 19:46:49.548346 containerd[1543]: time="2024-09-04T19:46:49.548337865Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 19:46:49.548509 containerd[1543]: time="2024-09-04T19:46:49.548501116Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 19:46:49.548575 containerd[1543]: time="2024-09-04T19:46:49.548566844Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 19:46:49.548594 containerd[1543]: time="2024-09-04T19:46:49.548576980Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 19:46:49.548594 containerd[1543]: time="2024-09-04T19:46:49.548584438Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 19:46:49.548637 containerd[1543]: time="2024-09-04T19:46:49.548596721Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 19:46:49.548637 containerd[1543]: time="2024-09-04T19:46:49.548604851Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 19:46:49.548637 containerd[1543]: time="2024-09-04T19:46:49.548611890Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 19:46:49.548637 containerd[1543]: time="2024-09-04T19:46:49.548619059Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 19:46:49.548637 containerd[1543]: time="2024-09-04T19:46:49.548632186Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 19:46:49.548750 containerd[1543]: time="2024-09-04T19:46:49.548639649Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 19:46:49.548750 containerd[1543]: time="2024-09-04T19:46:49.548646353Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 19:46:49.548750 containerd[1543]: time="2024-09-04T19:46:49.548652924Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 19:46:49.548750 containerd[1543]: time="2024-09-04T19:46:49.548663768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 19:46:49.548750 containerd[1543]: time="2024-09-04T19:46:49.548671041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 19:46:49.548750 containerd[1543]: time="2024-09-04T19:46:49.548677699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 19:46:49.548750 containerd[1543]: time="2024-09-04T19:46:49.548685182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 19:46:49.548750 containerd[1543]: time="2024-09-04T19:46:49.548692210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 19:46:49.548750 containerd[1543]: time="2024-09-04T19:46:49.548699506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Sep 4 19:46:49.548750 containerd[1543]: time="2024-09-04T19:46:49.548706568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 19:46:49.548750 containerd[1543]: time="2024-09-04T19:46:49.548713690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 19:46:49.548750 containerd[1543]: time="2024-09-04T19:46:49.548721170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 19:46:49.548750 containerd[1543]: time="2024-09-04T19:46:49.548729267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 19:46:49.548750 containerd[1543]: time="2024-09-04T19:46:49.548739451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 19:46:49.548940 extend-filesystems[1511]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Sep 4 19:46:49.548940 extend-filesystems[1511]: old_desc_blocks = 1, new_desc_blocks = 56 Sep 4 19:46:49.548940 extend-filesystems[1511]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Sep 4 19:46:49.580336 containerd[1543]: time="2024-09-04T19:46:49.548754072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 19:46:49.580336 containerd[1543]: time="2024-09-04T19:46:49.548762045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 19:46:49.580336 containerd[1543]: time="2024-09-04T19:46:49.548770932Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 19:46:49.580336 containerd[1543]: time="2024-09-04T19:46:49.548785184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 19:46:49.580336 containerd[1543]: time="2024-09-04T19:46:49.548795482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 19:46:49.580336 containerd[1543]: time="2024-09-04T19:46:49.548802062Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 19:46:49.580336 containerd[1543]: time="2024-09-04T19:46:49.548824809Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 19:46:49.580336 containerd[1543]: time="2024-09-04T19:46:49.548835425Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 19:46:49.580336 containerd[1543]: time="2024-09-04T19:46:49.548842287Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 19:46:49.580336 containerd[1543]: time="2024-09-04T19:46:49.548849393Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 19:46:49.580336 containerd[1543]: time="2024-09-04T19:46:49.548854767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 19:46:49.580336 containerd[1543]: time="2024-09-04T19:46:49.548861219Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Sep 4 19:46:49.580336 containerd[1543]: time="2024-09-04T19:46:49.548867746Z" level=info msg="NRI interface is disabled by configuration." Sep 4 19:46:49.580336 containerd[1543]: time="2024-09-04T19:46:49.548873959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 19:46:49.551348 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Sep 4 19:46:49.580582 extend-filesystems[1503]: Resized filesystem in /dev/sdb9 Sep 4 19:46:49.580513 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 19:46:49.608352 containerd[1543]: time="2024-09-04T19:46:49.549044490Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 19:46:49.608352 containerd[1543]: time="2024-09-04T19:46:49.549080909Z" level=info msg="Connect containerd service" Sep 4 19:46:49.608352 containerd[1543]: time="2024-09-04T19:46:49.549094342Z" level=info msg="using legacy CRI server" Sep 4 19:46:49.608352 containerd[1543]: time="2024-09-04T19:46:49.549098253Z" level=info msg="using experimental NRI integration - disable nri 
plugin to prevent this" Sep 4 19:46:49.608352 containerd[1543]: time="2024-09-04T19:46:49.549149379Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 19:46:49.608352 containerd[1543]: time="2024-09-04T19:46:49.549459689Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 19:46:49.608352 containerd[1543]: time="2024-09-04T19:46:49.549568681Z" level=info msg="Start subscribing containerd event" Sep 4 19:46:49.608352 containerd[1543]: time="2024-09-04T19:46:49.549600635Z" level=info msg="Start recovering state" Sep 4 19:46:49.608352 containerd[1543]: time="2024-09-04T19:46:49.549610521Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 19:46:49.608352 containerd[1543]: time="2024-09-04T19:46:49.549635592Z" level=info msg="Start event monitor" Sep 4 19:46:49.608352 containerd[1543]: time="2024-09-04T19:46:49.549635264Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 19:46:49.608352 containerd[1543]: time="2024-09-04T19:46:49.549647087Z" level=info msg="Start snapshots syncer" Sep 4 19:46:49.608352 containerd[1543]: time="2024-09-04T19:46:49.549653264Z" level=info msg="Start cni network conf syncer for default" Sep 4 19:46:49.608352 containerd[1543]: time="2024-09-04T19:46:49.549657379Z" level=info msg="Start streaming server" Sep 4 19:46:49.608352 containerd[1543]: time="2024-09-04T19:46:49.549688816Z" level=info msg="containerd successfully booted in 0.055426s" Sep 4 19:46:49.601027 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 19:46:49.618699 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 19:46:49.618826 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 19:46:49.660165 tar[1540]: linux-amd64/LICENSE Sep 4 19:46:49.660215 tar[1540]: linux-amd64/README.md Sep 4 19:46:49.672766 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 19:46:49.952040 coreos-metadata[1498]: Sep 04 19:46:49.951 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Sep 4 19:46:50.409639 systemd-networkd[1335]: bond0: Gained IPv6LL Sep 4 19:46:50.411332 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 19:46:50.422782 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 19:46:50.449540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 19:46:50.460250 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 19:46:50.481491 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 19:46:51.117930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
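The "no network config found in /etc/cni/net.d" error just above is expected at this stage: containerd's CRI plugin looks for a CNI conflist at startup and keeps retrying through its conf syncer until a network add-on installs one. A minimal sketch of such a conflist follows; the file name, network name, bridge device and subnet are placeholders, the bridge/host-local/portmap binaries are assumed to exist under /opt/cni/bin (the NetworkPluginBinDir from the config dump above), and on a kubeadm cluster the chosen CNI add-on normally writes this file itself.

    # Placeholder example only: a minimal bridge conflist for the CRI plugin.
    cat <<'EOF' > /etc/cni/net.d/10-example.conflist
    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.88.0.0/16",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
    # No restart needed: the "Start cni network conf syncer" loop above picks it up.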
Sep 4 19:46:51.129757 (kubelet)[1620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 19:46:51.622630 kubelet[1620]: E0904 19:46:51.622540 1620 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 19:46:51.623841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 19:46:51.623916 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 19:46:52.176731 systemd-timesyncd[1493]: Contacted time server 129.146.193.200:123 (0.flatcar.pool.ntp.org). Sep 4 19:46:52.176777 systemd-timesyncd[1493]: Initial clock synchronization to Wed 2024-09-04 19:46:52.448099 UTC. Sep 4 19:46:52.599910 coreos-metadata[1587]: Sep 04 19:46:52.599 INFO Fetch successful Sep 4 19:46:52.629560 unknown[1587]: wrote ssh authorized keys file for user: core Sep 4 19:46:52.654094 update-ssh-keys[1638]: Updated "/home/core/.ssh/authorized_keys" Sep 4 19:46:52.654462 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 4 19:46:52.655449 coreos-metadata[1498]: Sep 04 19:46:52.655 INFO Fetch successful Sep 4 19:46:52.667391 systemd[1]: Finished sshkeys.service. Sep 4 19:46:52.702447 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 19:46:52.714540 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Sep 4 19:46:52.757103 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Sep 4 19:46:52.757496 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Sep 4 19:46:52.974079 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Sep 4 19:46:52.985862 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 19:46:52.995419 systemd[1]: Startup finished in 2.673s (kernel) + 7.198s (initrd) + 9.941s (userspace) = 19.813s. Sep 4 19:46:53.049952 login[1592]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 4 19:46:53.051380 login[1596]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 4 19:46:53.056243 systemd-logind[1525]: New session 2 of user core. Sep 4 19:46:53.057051 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 19:46:53.076406 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 19:46:53.077979 systemd-logind[1525]: New session 1 of user core. Sep 4 19:46:53.082886 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 19:46:53.084365 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 19:46:53.089429 (systemd)[1657]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 19:46:53.162764 systemd[1657]: Queued start job for default target default.target. Sep 4 19:46:53.173771 systemd[1657]: Created slice app.slice - User Application Slice. Sep 4 19:46:53.173785 systemd[1657]: Reached target paths.target - Paths. Sep 4 19:46:53.173794 systemd[1657]: Reached target timers.target - Timers. Sep 4 19:46:53.174485 systemd[1657]: Starting dbus.socket - D-Bus User Message Bus Socket... 
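The kubelet exit right above ("open /var/lib/kubelet/config.yaml: no such file or directory", status=1/FAILURE) is the expected crash loop on a node that has not been initialized yet: that config file is written by kubeadm, and systemd keeps restarting kubelet.service until it appears. A hedged sketch of the check and of the step that normally resolves it (the kubeadm invocation and CIDR are illustrative, not taken from this host):

    # The file kubelet is complaining about is generated by kubeadm.
    ls -l /var/lib/kubelet/config.yaml     # absent until kubeadm init/join has run

    # Illustrative only: bootstrapping a control-plane node writes the config
    # plus the static pod manifests, after which the restart loop resolves itself.
    kubeadm init --pod-network-cidr=10.244.0.0/16
    systemctl status kubelet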
Sep 4 19:46:53.180163 systemd[1657]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 19:46:53.180193 systemd[1657]: Reached target sockets.target - Sockets. Sep 4 19:46:53.180202 systemd[1657]: Reached target basic.target - Basic System. Sep 4 19:46:53.180265 systemd[1657]: Reached target default.target - Main User Target. Sep 4 19:46:53.180282 systemd[1657]: Startup finished in 87ms. Sep 4 19:46:53.180369 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 19:46:53.181142 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 19:46:53.181585 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 19:47:00.024067 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 19:47:00.041513 systemd[1]: Started sshd@0-147.75.90.143:22-139.178.89.65:33224.service - OpenSSH per-connection server daemon (139.178.89.65:33224). Sep 4 19:47:00.073288 sshd[1688]: Accepted publickey for core from 139.178.89.65 port 33224 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 19:47:00.074702 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 19:47:00.080159 systemd-logind[1525]: New session 3 of user core. Sep 4 19:47:00.095685 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 19:47:00.160402 systemd[1]: Started sshd@1-147.75.90.143:22-139.178.89.65:33226.service - OpenSSH per-connection server daemon (139.178.89.65:33226). Sep 4 19:47:00.185558 sshd[1693]: Accepted publickey for core from 139.178.89.65 port 33226 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 19:47:00.186377 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 19:47:00.188870 systemd-logind[1525]: New session 4 of user core. Sep 4 19:47:00.201378 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 19:47:00.251066 sshd[1693]: pam_unix(sshd:session): session closed for user core Sep 4 19:47:00.263692 systemd[1]: sshd@1-147.75.90.143:22-139.178.89.65:33226.service: Deactivated successfully. Sep 4 19:47:00.264929 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 19:47:00.266236 systemd-logind[1525]: Session 4 logged out. Waiting for processes to exit. Sep 4 19:47:00.267371 systemd[1]: Started sshd@2-147.75.90.143:22-139.178.89.65:33234.service - OpenSSH per-connection server daemon (139.178.89.65:33234). Sep 4 19:47:00.268323 systemd-logind[1525]: Removed session 4. Sep 4 19:47:00.296360 sshd[1700]: Accepted publickey for core from 139.178.89.65 port 33234 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 19:47:00.297167 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 19:47:00.300180 systemd-logind[1525]: New session 5 of user core. Sep 4 19:47:00.316468 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 19:47:00.369158 sshd[1700]: pam_unix(sshd:session): session closed for user core Sep 4 19:47:00.386888 systemd[1]: sshd@2-147.75.90.143:22-139.178.89.65:33234.service: Deactivated successfully. Sep 4 19:47:00.387725 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 19:47:00.388540 systemd-logind[1525]: Session 5 logged out. Waiting for processes to exit. Sep 4 19:47:00.389301 systemd[1]: Started sshd@3-147.75.90.143:22-139.178.89.65:33246.service - OpenSSH per-connection server daemon (139.178.89.65:33246). Sep 4 19:47:00.389921 systemd-logind[1525]: Removed session 5. 
Sep 4 19:47:00.417327 sshd[1708]: Accepted publickey for core from 139.178.89.65 port 33246 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 19:47:00.418124 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 19:47:00.421094 systemd-logind[1525]: New session 6 of user core. Sep 4 19:47:00.431459 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 19:47:00.496300 sshd[1708]: pam_unix(sshd:session): session closed for user core Sep 4 19:47:00.512915 systemd[1]: sshd@3-147.75.90.143:22-139.178.89.65:33246.service: Deactivated successfully. Sep 4 19:47:00.516489 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 19:47:00.519797 systemd-logind[1525]: Session 6 logged out. Waiting for processes to exit. Sep 4 19:47:00.533973 systemd[1]: Started sshd@4-147.75.90.143:22-139.178.89.65:33250.service - OpenSSH per-connection server daemon (139.178.89.65:33250). Sep 4 19:47:00.536678 systemd-logind[1525]: Removed session 6. Sep 4 19:47:00.587388 sshd[1715]: Accepted publickey for core from 139.178.89.65 port 33250 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 19:47:00.589193 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 19:47:00.595327 systemd-logind[1525]: New session 7 of user core. Sep 4 19:47:00.605644 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 19:47:00.672790 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 19:47:00.672941 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 19:47:00.693897 sudo[1719]: pam_unix(sudo:session): session closed for user root Sep 4 19:47:00.694921 sshd[1715]: pam_unix(sshd:session): session closed for user core Sep 4 19:47:00.706142 systemd[1]: sshd@4-147.75.90.143:22-139.178.89.65:33250.service: Deactivated successfully. Sep 4 19:47:00.707164 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 19:47:00.708135 systemd-logind[1525]: Session 7 logged out. Waiting for processes to exit. Sep 4 19:47:00.709020 systemd[1]: Started sshd@5-147.75.90.143:22-139.178.89.65:33260.service - OpenSSH per-connection server daemon (139.178.89.65:33260). Sep 4 19:47:00.709716 systemd-logind[1525]: Removed session 7. Sep 4 19:47:00.743651 sshd[1724]: Accepted publickey for core from 139.178.89.65 port 33260 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 19:47:00.745015 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 19:47:00.750127 systemd-logind[1525]: New session 8 of user core. Sep 4 19:47:00.765684 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 19:47:00.826220 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 19:47:00.826377 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 19:47:00.828513 sudo[1728]: pam_unix(sudo:session): session closed for user root Sep 4 19:47:00.831207 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 19:47:00.831372 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 19:47:00.849557 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Sep 4 19:47:00.850754 auditctl[1731]: No rules Sep 4 19:47:00.850995 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 19:47:00.851121 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 19:47:00.852829 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 19:47:00.878865 augenrules[1749]: No rules Sep 4 19:47:00.879660 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 19:47:00.880907 sudo[1727]: pam_unix(sudo:session): session closed for user root Sep 4 19:47:00.882917 sshd[1724]: pam_unix(sshd:session): session closed for user core Sep 4 19:47:00.905968 systemd[1]: sshd@5-147.75.90.143:22-139.178.89.65:33260.service: Deactivated successfully. Sep 4 19:47:00.909487 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 19:47:00.912857 systemd-logind[1525]: Session 8 logged out. Waiting for processes to exit. Sep 4 19:47:00.929967 systemd[1]: Started sshd@6-147.75.90.143:22-139.178.89.65:33270.service - OpenSSH per-connection server daemon (139.178.89.65:33270). Sep 4 19:47:00.932391 systemd-logind[1525]: Removed session 8. Sep 4 19:47:00.980641 sshd[1757]: Accepted publickey for core from 139.178.89.65 port 33270 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 19:47:00.982512 sshd[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 19:47:00.988530 systemd-logind[1525]: New session 9 of user core. Sep 4 19:47:01.008609 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 19:47:01.078347 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 19:47:01.079166 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 19:47:01.333556 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 19:47:01.333644 (dockerd)[1774]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 19:47:01.590089 dockerd[1774]: time="2024-09-04T19:47:01.589977118Z" level=info msg="Starting up" Sep 4 19:47:01.663459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 19:47:01.672419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 19:47:01.678928 dockerd[1774]: time="2024-09-04T19:47:01.678880215Z" level=info msg="Loading containers: start." Sep 4 19:47:01.910600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 19:47:01.912847 (kubelet)[1839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 19:47:01.936897 kubelet[1839]: E0904 19:47:01.936873 1839 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 19:47:01.939401 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 19:47:01.939488 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
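Earlier in this stretch the install session removes the default audit rule files and restarts audit-rules.service, so both auditctl and augenrules report "No rules": the ruleset is rebuilt from whatever remains under /etc/audit/rules.d/, which is now empty. Roughly the same thing done by hand would be:

    # Rebuild the kernel audit ruleset from /etc/audit/rules.d/ and show the result.
    augenrules --load     # concatenates rules.d/*.rules and loads them
    auditctl -l           # prints "No rules" when the set is empty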
Sep 4 19:47:01.947296 kernel: Initializing XFRM netlink socket Sep 4 19:47:01.992719 systemd-networkd[1335]: docker0: Link UP Sep 4 19:47:02.016144 dockerd[1774]: time="2024-09-04T19:47:02.016129134Z" level=info msg="Loading containers: done." Sep 4 19:47:02.025864 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck529649288-merged.mount: Deactivated successfully. Sep 4 19:47:02.027232 dockerd[1774]: time="2024-09-04T19:47:02.027181567Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 19:47:02.027275 dockerd[1774]: time="2024-09-04T19:47:02.027237848Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 4 19:47:02.027295 dockerd[1774]: time="2024-09-04T19:47:02.027290310Z" level=info msg="Daemon has completed initialization" Sep 4 19:47:02.043841 dockerd[1774]: time="2024-09-04T19:47:02.043782204Z" level=info msg="API listen on /run/docker.sock" Sep 4 19:47:02.043929 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 19:47:02.771130 containerd[1543]: time="2024-09-04T19:47:02.771109825Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\"" Sep 4 19:47:03.348569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount328971700.mount: Deactivated successfully. Sep 4 19:47:04.993869 containerd[1543]: time="2024-09-04T19:47:04.993812441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:04.994080 containerd[1543]: time="2024-09-04T19:47:04.994022978Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.8: active requests=0, bytes read=35232949" Sep 4 19:47:04.994472 containerd[1543]: time="2024-09-04T19:47:04.994428591Z" level=info msg="ImageCreate event name:\"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:04.996584 containerd[1543]: time="2024-09-04T19:47:04.996543108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:04.997623 containerd[1543]: time="2024-09-04T19:47:04.997582093Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.8\" with image id \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\", size \"35229749\" in 2.226451512s" Sep 4 19:47:04.997623 containerd[1543]: time="2024-09-04T19:47:04.997598257Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\" returns image reference \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\"" Sep 4 19:47:05.008490 containerd[1543]: time="2024-09-04T19:47:05.008470502Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\"" Sep 4 19:47:07.325388 containerd[1543]: time="2024-09-04T19:47:07.325366863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:07.325608 containerd[1543]: 
time="2024-09-04T19:47:07.325590916Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.8: active requests=0, bytes read=32206206" Sep 4 19:47:07.325934 containerd[1543]: time="2024-09-04T19:47:07.325924082Z" level=info msg="ImageCreate event name:\"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:07.327517 containerd[1543]: time="2024-09-04T19:47:07.327475934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:07.328075 containerd[1543]: time="2024-09-04T19:47:07.328059867Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.8\" with image id \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\", size \"33756152\" in 2.31956856s" Sep 4 19:47:07.328106 containerd[1543]: time="2024-09-04T19:47:07.328077731Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\" returns image reference \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\"" Sep 4 19:47:07.339269 containerd[1543]: time="2024-09-04T19:47:07.339250337Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\"" Sep 4 19:47:08.420646 containerd[1543]: time="2024-09-04T19:47:08.420586105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:08.420858 containerd[1543]: time="2024-09-04T19:47:08.420793302Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.8: active requests=0, bytes read=17321507" Sep 4 19:47:08.421283 containerd[1543]: time="2024-09-04T19:47:08.421242469Z" level=info msg="ImageCreate event name:\"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:08.422748 containerd[1543]: time="2024-09-04T19:47:08.422704714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:08.423363 containerd[1543]: time="2024-09-04T19:47:08.423322403Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.8\" with image id \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\", size \"18871471\" in 1.084051846s" Sep 4 19:47:08.423363 containerd[1543]: time="2024-09-04T19:47:08.423338049Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\" returns image reference \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\"" Sep 4 19:47:08.434077 containerd[1543]: time="2024-09-04T19:47:08.434057067Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\"" Sep 4 19:47:09.217413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3072634501.mount: Deactivated successfully. 
Sep 4 19:47:09.421406 containerd[1543]: time="2024-09-04T19:47:09.421378378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:09.421617 containerd[1543]: time="2024-09-04T19:47:09.421594890Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.8: active requests=0, bytes read=28600380" Sep 4 19:47:09.421946 containerd[1543]: time="2024-09-04T19:47:09.421934513Z" level=info msg="ImageCreate event name:\"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:09.422888 containerd[1543]: time="2024-09-04T19:47:09.422846946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:09.423172 containerd[1543]: time="2024-09-04T19:47:09.423136064Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.8\" with image id \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\", repo tag \"registry.k8s.io/kube-proxy:v1.29.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\", size \"28599399\" in 989.057425ms" Sep 4 19:47:09.423172 containerd[1543]: time="2024-09-04T19:47:09.423151043Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\" returns image reference \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\"" Sep 4 19:47:09.434563 containerd[1543]: time="2024-09-04T19:47:09.434504947Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Sep 4 19:47:09.912190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount359134618.mount: Deactivated successfully. 
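The PullImage/ImageCreate pairs in this stretch are containerd's CRI plugin fetching the v1.29.8 control-plane images, presumably kubeadm's preflight image pull triggered from the install script. They land in containerd's k8s.io namespace rather than in Docker, so a hedged way to list them (crictl/ctr availability assumed):

    # Images pulled through the CRI plugin live in containerd's k8s.io namespace.
    crictl images | grep registry.k8s.io
    ctr -n k8s.io images ls | grep kube-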
Sep 4 19:47:10.444503 containerd[1543]: time="2024-09-04T19:47:10.444479159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:10.444707 containerd[1543]: time="2024-09-04T19:47:10.444659664Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Sep 4 19:47:10.445078 containerd[1543]: time="2024-09-04T19:47:10.445067395Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:10.446679 containerd[1543]: time="2024-09-04T19:47:10.446620052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:10.447303 containerd[1543]: time="2024-09-04T19:47:10.447266379Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.012739791s" Sep 4 19:47:10.447303 containerd[1543]: time="2024-09-04T19:47:10.447283291Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Sep 4 19:47:10.458135 containerd[1543]: time="2024-09-04T19:47:10.458115964Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 19:47:10.938494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3791396632.mount: Deactivated successfully. 
Sep 4 19:47:10.940164 containerd[1543]: time="2024-09-04T19:47:10.940144581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:10.940425 containerd[1543]: time="2024-09-04T19:47:10.940402911Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Sep 4 19:47:10.940772 containerd[1543]: time="2024-09-04T19:47:10.940758064Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:10.941945 containerd[1543]: time="2024-09-04T19:47:10.941933066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:10.942495 containerd[1543]: time="2024-09-04T19:47:10.942453336Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 484.303491ms" Sep 4 19:47:10.942495 containerd[1543]: time="2024-09-04T19:47:10.942469427Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 4 19:47:10.955855 containerd[1543]: time="2024-09-04T19:47:10.955762844Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 19:47:11.406280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1389953190.mount: Deactivated successfully. Sep 4 19:47:12.190004 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 19:47:12.208363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 19:47:12.389169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 19:47:12.391395 (kubelet)[2227]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 19:47:12.414959 kubelet[2227]: E0904 19:47:12.414900 2227 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 19:47:12.416489 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 19:47:12.416583 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 4 19:47:12.653899 containerd[1543]: time="2024-09-04T19:47:12.653808563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:12.654089 containerd[1543]: time="2024-09-04T19:47:12.653995967Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Sep 4 19:47:12.654484 containerd[1543]: time="2024-09-04T19:47:12.654473575Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:12.656851 containerd[1543]: time="2024-09-04T19:47:12.656808527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:12.657542 containerd[1543]: time="2024-09-04T19:47:12.657497899Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 1.701714987s" Sep 4 19:47:12.657542 containerd[1543]: time="2024-09-04T19:47:12.657513865Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Sep 4 19:47:14.309913 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 19:47:14.318829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 19:47:14.334869 systemd[1]: Reloading requested from client PID 2403 ('systemctl') (unit session-9.scope)... Sep 4 19:47:14.334877 systemd[1]: Reloading... Sep 4 19:47:14.367276 zram_generator::config[2440]: No configuration found. Sep 4 19:47:14.433720 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 19:47:14.504073 systemd[1]: Reloading finished in 169 ms. Sep 4 19:47:14.541404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 19:47:14.542574 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 19:47:14.543646 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 19:47:14.543751 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 19:47:14.544570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 19:47:14.751966 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 19:47:14.754347 (kubelet)[2507]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 19:47:14.780428 kubelet[2507]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 19:47:14.780428 kubelet[2507]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
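The "Reloading requested from client PID 2403 ('systemctl')" lines record a systemd daemon-reload issued from the SSH session (session-9.scope), presumably after unit drop-ins for kubelet were written; the /var/run legacy-path note for docker.socket is informational. The kubelet that comes back afterwards starts with --container-runtime-endpoint and related flags, which it warns are deprecated in favour of the config file. A hedged way to inspect and repeat that sequence:

    # Inspect the unit plus any drop-ins the install step wrote for kubelet.
    systemctl cat kubelet.service

    # Re-run the reload/restart the log shows systemd performing.
    systemctl daemon-reload
    systemctl restart kubelet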
Sep 4 19:47:14.780428 kubelet[2507]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 19:47:14.780428 kubelet[2507]: I0904 19:47:14.780416 2507 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 19:47:14.974111 kubelet[2507]: I0904 19:47:14.974065 2507 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 19:47:14.974111 kubelet[2507]: I0904 19:47:14.974078 2507 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 19:47:14.974273 kubelet[2507]: I0904 19:47:14.974211 2507 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 19:47:14.987450 kubelet[2507]: E0904 19:47:14.987405 2507 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.75.90.143:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.75.90.143:6443: connect: connection refused Sep 4 19:47:14.987941 kubelet[2507]: I0904 19:47:14.987890 2507 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 19:47:15.003083 kubelet[2507]: I0904 19:47:15.003033 2507 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 19:47:15.003939 kubelet[2507]: I0904 19:47:15.003931 2507 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 19:47:15.004027 kubelet[2507]: I0904 19:47:15.004021 2507 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 19:47:15.004371 kubelet[2507]: I0904 19:47:15.004364 2507 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 19:47:15.004371 kubelet[2507]: I0904 19:47:15.004373 2507 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 
19:47:15.004431 kubelet[2507]: I0904 19:47:15.004426 2507 state_mem.go:36] "Initialized new in-memory state store" Sep 4 19:47:15.004476 kubelet[2507]: I0904 19:47:15.004471 2507 kubelet.go:396] "Attempting to sync node with API server" Sep 4 19:47:15.004499 kubelet[2507]: I0904 19:47:15.004479 2507 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 19:47:15.004499 kubelet[2507]: I0904 19:47:15.004493 2507 kubelet.go:312] "Adding apiserver pod source" Sep 4 19:47:15.004549 kubelet[2507]: I0904 19:47:15.004501 2507 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 19:47:15.005449 kubelet[2507]: I0904 19:47:15.005415 2507 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 19:47:15.005857 kubelet[2507]: W0904 19:47:15.005836 2507 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://147.75.90.143:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.90.143:6443: connect: connection refused Sep 4 19:47:15.005889 kubelet[2507]: E0904 19:47:15.005867 2507 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.90.143:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.90.143:6443: connect: connection refused Sep 4 19:47:15.006559 kubelet[2507]: W0904 19:47:15.006521 2507 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://147.75.90.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-2707fc1066&limit=500&resourceVersion=0": dial tcp 147.75.90.143:6443: connect: connection refused Sep 4 19:47:15.006626 kubelet[2507]: E0904 19:47:15.006578 2507 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.75.90.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-2707fc1066&limit=500&resourceVersion=0": dial tcp 147.75.90.143:6443: connect: connection refused Sep 4 19:47:15.008123 kubelet[2507]: I0904 19:47:15.008079 2507 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 19:47:15.009007 kubelet[2507]: W0904 19:47:15.008971 2507 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 4 19:47:15.009299 kubelet[2507]: I0904 19:47:15.009274 2507 server.go:1256] "Started kubelet" Sep 4 19:47:15.009378 kubelet[2507]: I0904 19:47:15.009349 2507 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 19:47:15.009439 kubelet[2507]: I0904 19:47:15.009427 2507 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 19:47:15.012040 kubelet[2507]: I0904 19:47:15.012026 2507 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 19:47:15.013093 kubelet[2507]: I0904 19:47:15.013058 2507 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 19:47:15.013093 kubelet[2507]: I0904 19:47:15.013057 2507 server.go:461] "Adding debug handlers to kubelet server" Sep 4 19:47:15.013257 kubelet[2507]: I0904 19:47:15.013244 2507 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 19:47:15.013302 kubelet[2507]: I0904 19:47:15.013275 2507 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 19:47:15.013336 kubelet[2507]: I0904 19:47:15.013320 2507 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 19:47:15.013456 kubelet[2507]: E0904 19:47:15.013418 2507 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-2707fc1066?timeout=10s\": dial tcp 147.75.90.143:6443: connect: connection refused" interval="200ms" Sep 4 19:47:15.013586 kubelet[2507]: I0904 19:47:15.013577 2507 factory.go:221] Registration of the systemd container factory successfully Sep 4 19:47:15.013644 kubelet[2507]: W0904 19:47:15.013610 2507 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://147.75.90.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.143:6443: connect: connection refused Sep 4 19:47:15.013672 kubelet[2507]: I0904 19:47:15.013643 2507 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 19:47:15.013672 kubelet[2507]: E0904 19:47:15.013655 2507 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.90.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.143:6443: connect: connection refused Sep 4 19:47:15.014080 kubelet[2507]: E0904 19:47:15.014068 2507 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 19:47:15.014429 kubelet[2507]: I0904 19:47:15.014422 2507 factory.go:221] Registration of the containerd container factory successfully Sep 4 19:47:15.014730 kubelet[2507]: E0904 19:47:15.014554 2507 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.90.143:6443/api/v1/namespaces/default/events\": dial tcp 147.75.90.143:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4054.1.0-a-2707fc1066.17f22238e60a0144 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4054.1.0-a-2707fc1066,UID:ci-4054.1.0-a-2707fc1066,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4054.1.0-a-2707fc1066,},FirstTimestamp:2024-09-04 19:47:15.009249604 +0000 UTC m=+0.252888788,LastTimestamp:2024-09-04 19:47:15.009249604 +0000 UTC m=+0.252888788,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4054.1.0-a-2707fc1066,}" Sep 4 19:47:15.021348 kubelet[2507]: I0904 19:47:15.021335 2507 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 19:47:15.021852 kubelet[2507]: I0904 19:47:15.021840 2507 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 19:47:15.021888 kubelet[2507]: I0904 19:47:15.021859 2507 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 19:47:15.021888 kubelet[2507]: I0904 19:47:15.021871 2507 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 19:47:15.021920 kubelet[2507]: E0904 19:47:15.021898 2507 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 19:47:15.022109 kubelet[2507]: W0904 19:47:15.022095 2507 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://147.75.90.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.143:6443: connect: connection refused Sep 4 19:47:15.022151 kubelet[2507]: E0904 19:47:15.022117 2507 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.90.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.143:6443: connect: connection refused Sep 4 19:47:15.035832 kubelet[2507]: I0904 19:47:15.035808 2507 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 19:47:15.035832 kubelet[2507]: I0904 19:47:15.035832 2507 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 19:47:15.035935 kubelet[2507]: I0904 19:47:15.035858 2507 state_mem.go:36] "Initialized new in-memory state store" Sep 4 19:47:15.036805 kubelet[2507]: I0904 19:47:15.036769 2507 policy_none.go:49] "None policy: Start" Sep 4 19:47:15.037020 kubelet[2507]: I0904 19:47:15.037013 2507 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 19:47:15.037044 kubelet[2507]: I0904 19:47:15.037024 2507 state_mem.go:35] "Initializing new in-memory state store" Sep 4 19:47:15.040545 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
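All of the "dial tcp 147.75.90.143:6443: connect: connection refused" reflector and event errors above are the normal bootstrap race: kubelet is running before the kube-apiserver static pod it is about to create, and it simply retries. A couple of hedged spot checks (curl/crictl availability assumed; /healthz may require credentials depending on apiserver flags):

    # Expect connection refused until the kube-apiserver static pod is running.
    curl -sk https://147.75.90.143:6443/healthz ; echo

    # Watch the control-plane containers come up through the CRI plugin.
    crictl ps -a --name kube-apiserver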
Sep 4 19:47:15.072596 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 19:47:15.084260 kubelet[2507]: E0904 19:47:15.084176 2507 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.90.143:6443/api/v1/namespaces/default/events\": dial tcp 147.75.90.143:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4054.1.0-a-2707fc1066.17f22238e60a0144 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4054.1.0-a-2707fc1066,UID:ci-4054.1.0-a-2707fc1066,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4054.1.0-a-2707fc1066,},FirstTimestamp:2024-09-04 19:47:15.009249604 +0000 UTC m=+0.252888788,LastTimestamp:2024-09-04 19:47:15.009249604 +0000 UTC m=+0.252888788,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4054.1.0-a-2707fc1066,}" Sep 4 19:47:15.095955 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 19:47:15.099064 kubelet[2507]: I0904 19:47:15.099020 2507 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 19:47:15.099594 kubelet[2507]: I0904 19:47:15.099555 2507 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 19:47:15.101774 kubelet[2507]: E0904 19:47:15.101687 2507 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4054.1.0-a-2707fc1066\" not found" Sep 4 19:47:15.117829 kubelet[2507]: I0904 19:47:15.117748 2507 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:15.118448 kubelet[2507]: E0904 19:47:15.118371 2507 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.90.143:6443/api/v1/nodes\": dial tcp 147.75.90.143:6443: connect: connection refused" node="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:15.122574 kubelet[2507]: I0904 19:47:15.122487 2507 topology_manager.go:215] "Topology Admit Handler" podUID="0d0ec3d474cabad2524b342d04a9b151" podNamespace="kube-system" podName="kube-apiserver-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:15.126623 kubelet[2507]: I0904 19:47:15.126538 2507 topology_manager.go:215] "Topology Admit Handler" podUID="0256a440975e2cd9e8932d68bd7e3371" podNamespace="kube-system" podName="kube-controller-manager-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:15.130594 kubelet[2507]: I0904 19:47:15.130518 2507 topology_manager.go:215] "Topology Admit Handler" podUID="fd44676a2633826b47171625fd6527a8" podNamespace="kube-system" podName="kube-scheduler-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:15.143883 systemd[1]: Created slice kubepods-burstable-pod0d0ec3d474cabad2524b342d04a9b151.slice - libcontainer container kubepods-burstable-pod0d0ec3d474cabad2524b342d04a9b151.slice. Sep 4 19:47:15.181432 systemd[1]: Created slice kubepods-burstable-pod0256a440975e2cd9e8932d68bd7e3371.slice - libcontainer container kubepods-burstable-pod0256a440975e2cd9e8932d68bd7e3371.slice. Sep 4 19:47:15.190149 systemd[1]: Created slice kubepods-burstable-podfd44676a2633826b47171625fd6527a8.slice - libcontainer container kubepods-burstable-podfd44676a2633826b47171625fd6527a8.slice. 
Sep 4 19:47:15.215428 kubelet[2507]: E0904 19:47:15.215315 2507 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-2707fc1066?timeout=10s\": dial tcp 147.75.90.143:6443: connect: connection refused" interval="400ms" Sep 4 19:47:15.315655 kubelet[2507]: I0904 19:47:15.315411 2507 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd44676a2633826b47171625fd6527a8-kubeconfig\") pod \"kube-scheduler-ci-4054.1.0-a-2707fc1066\" (UID: \"fd44676a2633826b47171625fd6527a8\") " pod="kube-system/kube-scheduler-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:15.315655 kubelet[2507]: I0904 19:47:15.315529 2507 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d0ec3d474cabad2524b342d04a9b151-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4054.1.0-a-2707fc1066\" (UID: \"0d0ec3d474cabad2524b342d04a9b151\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:15.315977 kubelet[2507]: I0904 19:47:15.315666 2507 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0256a440975e2cd9e8932d68bd7e3371-ca-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-2707fc1066\" (UID: \"0256a440975e2cd9e8932d68bd7e3371\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:15.315977 kubelet[2507]: I0904 19:47:15.315775 2507 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d0ec3d474cabad2524b342d04a9b151-ca-certs\") pod \"kube-apiserver-ci-4054.1.0-a-2707fc1066\" (UID: \"0d0ec3d474cabad2524b342d04a9b151\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:15.315977 kubelet[2507]: I0904 19:47:15.315842 2507 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d0ec3d474cabad2524b342d04a9b151-k8s-certs\") pod \"kube-apiserver-ci-4054.1.0-a-2707fc1066\" (UID: \"0d0ec3d474cabad2524b342d04a9b151\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:15.315977 kubelet[2507]: I0904 19:47:15.315906 2507 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0256a440975e2cd9e8932d68bd7e3371-flexvolume-dir\") pod \"kube-controller-manager-ci-4054.1.0-a-2707fc1066\" (UID: \"0256a440975e2cd9e8932d68bd7e3371\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:15.316441 kubelet[2507]: I0904 19:47:15.316035 2507 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0256a440975e2cd9e8932d68bd7e3371-k8s-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-2707fc1066\" (UID: \"0256a440975e2cd9e8932d68bd7e3371\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:15.316441 kubelet[2507]: I0904 19:47:15.316145 2507 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/0256a440975e2cd9e8932d68bd7e3371-kubeconfig\") pod \"kube-controller-manager-ci-4054.1.0-a-2707fc1066\" (UID: \"0256a440975e2cd9e8932d68bd7e3371\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:15.316441 kubelet[2507]: I0904 19:47:15.316262 2507 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0256a440975e2cd9e8932d68bd7e3371-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4054.1.0-a-2707fc1066\" (UID: \"0256a440975e2cd9e8932d68bd7e3371\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:15.321784 kubelet[2507]: I0904 19:47:15.321746 2507 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:15.322010 kubelet[2507]: E0904 19:47:15.321972 2507 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.90.143:6443/api/v1/nodes\": dial tcp 147.75.90.143:6443: connect: connection refused" node="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:15.477323 containerd[1543]: time="2024-09-04T19:47:15.475965604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4054.1.0-a-2707fc1066,Uid:0d0ec3d474cabad2524b342d04a9b151,Namespace:kube-system,Attempt:0,}" Sep 4 19:47:15.487427 containerd[1543]: time="2024-09-04T19:47:15.487305431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4054.1.0-a-2707fc1066,Uid:0256a440975e2cd9e8932d68bd7e3371,Namespace:kube-system,Attempt:0,}" Sep 4 19:47:15.495742 containerd[1543]: time="2024-09-04T19:47:15.495697858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4054.1.0-a-2707fc1066,Uid:fd44676a2633826b47171625fd6527a8,Namespace:kube-system,Attempt:0,}" Sep 4 19:47:15.616830 kubelet[2507]: E0904 19:47:15.616776 2507 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-2707fc1066?timeout=10s\": dial tcp 147.75.90.143:6443: connect: connection refused" interval="800ms" Sep 4 19:47:15.726903 kubelet[2507]: I0904 19:47:15.726785 2507 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:15.727586 kubelet[2507]: E0904 19:47:15.727505 2507 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.90.143:6443/api/v1/nodes\": dial tcp 147.75.90.143:6443: connect: connection refused" node="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:15.944860 kubelet[2507]: W0904 19:47:15.944755 2507 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://147.75.90.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.143:6443: connect: connection refused Sep 4 19:47:15.944860 kubelet[2507]: E0904 19:47:15.944794 2507 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.90.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.143:6443: connect: connection refused Sep 4 19:47:15.969274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1362294482.mount: Deactivated successfully. 
Sep 4 19:47:15.970824 containerd[1543]: time="2024-09-04T19:47:15.970771887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 19:47:15.971028 containerd[1543]: time="2024-09-04T19:47:15.971006171Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 4 19:47:15.971596 containerd[1543]: time="2024-09-04T19:47:15.971581742Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 19:47:15.971991 containerd[1543]: time="2024-09-04T19:47:15.971979009Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 19:47:15.972187 containerd[1543]: time="2024-09-04T19:47:15.972171388Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 19:47:15.972763 containerd[1543]: time="2024-09-04T19:47:15.972750617Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 19:47:15.972873 containerd[1543]: time="2024-09-04T19:47:15.972854835Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 19:47:15.974610 containerd[1543]: time="2024-09-04T19:47:15.974554416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 19:47:15.975946 containerd[1543]: time="2024-09-04T19:47:15.975905411Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 499.72495ms" Sep 4 19:47:15.976737 containerd[1543]: time="2024-09-04T19:47:15.976703289Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 480.97809ms" Sep 4 19:47:15.977940 containerd[1543]: time="2024-09-04T19:47:15.977906909Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 490.472826ms" Sep 4 19:47:16.108049 containerd[1543]: time="2024-09-04T19:47:16.108002304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 19:47:16.108049 containerd[1543]: time="2024-09-04T19:47:16.108034397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 19:47:16.108049 containerd[1543]: time="2024-09-04T19:47:16.107831955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 19:47:16.108049 containerd[1543]: time="2024-09-04T19:47:16.108014066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 19:47:16.108049 containerd[1543]: time="2024-09-04T19:47:16.108045899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 19:47:16.108049 containerd[1543]: time="2024-09-04T19:47:16.108045702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:47:16.108049 containerd[1543]: time="2024-09-04T19:47:16.108050151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 19:47:16.108231 containerd[1543]: time="2024-09-04T19:47:16.108058346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:47:16.108231 containerd[1543]: time="2024-09-04T19:47:16.108063662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:47:16.108231 containerd[1543]: time="2024-09-04T19:47:16.108098616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:47:16.108231 containerd[1543]: time="2024-09-04T19:47:16.108113478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:47:16.108231 containerd[1543]: time="2024-09-04T19:47:16.108110160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:47:16.135503 systemd[1]: Started cri-containerd-1bfe1cf55b1596ef5d9902a52e3ce41cb35fc9b5a88969967c90ed645e0e6de3.scope - libcontainer container 1bfe1cf55b1596ef5d9902a52e3ce41cb35fc9b5a88969967c90ed645e0e6de3. Sep 4 19:47:16.136343 systemd[1]: Started cri-containerd-22b717120657a79c1244c8b4a4e60e1369a236ed9227a9fcfb8daef8b73e3ceb.scope - libcontainer container 22b717120657a79c1244c8b4a4e60e1369a236ed9227a9fcfb8daef8b73e3ceb. Sep 4 19:47:16.137269 systemd[1]: Started cri-containerd-3ccb477e5d61f505ad955aba0ca03fcb8a5207f15f79783edc0ac7245a997140.scope - libcontainer container 3ccb477e5d61f505ad955aba0ca03fcb8a5207f15f79783edc0ac7245a997140. 
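Each of the three sandboxes above reports how long its pause-image pull took ("... in 499.72495ms", "480.97809ms", "490.472826ms"). A throwaway sketch that pulls those durations out of journal lines shaped like the containerd entries above; the sample strings are trimmed stand-ins, not exact copies:

    import re

    # Lines in the same shape as the containerd "Pulled image" entries above (trimmed).
    LINES = [
        'Pulled image "registry.k8s.io/pause:3.8" ... in 499.72495ms',
        'Pulled image "registry.k8s.io/pause:3.8" ... in 480.97809ms',
        'Pulled image "registry.k8s.io/pause:3.8" ... in 490.472826ms',
    ]

    DURATION = re.compile(r'in (\d+(?:\.\d+)?)(ms|s)"?\s*$')

    def to_seconds(value: str, unit: str) -> float:
        return float(value) / 1000.0 if unit == "ms" else float(value)

    durations = []
    for line in LINES:
        m = DURATION.search(line)
        if m:
            durations.append(to_seconds(*m.groups()))

    print(f"{len(durations)} pulls, slowest {max(durations) * 1000:.1f}ms")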
Sep 4 19:47:16.166047 containerd[1543]: time="2024-09-04T19:47:16.166017332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4054.1.0-a-2707fc1066,Uid:fd44676a2633826b47171625fd6527a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bfe1cf55b1596ef5d9902a52e3ce41cb35fc9b5a88969967c90ed645e0e6de3\"" Sep 4 19:47:16.166226 containerd[1543]: time="2024-09-04T19:47:16.166194269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4054.1.0-a-2707fc1066,Uid:0256a440975e2cd9e8932d68bd7e3371,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ccb477e5d61f505ad955aba0ca03fcb8a5207f15f79783edc0ac7245a997140\"" Sep 4 19:47:16.166345 containerd[1543]: time="2024-09-04T19:47:16.166323229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4054.1.0-a-2707fc1066,Uid:0d0ec3d474cabad2524b342d04a9b151,Namespace:kube-system,Attempt:0,} returns sandbox id \"22b717120657a79c1244c8b4a4e60e1369a236ed9227a9fcfb8daef8b73e3ceb\"" Sep 4 19:47:16.168393 containerd[1543]: time="2024-09-04T19:47:16.168375381Z" level=info msg="CreateContainer within sandbox \"3ccb477e5d61f505ad955aba0ca03fcb8a5207f15f79783edc0ac7245a997140\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 19:47:16.168445 containerd[1543]: time="2024-09-04T19:47:16.168430422Z" level=info msg="CreateContainer within sandbox \"1bfe1cf55b1596ef5d9902a52e3ce41cb35fc9b5a88969967c90ed645e0e6de3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 19:47:16.168505 containerd[1543]: time="2024-09-04T19:47:16.168489617Z" level=info msg="CreateContainer within sandbox \"22b717120657a79c1244c8b4a4e60e1369a236ed9227a9fcfb8daef8b73e3ceb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 19:47:16.175271 containerd[1543]: time="2024-09-04T19:47:16.175257749Z" level=info msg="CreateContainer within sandbox \"22b717120657a79c1244c8b4a4e60e1369a236ed9227a9fcfb8daef8b73e3ceb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a7f0754b4a4d648751d530cff0c10a6a8515cee955c308041aaf98d9be6f713a\"" Sep 4 19:47:16.175546 containerd[1543]: time="2024-09-04T19:47:16.175534557Z" level=info msg="StartContainer for \"a7f0754b4a4d648751d530cff0c10a6a8515cee955c308041aaf98d9be6f713a\"" Sep 4 19:47:16.176736 containerd[1543]: time="2024-09-04T19:47:16.176699830Z" level=info msg="CreateContainer within sandbox \"1bfe1cf55b1596ef5d9902a52e3ce41cb35fc9b5a88969967c90ed645e0e6de3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a1c5ea804350677dd4a4082c40646341f99e02d55bb52083945325babdb46e43\"" Sep 4 19:47:16.176882 containerd[1543]: time="2024-09-04T19:47:16.176871666Z" level=info msg="StartContainer for \"a1c5ea804350677dd4a4082c40646341f99e02d55bb52083945325babdb46e43\"" Sep 4 19:47:16.177035 containerd[1543]: time="2024-09-04T19:47:16.177024288Z" level=info msg="CreateContainer within sandbox \"3ccb477e5d61f505ad955aba0ca03fcb8a5207f15f79783edc0ac7245a997140\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"954f9bec10e747cff9a4dcd7701b9c5e64176f318d5f3404e049874b2555376b\"" Sep 4 19:47:16.177172 containerd[1543]: time="2024-09-04T19:47:16.177163139Z" level=info msg="StartContainer for \"954f9bec10e747cff9a4dcd7701b9c5e64176f318d5f3404e049874b2555376b\"" Sep 4 19:47:16.204556 systemd[1]: Started cri-containerd-a7f0754b4a4d648751d530cff0c10a6a8515cee955c308041aaf98d9be6f713a.scope - libcontainer container 
a7f0754b4a4d648751d530cff0c10a6a8515cee955c308041aaf98d9be6f713a. Sep 4 19:47:16.215376 systemd[1]: Started cri-containerd-954f9bec10e747cff9a4dcd7701b9c5e64176f318d5f3404e049874b2555376b.scope - libcontainer container 954f9bec10e747cff9a4dcd7701b9c5e64176f318d5f3404e049874b2555376b. Sep 4 19:47:16.218308 systemd[1]: Started cri-containerd-a1c5ea804350677dd4a4082c40646341f99e02d55bb52083945325babdb46e43.scope - libcontainer container a1c5ea804350677dd4a4082c40646341f99e02d55bb52083945325babdb46e43. Sep 4 19:47:16.233376 kubelet[2507]: W0904 19:47:16.233243 2507 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://147.75.90.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.143:6443: connect: connection refused Sep 4 19:47:16.233579 kubelet[2507]: E0904 19:47:16.233417 2507 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.90.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.143:6443: connect: connection refused Sep 4 19:47:16.277554 containerd[1543]: time="2024-09-04T19:47:16.277523919Z" level=info msg="StartContainer for \"a7f0754b4a4d648751d530cff0c10a6a8515cee955c308041aaf98d9be6f713a\" returns successfully" Sep 4 19:47:16.280591 containerd[1543]: time="2024-09-04T19:47:16.280554342Z" level=info msg="StartContainer for \"954f9bec10e747cff9a4dcd7701b9c5e64176f318d5f3404e049874b2555376b\" returns successfully" Sep 4 19:47:16.281186 containerd[1543]: time="2024-09-04T19:47:16.281171490Z" level=info msg="StartContainer for \"a1c5ea804350677dd4a4082c40646341f99e02d55bb52083945325babdb46e43\" returns successfully" Sep 4 19:47:16.528965 kubelet[2507]: I0904 19:47:16.528882 2507 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:16.821927 kubelet[2507]: E0904 19:47:16.821869 2507 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4054.1.0-a-2707fc1066\" not found" node="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:16.929526 kubelet[2507]: I0904 19:47:16.929319 2507 kubelet_node_status.go:76] "Successfully registered node" node="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:17.005577 kubelet[2507]: I0904 19:47:17.005538 2507 apiserver.go:52] "Watching apiserver" Sep 4 19:47:17.013929 kubelet[2507]: I0904 19:47:17.013909 2507 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 19:47:17.029328 kubelet[2507]: E0904 19:47:17.029309 2507 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4054.1.0-a-2707fc1066\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:17.029418 kubelet[2507]: E0904 19:47:17.029407 2507 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4054.1.0-a-2707fc1066\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:17.029451 kubelet[2507]: E0904 19:47:17.029429 2507 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4054.1.0-a-2707fc1066\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:18.036773 kubelet[2507]: W0904 
19:47:18.036712 2507 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 19:47:19.900496 systemd[1]: Reloading requested from client PID 2822 ('systemctl') (unit session-9.scope)... Sep 4 19:47:19.900502 systemd[1]: Reloading... Sep 4 19:47:19.939259 zram_generator::config[2859]: No configuration found. Sep 4 19:47:20.003378 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 19:47:20.072261 systemd[1]: Reloading finished in 171 ms. Sep 4 19:47:20.097386 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 19:47:20.102949 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 19:47:20.103055 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 19:47:20.115642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 19:47:20.337271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 19:47:20.339777 (kubelet)[2920]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 19:47:20.367067 kubelet[2920]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 19:47:20.367067 kubelet[2920]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 19:47:20.367067 kubelet[2920]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 19:47:20.367341 kubelet[2920]: I0904 19:47:20.367085 2920 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 19:47:20.369875 kubelet[2920]: I0904 19:47:20.369860 2920 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 19:47:20.369875 kubelet[2920]: I0904 19:47:20.369876 2920 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 19:47:20.370036 kubelet[2920]: I0904 19:47:20.370026 2920 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 19:47:20.371601 kubelet[2920]: I0904 19:47:20.371553 2920 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 19:47:20.372755 kubelet[2920]: I0904 19:47:20.372742 2920 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 19:47:20.381695 kubelet[2920]: I0904 19:47:20.381684 2920 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 19:47:20.381824 kubelet[2920]: I0904 19:47:20.381816 2920 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 19:47:20.381938 kubelet[2920]: I0904 19:47:20.381930 2920 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 19:47:20.381998 kubelet[2920]: I0904 19:47:20.381946 2920 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 19:47:20.381998 kubelet[2920]: I0904 19:47:20.381954 2920 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 19:47:20.381998 kubelet[2920]: I0904 19:47:20.381973 2920 state_mem.go:36] "Initialized new in-memory state store" Sep 4 19:47:20.382064 kubelet[2920]: I0904 19:47:20.382027 2920 kubelet.go:396] "Attempting to sync node with API server" Sep 4 19:47:20.382064 kubelet[2920]: I0904 19:47:20.382037 2920 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 19:47:20.382064 kubelet[2920]: I0904 19:47:20.382053 2920 kubelet.go:312] "Adding apiserver pod source" Sep 4 19:47:20.382064 kubelet[2920]: I0904 19:47:20.382065 2920 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 19:47:20.382459 kubelet[2920]: I0904 19:47:20.382448 2920 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 19:47:20.383086 kubelet[2920]: I0904 19:47:20.382902 2920 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 19:47:20.383451 kubelet[2920]: I0904 19:47:20.383434 2920 server.go:1256] "Started kubelet" Sep 4 19:47:20.383509 kubelet[2920]: I0904 19:47:20.383474 2920 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 19:47:20.383805 kubelet[2920]: I0904 19:47:20.383509 2920 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 19:47:20.383917 kubelet[2920]: I0904 19:47:20.383907 2920 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 19:47:20.384403 kubelet[2920]: I0904 
19:47:20.384394 2920 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 19:47:20.384486 kubelet[2920]: I0904 19:47:20.384428 2920 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 19:47:20.384486 kubelet[2920]: I0904 19:47:20.384447 2920 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 19:47:20.384545 kubelet[2920]: I0904 19:47:20.384518 2920 server.go:461] "Adding debug handlers to kubelet server" Sep 4 19:47:20.384545 kubelet[2920]: I0904 19:47:20.384540 2920 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 19:47:20.384771 kubelet[2920]: I0904 19:47:20.384762 2920 factory.go:221] Registration of the systemd container factory successfully Sep 4 19:47:20.384838 kubelet[2920]: I0904 19:47:20.384822 2920 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 19:47:20.384889 kubelet[2920]: E0904 19:47:20.384880 2920 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 19:47:20.385322 kubelet[2920]: I0904 19:47:20.385313 2920 factory.go:221] Registration of the containerd container factory successfully Sep 4 19:47:20.389688 kubelet[2920]: I0904 19:47:20.389673 2920 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 19:47:20.390300 kubelet[2920]: I0904 19:47:20.390288 2920 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 19:47:20.390355 kubelet[2920]: I0904 19:47:20.390306 2920 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 19:47:20.390355 kubelet[2920]: I0904 19:47:20.390322 2920 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 19:47:20.390391 kubelet[2920]: E0904 19:47:20.390362 2920 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 19:47:20.400945 kubelet[2920]: I0904 19:47:20.400927 2920 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 19:47:20.400945 kubelet[2920]: I0904 19:47:20.400941 2920 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 19:47:20.401057 kubelet[2920]: I0904 19:47:20.400954 2920 state_mem.go:36] "Initialized new in-memory state store" Sep 4 19:47:20.401093 kubelet[2920]: I0904 19:47:20.401062 2920 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 19:47:20.401093 kubelet[2920]: I0904 19:47:20.401081 2920 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 19:47:20.401093 kubelet[2920]: I0904 19:47:20.401088 2920 policy_none.go:49] "None policy: Start" Sep 4 19:47:20.401369 kubelet[2920]: I0904 19:47:20.401361 2920 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 19:47:20.401423 kubelet[2920]: I0904 19:47:20.401372 2920 state_mem.go:35] "Initializing new in-memory state store" Sep 4 19:47:20.401498 kubelet[2920]: I0904 19:47:20.401491 2920 state_mem.go:75] "Updated machine memory state" Sep 4 19:47:20.403469 kubelet[2920]: I0904 19:47:20.403461 2920 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 19:47:20.403597 kubelet[2920]: I0904 19:47:20.403590 2920 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 19:47:20.486155 kubelet[2920]: I0904 19:47:20.486138 2920 
kubelet_node_status.go:73] "Attempting to register node" node="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:20.489434 kubelet[2920]: I0904 19:47:20.489421 2920 kubelet_node_status.go:112] "Node was previously registered" node="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:20.489480 kubelet[2920]: I0904 19:47:20.489459 2920 kubelet_node_status.go:76] "Successfully registered node" node="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:20.490580 kubelet[2920]: I0904 19:47:20.490542 2920 topology_manager.go:215] "Topology Admit Handler" podUID="0d0ec3d474cabad2524b342d04a9b151" podNamespace="kube-system" podName="kube-apiserver-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:20.490611 kubelet[2920]: I0904 19:47:20.490593 2920 topology_manager.go:215] "Topology Admit Handler" podUID="0256a440975e2cd9e8932d68bd7e3371" podNamespace="kube-system" podName="kube-controller-manager-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:20.490636 kubelet[2920]: I0904 19:47:20.490624 2920 topology_manager.go:215] "Topology Admit Handler" podUID="fd44676a2633826b47171625fd6527a8" podNamespace="kube-system" podName="kube-scheduler-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:20.493191 kubelet[2920]: W0904 19:47:20.493181 2920 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 19:47:20.493257 kubelet[2920]: W0904 19:47:20.493196 2920 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 19:47:20.493257 kubelet[2920]: E0904 19:47:20.493241 2920 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4054.1.0-a-2707fc1066\" already exists" pod="kube-system/kube-apiserver-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:20.493257 kubelet[2920]: W0904 19:47:20.493253 2920 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 19:47:20.686335 kubelet[2920]: I0904 19:47:20.686313 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0256a440975e2cd9e8932d68bd7e3371-ca-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-2707fc1066\" (UID: \"0256a440975e2cd9e8932d68bd7e3371\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:20.686335 kubelet[2920]: I0904 19:47:20.686342 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0256a440975e2cd9e8932d68bd7e3371-flexvolume-dir\") pod \"kube-controller-manager-ci-4054.1.0-a-2707fc1066\" (UID: \"0256a440975e2cd9e8932d68bd7e3371\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:20.686457 kubelet[2920]: I0904 19:47:20.686373 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0256a440975e2cd9e8932d68bd7e3371-k8s-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-2707fc1066\" (UID: \"0256a440975e2cd9e8932d68bd7e3371\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:20.686457 kubelet[2920]: I0904 19:47:20.686411 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/0256a440975e2cd9e8932d68bd7e3371-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4054.1.0-a-2707fc1066\" (UID: \"0256a440975e2cd9e8932d68bd7e3371\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:20.686457 kubelet[2920]: I0904 19:47:20.686431 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d0ec3d474cabad2524b342d04a9b151-ca-certs\") pod \"kube-apiserver-ci-4054.1.0-a-2707fc1066\" (UID: \"0d0ec3d474cabad2524b342d04a9b151\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:20.686542 kubelet[2920]: I0904 19:47:20.686476 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d0ec3d474cabad2524b342d04a9b151-k8s-certs\") pod \"kube-apiserver-ci-4054.1.0-a-2707fc1066\" (UID: \"0d0ec3d474cabad2524b342d04a9b151\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:20.686542 kubelet[2920]: I0904 19:47:20.686499 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd44676a2633826b47171625fd6527a8-kubeconfig\") pod \"kube-scheduler-ci-4054.1.0-a-2707fc1066\" (UID: \"fd44676a2633826b47171625fd6527a8\") " pod="kube-system/kube-scheduler-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:20.686542 kubelet[2920]: I0904 19:47:20.686521 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d0ec3d474cabad2524b342d04a9b151-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4054.1.0-a-2707fc1066\" (UID: \"0d0ec3d474cabad2524b342d04a9b151\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:20.686598 kubelet[2920]: I0904 19:47:20.686545 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0256a440975e2cd9e8932d68bd7e3371-kubeconfig\") pod \"kube-controller-manager-ci-4054.1.0-a-2707fc1066\" (UID: \"0256a440975e2cd9e8932d68bd7e3371\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:21.383451 kubelet[2920]: I0904 19:47:21.383396 2920 apiserver.go:52] "Watching apiserver" Sep 4 19:47:21.384727 kubelet[2920]: I0904 19:47:21.384715 2920 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 19:47:21.396399 kubelet[2920]: W0904 19:47:21.396370 2920 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 19:47:21.396399 kubelet[2920]: E0904 19:47:21.396400 2920 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4054.1.0-a-2707fc1066\" already exists" pod="kube-system/kube-controller-manager-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:21.396874 kubelet[2920]: W0904 19:47:21.396835 2920 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 19:47:21.396874 kubelet[2920]: W0904 19:47:21.396837 2920 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] 
Sep 4 19:47:21.396874 kubelet[2920]: E0904 19:47:21.396863 2920 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4054.1.0-a-2707fc1066\" already exists" pod="kube-system/kube-apiserver-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:21.396874 kubelet[2920]: E0904 19:47:21.396865 2920 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4054.1.0-a-2707fc1066\" already exists" pod="kube-system/kube-scheduler-ci-4054.1.0-a-2707fc1066" Sep 4 19:47:21.403446 kubelet[2920]: I0904 19:47:21.403434 2920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4054.1.0-a-2707fc1066" podStartSLOduration=1.403411531 podStartE2EDuration="1.403411531s" podCreationTimestamp="2024-09-04 19:47:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 19:47:21.402439308 +0000 UTC m=+1.060505266" watchObservedRunningTime="2024-09-04 19:47:21.403411531 +0000 UTC m=+1.061477488" Sep 4 19:47:21.408115 kubelet[2920]: I0904 19:47:21.408074 2920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4054.1.0-a-2707fc1066" podStartSLOduration=1.408054055 podStartE2EDuration="1.408054055s" podCreationTimestamp="2024-09-04 19:47:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 19:47:21.407948078 +0000 UTC m=+1.066014038" watchObservedRunningTime="2024-09-04 19:47:21.408054055 +0000 UTC m=+1.066120014" Sep 4 19:47:21.408115 kubelet[2920]: I0904 19:47:21.408107 2920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4054.1.0-a-2707fc1066" podStartSLOduration=3.4080961 podStartE2EDuration="3.4080961s" podCreationTimestamp="2024-09-04 19:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 19:47:21.405472267 +0000 UTC m=+1.063538226" watchObservedRunningTime="2024-09-04 19:47:21.4080961 +0000 UTC m=+1.066162055" Sep 4 19:47:23.838687 sudo[1761]: pam_unix(sudo:session): session closed for user root Sep 4 19:47:23.839839 sshd[1757]: pam_unix(sshd:session): session closed for user core Sep 4 19:47:23.842056 systemd[1]: sshd@6-147.75.90.143:22-139.178.89.65:33270.service: Deactivated successfully. Sep 4 19:47:23.843304 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 19:47:23.843425 systemd[1]: session-9.scope: Consumed 3.087s CPU time, 147.7M memory peak, 0B memory swap peak. Sep 4 19:47:23.844268 systemd-logind[1525]: Session 9 logged out. Waiting for processes to exit. Sep 4 19:47:23.845181 systemd-logind[1525]: Removed session 9. Sep 4 19:47:32.524142 kubelet[2920]: I0904 19:47:32.524121 2920 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 19:47:32.524538 containerd[1543]: time="2024-09-04T19:47:32.524398892Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
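The pod_startup_latency_tracker entries above report podStartE2EDuration alongside the pod creation and observed-running timestamps. A rough sanity check of the kube-controller-manager entry, with the creation timestamp padded to a single parseable format; the small gap to the reported 1.403s is expected since the tracker samples its own clock slightly later:

    from datetime import datetime

    FMT = "%Y-%m-%d %H:%M:%S.%f %z"

    # Values taken from the kube-controller-manager entry above (creation time padded).
    created = datetime.strptime("2024-09-04 19:47:20.000000 +0000", FMT)
    running = datetime.strptime("2024-09-04 19:47:21.402439 +0000", FMT)

    elapsed = (running - created).total_seconds()
    print(f"creation -> running: {elapsed:.3f}s (log reports podStartE2EDuration=1.403s)")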
Sep 4 19:47:32.524725 kubelet[2920]: I0904 19:47:32.524539 2920 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 19:47:33.397887 kubelet[2920]: I0904 19:47:33.397821 2920 topology_manager.go:215] "Topology Admit Handler" podUID="023cf723-2356-45f3-9ce1-a7fe915cf22a" podNamespace="kube-system" podName="kube-proxy-bwhsm" Sep 4 19:47:33.416836 systemd[1]: Created slice kubepods-besteffort-pod023cf723_2356_45f3_9ce1_a7fe915cf22a.slice - libcontainer container kubepods-besteffort-pod023cf723_2356_45f3_9ce1_a7fe915cf22a.slice. Sep 4 19:47:33.484969 kubelet[2920]: I0904 19:47:33.484892 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/023cf723-2356-45f3-9ce1-a7fe915cf22a-kube-proxy\") pod \"kube-proxy-bwhsm\" (UID: \"023cf723-2356-45f3-9ce1-a7fe915cf22a\") " pod="kube-system/kube-proxy-bwhsm" Sep 4 19:47:33.485300 kubelet[2920]: I0904 19:47:33.485017 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqvnl\" (UniqueName: \"kubernetes.io/projected/023cf723-2356-45f3-9ce1-a7fe915cf22a-kube-api-access-kqvnl\") pod \"kube-proxy-bwhsm\" (UID: \"023cf723-2356-45f3-9ce1-a7fe915cf22a\") " pod="kube-system/kube-proxy-bwhsm" Sep 4 19:47:33.485300 kubelet[2920]: I0904 19:47:33.485266 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/023cf723-2356-45f3-9ce1-a7fe915cf22a-xtables-lock\") pod \"kube-proxy-bwhsm\" (UID: \"023cf723-2356-45f3-9ce1-a7fe915cf22a\") " pod="kube-system/kube-proxy-bwhsm" Sep 4 19:47:33.485651 kubelet[2920]: I0904 19:47:33.485434 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/023cf723-2356-45f3-9ce1-a7fe915cf22a-lib-modules\") pod \"kube-proxy-bwhsm\" (UID: \"023cf723-2356-45f3-9ce1-a7fe915cf22a\") " pod="kube-system/kube-proxy-bwhsm" Sep 4 19:47:33.571091 kubelet[2920]: I0904 19:47:33.571024 2920 topology_manager.go:215] "Topology Admit Handler" podUID="0e9baf8a-af69-4f30-a421-553e2c70b84b" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-lllmf" Sep 4 19:47:33.586155 systemd[1]: Created slice kubepods-besteffort-pod0e9baf8a_af69_4f30_a421_553e2c70b84b.slice - libcontainer container kubepods-besteffort-pod0e9baf8a_af69_4f30_a421_553e2c70b84b.slice. 
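Note how the pod UID 023cf723-2356-45f3-9ce1-a7fe915cf22a becomes the slice name kubepods-besteffort-pod023cf723_2356_45f3_9ce1_a7fe915cf22a.slice: dashes are swapped for underscores and the QoS class is folded into the prefix. A small sketch of that naming rule, reproducing only what the "Created slice" lines above show (the burstable slices earlier in the log follow the same pattern):

    def pod_slice_name(uid: str, qos: str = "besteffort") -> str:
        """Build the systemd slice name used for a pod's cgroup, as seen in the log."""
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    if __name__ == "__main__":
        name = pod_slice_name("023cf723-2356-45f3-9ce1-a7fe915cf22a")
        # Matches the "Created slice kubepods-besteffort-pod023cf723_..." entry above.
        assert name == "kubepods-besteffort-pod023cf723_2356_45f3_9ce1_a7fe915cf22a.slice"
        print(name)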
Sep 4 19:47:33.686705 kubelet[2920]: I0904 19:47:33.686487 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0e9baf8a-af69-4f30-a421-553e2c70b84b-var-lib-calico\") pod \"tigera-operator-5d56685c77-lllmf\" (UID: \"0e9baf8a-af69-4f30-a421-553e2c70b84b\") " pod="tigera-operator/tigera-operator-5d56685c77-lllmf" Sep 4 19:47:33.686705 kubelet[2920]: I0904 19:47:33.686658 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hzhm\" (UniqueName: \"kubernetes.io/projected/0e9baf8a-af69-4f30-a421-553e2c70b84b-kube-api-access-4hzhm\") pod \"tigera-operator-5d56685c77-lllmf\" (UID: \"0e9baf8a-af69-4f30-a421-553e2c70b84b\") " pod="tigera-operator/tigera-operator-5d56685c77-lllmf" Sep 4 19:47:33.729727 containerd[1543]: time="2024-09-04T19:47:33.729640808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bwhsm,Uid:023cf723-2356-45f3-9ce1-a7fe915cf22a,Namespace:kube-system,Attempt:0,}" Sep 4 19:47:33.740379 containerd[1543]: time="2024-09-04T19:47:33.740334363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 19:47:33.740459 containerd[1543]: time="2024-09-04T19:47:33.740372502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 19:47:33.740581 containerd[1543]: time="2024-09-04T19:47:33.740544458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:47:33.740628 containerd[1543]: time="2024-09-04T19:47:33.740617953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:47:33.764463 systemd[1]: Started cri-containerd-2cf67309ce5085dc38aa5d45e06daf3389e750886c6f2b32533fb36ebded187b.scope - libcontainer container 2cf67309ce5085dc38aa5d45e06daf3389e750886c6f2b32533fb36ebded187b. Sep 4 19:47:33.778323 containerd[1543]: time="2024-09-04T19:47:33.778287235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bwhsm,Uid:023cf723-2356-45f3-9ce1-a7fe915cf22a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cf67309ce5085dc38aa5d45e06daf3389e750886c6f2b32533fb36ebded187b\"" Sep 4 19:47:33.780381 containerd[1543]: time="2024-09-04T19:47:33.780359915Z" level=info msg="CreateContainer within sandbox \"2cf67309ce5085dc38aa5d45e06daf3389e750886c6f2b32533fb36ebded187b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 19:47:33.787312 containerd[1543]: time="2024-09-04T19:47:33.787297100Z" level=info msg="CreateContainer within sandbox \"2cf67309ce5085dc38aa5d45e06daf3389e750886c6f2b32533fb36ebded187b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ad95729321514a947bad22e5838f8ebea81d92672066d60604242de5e5960900\"" Sep 4 19:47:33.787654 containerd[1543]: time="2024-09-04T19:47:33.787644322Z" level=info msg="StartContainer for \"ad95729321514a947bad22e5838f8ebea81d92672066d60604242de5e5960900\"" Sep 4 19:47:33.819447 systemd[1]: Started cri-containerd-ad95729321514a947bad22e5838f8ebea81d92672066d60604242de5e5960900.scope - libcontainer container ad95729321514a947bad22e5838f8ebea81d92672066d60604242de5e5960900. 
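For kube-proxy the log walks through the usual CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, then StartContainer reports success. A toy sketch of that ordering, with made-up ids standing in for the long hashes above; this models only the call order, not the real CRI API:

    from dataclasses import dataclass, field

    @dataclass
    class FakeCRI:
        """Toy runtime recording the RunPodSandbox -> CreateContainer -> StartContainer order."""
        calls: list[str] = field(default_factory=list)

        def run_pod_sandbox(self, pod: str) -> str:
            self.calls.append(f"RunPodSandbox({pod})")
            return f"sandbox-for-{pod}"      # stands in for 2cf67309ce50... above

        def create_container(self, sandbox_id: str, name: str) -> str:
            self.calls.append(f"CreateContainer({sandbox_id}, {name})")
            return f"container-{name}"       # stands in for ad9572932151... above

        def start_container(self, container_id: str) -> None:
            self.calls.append(f"StartContainer({container_id})")

    cri = FakeCRI()
    sandbox = cri.run_pod_sandbox("kube-proxy-bwhsm")
    ctr = cri.create_container(sandbox, "kube-proxy")
    cri.start_container(ctr)
    print("\n".join(cri.calls))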
Sep 4 19:47:33.836755 containerd[1543]: time="2024-09-04T19:47:33.836719781Z" level=info msg="StartContainer for \"ad95729321514a947bad22e5838f8ebea81d92672066d60604242de5e5960900\" returns successfully" Sep 4 19:47:33.891900 containerd[1543]: time="2024-09-04T19:47:33.891853649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-lllmf,Uid:0e9baf8a-af69-4f30-a421-553e2c70b84b,Namespace:tigera-operator,Attempt:0,}" Sep 4 19:47:33.902111 containerd[1543]: time="2024-09-04T19:47:33.902062827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 19:47:33.902180 containerd[1543]: time="2024-09-04T19:47:33.902104258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 19:47:33.902328 containerd[1543]: time="2024-09-04T19:47:33.902310986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:47:33.902383 containerd[1543]: time="2024-09-04T19:47:33.902369377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:47:33.928487 systemd[1]: Started cri-containerd-61efac59d8c701bb02e1cb72b0cdea7470649abad15bd8aff0bd4e500a13274c.scope - libcontainer container 61efac59d8c701bb02e1cb72b0cdea7470649abad15bd8aff0bd4e500a13274c. Sep 4 19:47:33.962139 containerd[1543]: time="2024-09-04T19:47:33.962061726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-lllmf,Uid:0e9baf8a-af69-4f30-a421-553e2c70b84b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"61efac59d8c701bb02e1cb72b0cdea7470649abad15bd8aff0bd4e500a13274c\"" Sep 4 19:47:33.963156 containerd[1543]: time="2024-09-04T19:47:33.963139746Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 19:47:34.182683 update_engine[1530]: I0904 19:47:34.182581 1530 update_attempter.cc:509] Updating boot flags... Sep 4 19:47:34.211240 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (3334) Sep 4 19:47:34.236210 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (3338) Sep 4 19:47:34.445536 kubelet[2920]: I0904 19:47:34.445470 2920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bwhsm" podStartSLOduration=1.44537852 podStartE2EDuration="1.44537852s" podCreationTimestamp="2024-09-04 19:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 19:47:34.445290727 +0000 UTC m=+14.103356758" watchObservedRunningTime="2024-09-04 19:47:34.44537852 +0000 UTC m=+14.103444548" Sep 4 19:47:35.463006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount628148917.mount: Deactivated successfully. 
Sep 4 19:47:35.673565 containerd[1543]: time="2024-09-04T19:47:35.673514063Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:35.673775 containerd[1543]: time="2024-09-04T19:47:35.673647004Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136597" Sep 4 19:47:35.674015 containerd[1543]: time="2024-09-04T19:47:35.673974525Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:35.675440 containerd[1543]: time="2024-09-04T19:47:35.675396862Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:35.675799 containerd[1543]: time="2024-09-04T19:47:35.675755311Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 1.712590963s" Sep 4 19:47:35.675799 containerd[1543]: time="2024-09-04T19:47:35.675775827Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Sep 4 19:47:35.676561 containerd[1543]: time="2024-09-04T19:47:35.676547625Z" level=info msg="CreateContainer within sandbox \"61efac59d8c701bb02e1cb72b0cdea7470649abad15bd8aff0bd4e500a13274c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 19:47:35.680277 containerd[1543]: time="2024-09-04T19:47:35.680229080Z" level=info msg="CreateContainer within sandbox \"61efac59d8c701bb02e1cb72b0cdea7470649abad15bd8aff0bd4e500a13274c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8e339b0f3ba3cc80976cedf4c5921de1c0b4fd62024075fb2319a25bdc345b33\"" Sep 4 19:47:35.680521 containerd[1543]: time="2024-09-04T19:47:35.680473945Z" level=info msg="StartContainer for \"8e339b0f3ba3cc80976cedf4c5921de1c0b4fd62024075fb2319a25bdc345b33\"" Sep 4 19:47:35.681188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3702811305.mount: Deactivated successfully. Sep 4 19:47:35.706386 systemd[1]: Started cri-containerd-8e339b0f3ba3cc80976cedf4c5921de1c0b4fd62024075fb2319a25bdc345b33.scope - libcontainer container 8e339b0f3ba3cc80976cedf4c5921de1c0b4fd62024075fb2319a25bdc345b33. 
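The operator image pull above reports 22,136,597 bytes read over a wall-clock time of 1.712590963s. A quick back-of-the-envelope throughput check using exactly those two numbers from the log:

    BYTES_READ = 22_136_597          # "bytes read=22136597" above
    SECONDS = 1.712590963            # "in 1.712590963s" above

    mib_per_s = BYTES_READ / SECONDS / (1024 * 1024)
    print(f"~{mib_per_s:.1f} MiB/s for the quay.io/tigera/operator:v1.34.3 pull")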
Sep 4 19:47:35.718140 containerd[1543]: time="2024-09-04T19:47:35.718052751Z" level=info msg="StartContainer for \"8e339b0f3ba3cc80976cedf4c5921de1c0b4fd62024075fb2319a25bdc345b33\" returns successfully" Sep 4 19:47:36.449666 kubelet[2920]: I0904 19:47:36.449649 2920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-lllmf" podStartSLOduration=1.73651276 podStartE2EDuration="3.449625007s" podCreationTimestamp="2024-09-04 19:47:33 +0000 UTC" firstStartedPulling="2024-09-04 19:47:33.962827139 +0000 UTC m=+13.620893106" lastFinishedPulling="2024-09-04 19:47:35.675939395 +0000 UTC m=+15.334005353" observedRunningTime="2024-09-04 19:47:36.449517345 +0000 UTC m=+16.107583311" watchObservedRunningTime="2024-09-04 19:47:36.449625007 +0000 UTC m=+16.107690970" Sep 4 19:47:38.568616 kubelet[2920]: I0904 19:47:38.568591 2920 topology_manager.go:215] "Topology Admit Handler" podUID="7eb8079e-8474-47b0-bc26-b0dc02e1502c" podNamespace="calico-system" podName="calico-typha-7b744d574-f5r4d" Sep 4 19:47:38.571881 systemd[1]: Created slice kubepods-besteffort-pod7eb8079e_8474_47b0_bc26_b0dc02e1502c.slice - libcontainer container kubepods-besteffort-pod7eb8079e_8474_47b0_bc26_b0dc02e1502c.slice. Sep 4 19:47:38.581894 kubelet[2920]: I0904 19:47:38.581872 2920 topology_manager.go:215] "Topology Admit Handler" podUID="16cc5663-ad5b-499c-8124-c955fcb261ea" podNamespace="calico-system" podName="calico-node-hzqfw" Sep 4 19:47:38.585144 systemd[1]: Created slice kubepods-besteffort-pod16cc5663_ad5b_499c_8124_c955fcb261ea.slice - libcontainer container kubepods-besteffort-pod16cc5663_ad5b_499c_8124_c955fcb261ea.slice. Sep 4 19:47:38.627081 kubelet[2920]: I0904 19:47:38.627025 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16cc5663-ad5b-499c-8124-c955fcb261ea-lib-modules\") pod \"calico-node-hzqfw\" (UID: \"16cc5663-ad5b-499c-8124-c955fcb261ea\") " pod="calico-system/calico-node-hzqfw" Sep 4 19:47:38.627347 kubelet[2920]: I0904 19:47:38.627282 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/16cc5663-ad5b-499c-8124-c955fcb261ea-cni-net-dir\") pod \"calico-node-hzqfw\" (UID: \"16cc5663-ad5b-499c-8124-c955fcb261ea\") " pod="calico-system/calico-node-hzqfw" Sep 4 19:47:38.627504 kubelet[2920]: I0904 19:47:38.627472 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/16cc5663-ad5b-499c-8124-c955fcb261ea-var-lib-calico\") pod \"calico-node-hzqfw\" (UID: \"16cc5663-ad5b-499c-8124-c955fcb261ea\") " pod="calico-system/calico-node-hzqfw" Sep 4 19:47:38.627627 kubelet[2920]: I0904 19:47:38.627551 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/16cc5663-ad5b-499c-8124-c955fcb261ea-flexvol-driver-host\") pod \"calico-node-hzqfw\" (UID: \"16cc5663-ad5b-499c-8124-c955fcb261ea\") " pod="calico-system/calico-node-hzqfw" Sep 4 19:47:38.627764 kubelet[2920]: I0904 19:47:38.627734 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9g9c\" (UniqueName: \"kubernetes.io/projected/16cc5663-ad5b-499c-8124-c955fcb261ea-kube-api-access-z9g9c\") pod \"calico-node-hzqfw\" (UID: 
\"16cc5663-ad5b-499c-8124-c955fcb261ea\") " pod="calico-system/calico-node-hzqfw" Sep 4 19:47:38.627914 kubelet[2920]: I0904 19:47:38.627849 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7eb8079e-8474-47b0-bc26-b0dc02e1502c-typha-certs\") pod \"calico-typha-7b744d574-f5r4d\" (UID: \"7eb8079e-8474-47b0-bc26-b0dc02e1502c\") " pod="calico-system/calico-typha-7b744d574-f5r4d" Sep 4 19:47:38.628029 kubelet[2920]: I0904 19:47:38.627928 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/16cc5663-ad5b-499c-8124-c955fcb261ea-cni-log-dir\") pod \"calico-node-hzqfw\" (UID: \"16cc5663-ad5b-499c-8124-c955fcb261ea\") " pod="calico-system/calico-node-hzqfw" Sep 4 19:47:38.628135 kubelet[2920]: I0904 19:47:38.628045 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7eb8079e-8474-47b0-bc26-b0dc02e1502c-tigera-ca-bundle\") pod \"calico-typha-7b744d574-f5r4d\" (UID: \"7eb8079e-8474-47b0-bc26-b0dc02e1502c\") " pod="calico-system/calico-typha-7b744d574-f5r4d" Sep 4 19:47:38.628260 kubelet[2920]: I0904 19:47:38.628151 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16cc5663-ad5b-499c-8124-c955fcb261ea-tigera-ca-bundle\") pod \"calico-node-hzqfw\" (UID: \"16cc5663-ad5b-499c-8124-c955fcb261ea\") " pod="calico-system/calico-node-hzqfw" Sep 4 19:47:38.628260 kubelet[2920]: I0904 19:47:38.628242 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/16cc5663-ad5b-499c-8124-c955fcb261ea-var-run-calico\") pod \"calico-node-hzqfw\" (UID: \"16cc5663-ad5b-499c-8124-c955fcb261ea\") " pod="calico-system/calico-node-hzqfw" Sep 4 19:47:38.628480 kubelet[2920]: I0904 19:47:38.628415 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16cc5663-ad5b-499c-8124-c955fcb261ea-xtables-lock\") pod \"calico-node-hzqfw\" (UID: \"16cc5663-ad5b-499c-8124-c955fcb261ea\") " pod="calico-system/calico-node-hzqfw" Sep 4 19:47:38.628587 kubelet[2920]: I0904 19:47:38.628493 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrx47\" (UniqueName: \"kubernetes.io/projected/7eb8079e-8474-47b0-bc26-b0dc02e1502c-kube-api-access-wrx47\") pod \"calico-typha-7b744d574-f5r4d\" (UID: \"7eb8079e-8474-47b0-bc26-b0dc02e1502c\") " pod="calico-system/calico-typha-7b744d574-f5r4d" Sep 4 19:47:38.628694 kubelet[2920]: I0904 19:47:38.628630 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/16cc5663-ad5b-499c-8124-c955fcb261ea-node-certs\") pod \"calico-node-hzqfw\" (UID: \"16cc5663-ad5b-499c-8124-c955fcb261ea\") " pod="calico-system/calico-node-hzqfw" Sep 4 19:47:38.628807 kubelet[2920]: I0904 19:47:38.628728 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/16cc5663-ad5b-499c-8124-c955fcb261ea-cni-bin-dir\") pod \"calico-node-hzqfw\" (UID: 
\"16cc5663-ad5b-499c-8124-c955fcb261ea\") " pod="calico-system/calico-node-hzqfw" Sep 4 19:47:38.629009 kubelet[2920]: I0904 19:47:38.628966 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/16cc5663-ad5b-499c-8124-c955fcb261ea-policysync\") pod \"calico-node-hzqfw\" (UID: \"16cc5663-ad5b-499c-8124-c955fcb261ea\") " pod="calico-system/calico-node-hzqfw" Sep 4 19:47:38.712138 kubelet[2920]: I0904 19:47:38.712080 2920 topology_manager.go:215] "Topology Admit Handler" podUID="b5ea2e4f-7558-45fa-ba9d-79653786d20f" podNamespace="calico-system" podName="csi-node-driver-x4kn5" Sep 4 19:47:38.712776 kubelet[2920]: E0904 19:47:38.712736 2920 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4kn5" podUID="b5ea2e4f-7558-45fa-ba9d-79653786d20f" Sep 4 19:47:38.732266 kubelet[2920]: E0904 19:47:38.732220 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.732266 kubelet[2920]: W0904 19:47:38.732258 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.732549 kubelet[2920]: E0904 19:47:38.732344 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.732751 kubelet[2920]: E0904 19:47:38.732724 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.732751 kubelet[2920]: W0904 19:47:38.732750 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.732988 kubelet[2920]: E0904 19:47:38.732783 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.735422 kubelet[2920]: E0904 19:47:38.735278 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.735422 kubelet[2920]: W0904 19:47:38.735312 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.735422 kubelet[2920]: E0904 19:47:38.735346 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 19:47:38.736418 kubelet[2920]: E0904 19:47:38.736400 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.736418 kubelet[2920]: W0904 19:47:38.736417 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.736563 kubelet[2920]: E0904 19:47:38.736444 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.741040 kubelet[2920]: E0904 19:47:38.741019 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.741040 kubelet[2920]: W0904 19:47:38.741035 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.741215 kubelet[2920]: E0904 19:47:38.741068 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.741444 kubelet[2920]: E0904 19:47:38.741429 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.741444 kubelet[2920]: W0904 19:47:38.741443 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.741546 kubelet[2920]: E0904 19:47:38.741464 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.808588 kubelet[2920]: E0904 19:47:38.808569 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.808588 kubelet[2920]: W0904 19:47:38.808583 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.808682 kubelet[2920]: E0904 19:47:38.808599 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.808786 kubelet[2920]: E0904 19:47:38.808778 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.808786 kubelet[2920]: W0904 19:47:38.808785 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.808836 kubelet[2920]: E0904 19:47:38.808794 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 19:47:38.808928 kubelet[2920]: E0904 19:47:38.808922 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.808928 kubelet[2920]: W0904 19:47:38.808927 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.808975 kubelet[2920]: E0904 19:47:38.808934 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.809039 kubelet[2920]: E0904 19:47:38.809033 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.809065 kubelet[2920]: W0904 19:47:38.809039 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.809065 kubelet[2920]: E0904 19:47:38.809045 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.809179 kubelet[2920]: E0904 19:47:38.809174 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.809203 kubelet[2920]: W0904 19:47:38.809179 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.809203 kubelet[2920]: E0904 19:47:38.809185 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.809331 kubelet[2920]: E0904 19:47:38.809325 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.809356 kubelet[2920]: W0904 19:47:38.809332 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.809356 kubelet[2920]: E0904 19:47:38.809342 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.809619 kubelet[2920]: E0904 19:47:38.809604 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.809619 kubelet[2920]: W0904 19:47:38.809616 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.809710 kubelet[2920]: E0904 19:47:38.809625 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 19:47:38.809942 kubelet[2920]: E0904 19:47:38.809834 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.809942 kubelet[2920]: W0904 19:47:38.809847 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.809942 kubelet[2920]: E0904 19:47:38.809860 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.810135 kubelet[2920]: E0904 19:47:38.810125 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.810184 kubelet[2920]: W0904 19:47:38.810134 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.810184 kubelet[2920]: E0904 19:47:38.810158 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.810310 kubelet[2920]: E0904 19:47:38.810298 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.810347 kubelet[2920]: W0904 19:47:38.810313 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.810347 kubelet[2920]: E0904 19:47:38.810328 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.810516 kubelet[2920]: E0904 19:47:38.810507 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.810545 kubelet[2920]: W0904 19:47:38.810516 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.810545 kubelet[2920]: E0904 19:47:38.810536 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.810856 kubelet[2920]: E0904 19:47:38.810848 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.810879 kubelet[2920]: W0904 19:47:38.810858 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.810879 kubelet[2920]: E0904 19:47:38.810872 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 19:47:38.811029 kubelet[2920]: E0904 19:47:38.811022 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.811029 kubelet[2920]: W0904 19:47:38.811028 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.811074 kubelet[2920]: E0904 19:47:38.811036 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.811141 kubelet[2920]: E0904 19:47:38.811135 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.811161 kubelet[2920]: W0904 19:47:38.811142 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.811161 kubelet[2920]: E0904 19:47:38.811152 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.811266 kubelet[2920]: E0904 19:47:38.811260 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.811266 kubelet[2920]: W0904 19:47:38.811266 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.811309 kubelet[2920]: E0904 19:47:38.811273 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.811363 kubelet[2920]: E0904 19:47:38.811358 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.811385 kubelet[2920]: W0904 19:47:38.811363 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.811385 kubelet[2920]: E0904 19:47:38.811370 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.811537 kubelet[2920]: E0904 19:47:38.811531 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.811537 kubelet[2920]: W0904 19:47:38.811536 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.811581 kubelet[2920]: E0904 19:47:38.811543 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 19:47:38.811642 kubelet[2920]: E0904 19:47:38.811637 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.811662 kubelet[2920]: W0904 19:47:38.811642 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.811662 kubelet[2920]: E0904 19:47:38.811648 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.811740 kubelet[2920]: E0904 19:47:38.811735 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.811759 kubelet[2920]: W0904 19:47:38.811740 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.811759 kubelet[2920]: E0904 19:47:38.811746 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.811880 kubelet[2920]: E0904 19:47:38.811875 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.811900 kubelet[2920]: W0904 19:47:38.811880 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.811900 kubelet[2920]: E0904 19:47:38.811886 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.830313 kubelet[2920]: E0904 19:47:38.830225 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.830313 kubelet[2920]: W0904 19:47:38.830243 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.830313 kubelet[2920]: E0904 19:47:38.830257 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 19:47:38.830313 kubelet[2920]: I0904 19:47:38.830277 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b5ea2e4f-7558-45fa-ba9d-79653786d20f-varrun\") pod \"csi-node-driver-x4kn5\" (UID: \"b5ea2e4f-7558-45fa-ba9d-79653786d20f\") " pod="calico-system/csi-node-driver-x4kn5" Sep 4 19:47:38.830456 kubelet[2920]: E0904 19:47:38.830405 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.830456 kubelet[2920]: W0904 19:47:38.830413 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.830456 kubelet[2920]: E0904 19:47:38.830422 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.830456 kubelet[2920]: I0904 19:47:38.830436 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b5ea2e4f-7558-45fa-ba9d-79653786d20f-registration-dir\") pod \"csi-node-driver-x4kn5\" (UID: \"b5ea2e4f-7558-45fa-ba9d-79653786d20f\") " pod="calico-system/csi-node-driver-x4kn5" Sep 4 19:47:38.830627 kubelet[2920]: E0904 19:47:38.830585 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.830627 kubelet[2920]: W0904 19:47:38.830595 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.830627 kubelet[2920]: E0904 19:47:38.830609 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.830627 kubelet[2920]: I0904 19:47:38.830628 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5ea2e4f-7558-45fa-ba9d-79653786d20f-kubelet-dir\") pod \"csi-node-driver-x4kn5\" (UID: \"b5ea2e4f-7558-45fa-ba9d-79653786d20f\") " pod="calico-system/csi-node-driver-x4kn5" Sep 4 19:47:38.830788 kubelet[2920]: E0904 19:47:38.830776 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.830788 kubelet[2920]: W0904 19:47:38.830786 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.830844 kubelet[2920]: E0904 19:47:38.830798 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 19:47:38.830844 kubelet[2920]: I0904 19:47:38.830811 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b5ea2e4f-7558-45fa-ba9d-79653786d20f-socket-dir\") pod \"csi-node-driver-x4kn5\" (UID: \"b5ea2e4f-7558-45fa-ba9d-79653786d20f\") " pod="calico-system/csi-node-driver-x4kn5" Sep 4 19:47:38.830967 kubelet[2920]: E0904 19:47:38.830933 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.830967 kubelet[2920]: W0904 19:47:38.830943 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.830967 kubelet[2920]: E0904 19:47:38.830955 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.830967 kubelet[2920]: I0904 19:47:38.830968 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gttv6\" (UniqueName: \"kubernetes.io/projected/b5ea2e4f-7558-45fa-ba9d-79653786d20f-kube-api-access-gttv6\") pod \"csi-node-driver-x4kn5\" (UID: \"b5ea2e4f-7558-45fa-ba9d-79653786d20f\") " pod="calico-system/csi-node-driver-x4kn5" Sep 4 19:47:38.831101 kubelet[2920]: E0904 19:47:38.831094 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.831126 kubelet[2920]: W0904 19:47:38.831101 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.831126 kubelet[2920]: E0904 19:47:38.831110 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.831221 kubelet[2920]: E0904 19:47:38.831214 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.831245 kubelet[2920]: W0904 19:47:38.831221 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.831245 kubelet[2920]: E0904 19:47:38.831235 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.831360 kubelet[2920]: E0904 19:47:38.831354 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.831382 kubelet[2920]: W0904 19:47:38.831359 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.831382 kubelet[2920]: E0904 19:47:38.831368 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 19:47:38.831463 kubelet[2920]: E0904 19:47:38.831457 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.831463 kubelet[2920]: W0904 19:47:38.831463 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.831505 kubelet[2920]: E0904 19:47:38.831473 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.831581 kubelet[2920]: E0904 19:47:38.831576 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.831607 kubelet[2920]: W0904 19:47:38.831581 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.831607 kubelet[2920]: E0904 19:47:38.831592 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.831695 kubelet[2920]: E0904 19:47:38.831689 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.831695 kubelet[2920]: W0904 19:47:38.831695 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.831739 kubelet[2920]: E0904 19:47:38.831705 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.831806 kubelet[2920]: E0904 19:47:38.831801 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.831828 kubelet[2920]: W0904 19:47:38.831806 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.831828 kubelet[2920]: E0904 19:47:38.831816 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.831906 kubelet[2920]: E0904 19:47:38.831900 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.831926 kubelet[2920]: W0904 19:47:38.831906 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.831926 kubelet[2920]: E0904 19:47:38.831914 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 19:47:38.832027 kubelet[2920]: E0904 19:47:38.832021 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.832048 kubelet[2920]: W0904 19:47:38.832027 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.832048 kubelet[2920]: E0904 19:47:38.832035 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.832122 kubelet[2920]: E0904 19:47:38.832116 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.832142 kubelet[2920]: W0904 19:47:38.832122 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.832142 kubelet[2920]: E0904 19:47:38.832128 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.874769 containerd[1543]: time="2024-09-04T19:47:38.874633125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b744d574-f5r4d,Uid:7eb8079e-8474-47b0-bc26-b0dc02e1502c,Namespace:calico-system,Attempt:0,}" Sep 4 19:47:38.885608 containerd[1543]: time="2024-09-04T19:47:38.885394951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 19:47:38.885608 containerd[1543]: time="2024-09-04T19:47:38.885597450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 19:47:38.885608 containerd[1543]: time="2024-09-04T19:47:38.885605561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:47:38.885706 containerd[1543]: time="2024-09-04T19:47:38.885651848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:47:38.887155 containerd[1543]: time="2024-09-04T19:47:38.887126512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hzqfw,Uid:16cc5663-ad5b-499c-8124-c955fcb261ea,Namespace:calico-system,Attempt:0,}" Sep 4 19:47:38.896529 containerd[1543]: time="2024-09-04T19:47:38.896454215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 19:47:38.896529 containerd[1543]: time="2024-09-04T19:47:38.896488140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 19:47:38.896529 containerd[1543]: time="2024-09-04T19:47:38.896495719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:47:38.896643 containerd[1543]: time="2024-09-04T19:47:38.896541455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:47:38.902325 systemd[1]: Started cri-containerd-7b987da59594bc82bd8a877820f2c81d7575a7d4d4d5988b1359527bd6b1ce83.scope - libcontainer container 7b987da59594bc82bd8a877820f2c81d7575a7d4d4d5988b1359527bd6b1ce83. Sep 4 19:47:38.903945 systemd[1]: Started cri-containerd-1df7d635a08be4d9a8cae01d1af4a36f76de3eb97974449af3bba29c0edbbece.scope - libcontainer container 1df7d635a08be4d9a8cae01d1af4a36f76de3eb97974449af3bba29c0edbbece. Sep 4 19:47:38.913906 containerd[1543]: time="2024-09-04T19:47:38.913882608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hzqfw,Uid:16cc5663-ad5b-499c-8124-c955fcb261ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"1df7d635a08be4d9a8cae01d1af4a36f76de3eb97974449af3bba29c0edbbece\"" Sep 4 19:47:38.914453 containerd[1543]: time="2024-09-04T19:47:38.914443131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 19:47:38.925613 containerd[1543]: time="2024-09-04T19:47:38.925561163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b744d574-f5r4d,Uid:7eb8079e-8474-47b0-bc26-b0dc02e1502c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b987da59594bc82bd8a877820f2c81d7575a7d4d4d5988b1359527bd6b1ce83\"" Sep 4 19:47:38.932016 kubelet[2920]: E0904 19:47:38.932003 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.932016 kubelet[2920]: W0904 19:47:38.932014 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.932098 kubelet[2920]: E0904 19:47:38.932025 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.932140 kubelet[2920]: E0904 19:47:38.932134 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.932140 kubelet[2920]: W0904 19:47:38.932139 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.932202 kubelet[2920]: E0904 19:47:38.932146 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.932337 kubelet[2920]: E0904 19:47:38.932295 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.932337 kubelet[2920]: W0904 19:47:38.932304 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.932337 kubelet[2920]: E0904 19:47:38.932314 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 19:47:38.932465 kubelet[2920]: E0904 19:47:38.932440 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.932465 kubelet[2920]: W0904 19:47:38.932446 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.932465 kubelet[2920]: E0904 19:47:38.932455 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.932582 kubelet[2920]: E0904 19:47:38.932549 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.932582 kubelet[2920]: W0904 19:47:38.932555 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.932582 kubelet[2920]: E0904 19:47:38.932562 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.932669 kubelet[2920]: E0904 19:47:38.932665 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.932669 kubelet[2920]: W0904 19:47:38.932669 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.932711 kubelet[2920]: E0904 19:47:38.932677 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.932753 kubelet[2920]: E0904 19:47:38.932749 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.932753 kubelet[2920]: W0904 19:47:38.932753 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.932790 kubelet[2920]: E0904 19:47:38.932760 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.932831 kubelet[2920]: E0904 19:47:38.932826 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.932850 kubelet[2920]: W0904 19:47:38.932831 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.932850 kubelet[2920]: E0904 19:47:38.932837 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 19:47:38.932942 kubelet[2920]: E0904 19:47:38.932937 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.932942 kubelet[2920]: W0904 19:47:38.932942 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.932978 kubelet[2920]: E0904 19:47:38.932948 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.933031 kubelet[2920]: E0904 19:47:38.933026 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.933048 kubelet[2920]: W0904 19:47:38.933030 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.933048 kubelet[2920]: E0904 19:47:38.933037 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.933106 kubelet[2920]: E0904 19:47:38.933102 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.933125 kubelet[2920]: W0904 19:47:38.933106 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.933125 kubelet[2920]: E0904 19:47:38.933113 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.933181 kubelet[2920]: E0904 19:47:38.933177 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.933198 kubelet[2920]: W0904 19:47:38.933181 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.933198 kubelet[2920]: E0904 19:47:38.933187 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.933288 kubelet[2920]: E0904 19:47:38.933284 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.933288 kubelet[2920]: W0904 19:47:38.933288 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.933326 kubelet[2920]: E0904 19:47:38.933294 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 19:47:38.933371 kubelet[2920]: E0904 19:47:38.933366 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.933391 kubelet[2920]: W0904 19:47:38.933371 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.933391 kubelet[2920]: E0904 19:47:38.933377 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.933456 kubelet[2920]: E0904 19:47:38.933451 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.933456 kubelet[2920]: W0904 19:47:38.933455 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.933497 kubelet[2920]: E0904 19:47:38.933472 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.933527 kubelet[2920]: E0904 19:47:38.933523 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.933547 kubelet[2920]: W0904 19:47:38.933527 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.933579 kubelet[2920]: E0904 19:47:38.933555 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.933607 kubelet[2920]: E0904 19:47:38.933592 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.933607 kubelet[2920]: W0904 19:47:38.933596 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.933607 kubelet[2920]: E0904 19:47:38.933602 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.933679 kubelet[2920]: E0904 19:47:38.933675 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.933679 kubelet[2920]: W0904 19:47:38.933679 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.933719 kubelet[2920]: E0904 19:47:38.933685 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 19:47:38.933755 kubelet[2920]: E0904 19:47:38.933751 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.933755 kubelet[2920]: W0904 19:47:38.933755 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.933812 kubelet[2920]: E0904 19:47:38.933761 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.933859 kubelet[2920]: E0904 19:47:38.933852 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.933893 kubelet[2920]: W0904 19:47:38.933859 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.933893 kubelet[2920]: E0904 19:47:38.933872 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.934002 kubelet[2920]: E0904 19:47:38.933997 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.934002 kubelet[2920]: W0904 19:47:38.934001 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.934043 kubelet[2920]: E0904 19:47:38.934008 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.934074 kubelet[2920]: E0904 19:47:38.934069 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.934074 kubelet[2920]: W0904 19:47:38.934073 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.934118 kubelet[2920]: E0904 19:47:38.934079 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.934172 kubelet[2920]: E0904 19:47:38.934166 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.934190 kubelet[2920]: W0904 19:47:38.934173 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.934190 kubelet[2920]: E0904 19:47:38.934182 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 19:47:38.934275 kubelet[2920]: E0904 19:47:38.934271 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.934275 kubelet[2920]: W0904 19:47:38.934275 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.934318 kubelet[2920]: E0904 19:47:38.934283 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.934410 kubelet[2920]: E0904 19:47:38.934406 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.934428 kubelet[2920]: W0904 19:47:38.934410 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.934428 kubelet[2920]: E0904 19:47:38.934416 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 19:47:38.938703 kubelet[2920]: E0904 19:47:38.938692 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 19:47:38.938703 kubelet[2920]: W0904 19:47:38.938700 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 19:47:38.938779 kubelet[2920]: E0904 19:47:38.938714 2920 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 19:47:40.194356 containerd[1543]: time="2024-09-04T19:47:40.194303501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:40.194593 containerd[1543]: time="2024-09-04T19:47:40.194432439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Sep 4 19:47:40.194761 containerd[1543]: time="2024-09-04T19:47:40.194722263Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:40.195797 containerd[1543]: time="2024-09-04T19:47:40.195757580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:40.196232 containerd[1543]: time="2024-09-04T19:47:40.196187847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.28172991s" Sep 4 19:47:40.196232 containerd[1543]: time="2024-09-04T19:47:40.196207906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Sep 4 19:47:40.196510 containerd[1543]: time="2024-09-04T19:47:40.196469943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 19:47:40.197126 containerd[1543]: time="2024-09-04T19:47:40.197081632Z" level=info msg="CreateContainer within sandbox \"1df7d635a08be4d9a8cae01d1af4a36f76de3eb97974449af3bba29c0edbbece\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 19:47:40.220897 containerd[1543]: time="2024-09-04T19:47:40.220845525Z" level=info msg="CreateContainer within sandbox \"1df7d635a08be4d9a8cae01d1af4a36f76de3eb97974449af3bba29c0edbbece\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4f5b70c1f087e5d98562abe52ff3a2d82adcfc698220e57364bbcbc7d6439c33\"" Sep 4 19:47:40.221133 containerd[1543]: time="2024-09-04T19:47:40.221116803Z" level=info msg="StartContainer for \"4f5b70c1f087e5d98562abe52ff3a2d82adcfc698220e57364bbcbc7d6439c33\"" Sep 4 19:47:40.241362 systemd[1]: Started cri-containerd-4f5b70c1f087e5d98562abe52ff3a2d82adcfc698220e57364bbcbc7d6439c33.scope - libcontainer container 4f5b70c1f087e5d98562abe52ff3a2d82adcfc698220e57364bbcbc7d6439c33. Sep 4 19:47:40.254489 containerd[1543]: time="2024-09-04T19:47:40.254438138Z" level=info msg="StartContainer for \"4f5b70c1f087e5d98562abe52ff3a2d82adcfc698220e57364bbcbc7d6439c33\" returns successfully" Sep 4 19:47:40.259523 systemd[1]: cri-containerd-4f5b70c1f087e5d98562abe52ff3a2d82adcfc698220e57364bbcbc7d6439c33.scope: Deactivated successfully. 
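The repeated driver-call failures logged above come from the kubelet probing its FlexVolume plugin directory: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and tries to decode the JSON reply. Because that executable is not present yet, the call fails, the captured output is empty, and decoding the empty output yields "unexpected end of JSON input". The following is a minimal Go sketch of that handshake, written only to illustrate the failure mode; it is not the kubelet's actual driver-call implementation, and the reply struct is an assumed shape.

// probe_flexvolume.go - illustrative sketch (not kubelet code) of a FlexVolume "init" probe.
// With the driver binary missing, the exec call fails, out stays empty, and json.Unmarshal
// on empty input returns "unexpected end of JSON input", matching the messages above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus approximates a FlexVolume driver reply such as
// {"status":"Success","capabilities":{"attach":false}} (assumed shape for the sketch).
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probe(driverPath string) (*DriverStatus, error) {
	out, execErr := exec.Command(driverPath, "init").CombinedOutput()
	if execErr != nil {
		// e.g. the binary does not exist yet; output remains empty.
		fmt.Printf("driver call failed: %v, output: %q\n", execErr, string(out))
	}
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output for command: init: %w", err)
	}
	return &st, nil
}

func main() {
	if _, err := probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
		fmt.Println(err)
	}
}

In a typical Calico deployment this noise stops once the flexvol-driver container (created and started in the messages above) has installed the uds binary into that plugin directory.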
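The surrounding containerd and kubelet messages also trace the standard CRI flow for these pods: PullImage, then RunPodSandbox returning a sandbox id, then CreateContainer within that sandbox, then StartContainer. The sketch below shows the rough shape of those calls from a CRI client's point of view; the socket path, pod metadata, and image name are illustrative assumptions, not values taken from this host, and the real kubelet flow also pulls the image via the CRI image service first.

// cri_flow.go - illustrative sketch of the CRI calls behind the "RunPodSandbox ... returns
// sandbox id" and "StartContainer ... returns successfully" messages above. Socket path,
// metadata, and image are assumptions for the example.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "calico-node-example", Namespace: "calico-system", Uid: "example-uid", Attempt: 0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// In the real flow the image has already been pulled via the CRI image service.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "flexvol-driver", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox:", sb.PodSandboxId, "container:", ctr.ContainerId)
}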
Sep 4 19:47:40.391894 kubelet[2920]: E0904 19:47:40.391830 2920 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4kn5" podUID="b5ea2e4f-7558-45fa-ba9d-79653786d20f" Sep 4 19:47:40.501530 containerd[1543]: time="2024-09-04T19:47:40.501456027Z" level=info msg="shim disconnected" id=4f5b70c1f087e5d98562abe52ff3a2d82adcfc698220e57364bbcbc7d6439c33 namespace=k8s.io Sep 4 19:47:40.501530 containerd[1543]: time="2024-09-04T19:47:40.501489168Z" level=warning msg="cleaning up after shim disconnected" id=4f5b70c1f087e5d98562abe52ff3a2d82adcfc698220e57364bbcbc7d6439c33 namespace=k8s.io Sep 4 19:47:40.501530 containerd[1543]: time="2024-09-04T19:47:40.501496029Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 19:47:40.739661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f5b70c1f087e5d98562abe52ff3a2d82adcfc698220e57364bbcbc7d6439c33-rootfs.mount: Deactivated successfully. Sep 4 19:47:42.390676 kubelet[2920]: E0904 19:47:42.390660 2920 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4kn5" podUID="b5ea2e4f-7558-45fa-ba9d-79653786d20f" Sep 4 19:47:42.518069 containerd[1543]: time="2024-09-04T19:47:42.518015302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:42.518282 containerd[1543]: time="2024-09-04T19:47:42.518255474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Sep 4 19:47:42.518650 containerd[1543]: time="2024-09-04T19:47:42.518608090Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:42.519836 containerd[1543]: time="2024-09-04T19:47:42.519820341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:42.520711 containerd[1543]: time="2024-09-04T19:47:42.520661238Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.324174247s" Sep 4 19:47:42.520711 containerd[1543]: time="2024-09-04T19:47:42.520678565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Sep 4 19:47:42.520998 containerd[1543]: time="2024-09-04T19:47:42.520958849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 19:47:42.523982 containerd[1543]: time="2024-09-04T19:47:42.523965020Z" level=info msg="CreateContainer within sandbox \"7b987da59594bc82bd8a877820f2c81d7575a7d4d4d5988b1359527bd6b1ce83\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 
19:47:42.528127 containerd[1543]: time="2024-09-04T19:47:42.528082237Z" level=info msg="CreateContainer within sandbox \"7b987da59594bc82bd8a877820f2c81d7575a7d4d4d5988b1359527bd6b1ce83\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d3a41cfd22a818676d5e2b09530dd06b28540649638c348106b5c07f0d6605bb\"" Sep 4 19:47:42.528356 containerd[1543]: time="2024-09-04T19:47:42.528311117Z" level=info msg="StartContainer for \"d3a41cfd22a818676d5e2b09530dd06b28540649638c348106b5c07f0d6605bb\"" Sep 4 19:47:42.552344 systemd[1]: Started cri-containerd-d3a41cfd22a818676d5e2b09530dd06b28540649638c348106b5c07f0d6605bb.scope - libcontainer container d3a41cfd22a818676d5e2b09530dd06b28540649638c348106b5c07f0d6605bb. Sep 4 19:47:42.574791 containerd[1543]: time="2024-09-04T19:47:42.574765183Z" level=info msg="StartContainer for \"d3a41cfd22a818676d5e2b09530dd06b28540649638c348106b5c07f0d6605bb\" returns successfully" Sep 4 19:47:44.391024 kubelet[2920]: E0904 19:47:44.391005 2920 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4kn5" podUID="b5ea2e4f-7558-45fa-ba9d-79653786d20f" Sep 4 19:47:44.456403 kubelet[2920]: I0904 19:47:44.456389 2920 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 19:47:44.782462 containerd[1543]: time="2024-09-04T19:47:44.782403316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:44.782696 containerd[1543]: time="2024-09-04T19:47:44.782620782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 4 19:47:44.782923 containerd[1543]: time="2024-09-04T19:47:44.782910460Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:44.783984 containerd[1543]: time="2024-09-04T19:47:44.783971200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:44.784452 containerd[1543]: time="2024-09-04T19:47:44.784410634Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 2.263434085s" Sep 4 19:47:44.784452 containerd[1543]: time="2024-09-04T19:47:44.784427074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 4 19:47:44.785184 containerd[1543]: time="2024-09-04T19:47:44.785171811Z" level=info msg="CreateContainer within sandbox \"1df7d635a08be4d9a8cae01d1af4a36f76de3eb97974449af3bba29c0edbbece\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 19:47:44.789844 containerd[1543]: time="2024-09-04T19:47:44.789800652Z" level=info msg="CreateContainer within sandbox \"1df7d635a08be4d9a8cae01d1af4a36f76de3eb97974449af3bba29c0edbbece\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"52b70c324eb61b70b1f8478fb473f1db7b8874668f13679841009ef8fa0c1705\"" Sep 4 19:47:44.790068 containerd[1543]: time="2024-09-04T19:47:44.790014216Z" level=info msg="StartContainer for \"52b70c324eb61b70b1f8478fb473f1db7b8874668f13679841009ef8fa0c1705\"" Sep 4 19:47:44.824481 systemd[1]: Started cri-containerd-52b70c324eb61b70b1f8478fb473f1db7b8874668f13679841009ef8fa0c1705.scope - libcontainer container 52b70c324eb61b70b1f8478fb473f1db7b8874668f13679841009ef8fa0c1705. Sep 4 19:47:44.839391 containerd[1543]: time="2024-09-04T19:47:44.839325836Z" level=info msg="StartContainer for \"52b70c324eb61b70b1f8478fb473f1db7b8874668f13679841009ef8fa0c1705\" returns successfully" Sep 4 19:47:45.395746 systemd[1]: cri-containerd-52b70c324eb61b70b1f8478fb473f1db7b8874668f13679841009ef8fa0c1705.scope: Deactivated successfully. Sep 4 19:47:45.405411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52b70c324eb61b70b1f8478fb473f1db7b8874668f13679841009ef8fa0c1705-rootfs.mount: Deactivated successfully. Sep 4 19:47:45.486406 kubelet[2920]: I0904 19:47:45.486327 2920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7b744d574-f5r4d" podStartSLOduration=3.891372648 podStartE2EDuration="7.486168305s" podCreationTimestamp="2024-09-04 19:47:38 +0000 UTC" firstStartedPulling="2024-09-04 19:47:38.926066527 +0000 UTC m=+18.584132485" lastFinishedPulling="2024-09-04 19:47:42.520862183 +0000 UTC m=+22.178928142" observedRunningTime="2024-09-04 19:47:43.462991638 +0000 UTC m=+23.121057600" watchObservedRunningTime="2024-09-04 19:47:45.486168305 +0000 UTC m=+25.144234321" Sep 4 19:47:45.493881 kubelet[2920]: I0904 19:47:45.493837 2920 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 19:47:45.527987 kubelet[2920]: I0904 19:47:45.527902 2920 topology_manager.go:215] "Topology Admit Handler" podUID="8bff0692-b68d-4aa8-9961-ca2f51fb1464" podNamespace="kube-system" podName="coredns-76f75df574-qgfts" Sep 4 19:47:45.532325 kubelet[2920]: I0904 19:47:45.532247 2920 topology_manager.go:215] "Topology Admit Handler" podUID="4011d5c2-f944-4edf-b08f-6548b0f18f8f" podNamespace="kube-system" podName="coredns-76f75df574-ggjf5" Sep 4 19:47:45.534312 kubelet[2920]: I0904 19:47:45.534252 2920 topology_manager.go:215] "Topology Admit Handler" podUID="d8799ace-edd3-46ef-9fda-05184d5f7775" podNamespace="calico-system" podName="calico-kube-controllers-5865df48f4-bgn64" Sep 4 19:47:45.541105 systemd[1]: Created slice kubepods-burstable-pod8bff0692_b68d_4aa8_9961_ca2f51fb1464.slice - libcontainer container kubepods-burstable-pod8bff0692_b68d_4aa8_9961_ca2f51fb1464.slice. Sep 4 19:47:45.547141 systemd[1]: Created slice kubepods-burstable-pod4011d5c2_f944_4edf_b08f_6548b0f18f8f.slice - libcontainer container kubepods-burstable-pod4011d5c2_f944_4edf_b08f_6548b0f18f8f.slice. Sep 4 19:47:45.551194 systemd[1]: Created slice kubepods-besteffort-podd8799ace_edd3_46ef_9fda_05184d5f7775.slice - libcontainer container kubepods-besteffort-podd8799ace_edd3_46ef_9fda_05184d5f7775.slice. 
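A note on reading the pod_startup_latency_tracker entry above: the two durations it reports appear to differ by exactly the image-pull window, i.e. podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling), and podStartE2EDuration is simply watchObservedRunningTime minus podCreationTimestamp. This is an inference from the numbers themselves, consistent with the kubelet's pod-startup SLI excluding image pulls; a minimal Python sketch reproducing the calico-typha figures from the timestamps in that entry (nanoseconds truncated to microseconds for strptime, so the result agrees to within rounding):

```python
from datetime import datetime, timezone

# Timestamps copied from the calico-typha pod_startup_latency_tracker entry above,
# truncated to microsecond precision for %f.
fmt = "%Y-%m-%d %H:%M:%S.%f %z"
created   = datetime(2024, 9, 4, 19, 47, 38, tzinfo=timezone.utc)        # podCreationTimestamp
pull_from = datetime.strptime("2024-09-04 19:47:38.926066 +0000", fmt)   # firstStartedPulling
pull_to   = datetime.strptime("2024-09-04 19:47:42.520862 +0000", fmt)   # lastFinishedPulling
observed  = datetime.strptime("2024-09-04 19:47:45.486168 +0000", fmt)   # watchObservedRunningTime

e2e = (observed - created).total_seconds()              # ~7.486168s, the reported podStartE2EDuration
slo = e2e - (pull_to - pull_from).total_seconds()       # ~3.891373s, the reported podStartSLOduration
print(f"E2E {e2e:.6f}s, SLO {slo:.6f}s")
```

The same arithmetic also matches the calico-node entry later in the log (11.478s end-to-end, 1.031s once its ~10.45s image pull is subtracted).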
Sep 4 19:47:45.579529 kubelet[2920]: I0904 19:47:45.579478 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lp28\" (UniqueName: \"kubernetes.io/projected/4011d5c2-f944-4edf-b08f-6548b0f18f8f-kube-api-access-9lp28\") pod \"coredns-76f75df574-ggjf5\" (UID: \"4011d5c2-f944-4edf-b08f-6548b0f18f8f\") " pod="kube-system/coredns-76f75df574-ggjf5" Sep 4 19:47:45.579529 kubelet[2920]: I0904 19:47:45.579522 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4011d5c2-f944-4edf-b08f-6548b0f18f8f-config-volume\") pod \"coredns-76f75df574-ggjf5\" (UID: \"4011d5c2-f944-4edf-b08f-6548b0f18f8f\") " pod="kube-system/coredns-76f75df574-ggjf5" Sep 4 19:47:45.579690 kubelet[2920]: I0904 19:47:45.579546 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bff0692-b68d-4aa8-9961-ca2f51fb1464-config-volume\") pod \"coredns-76f75df574-qgfts\" (UID: \"8bff0692-b68d-4aa8-9961-ca2f51fb1464\") " pod="kube-system/coredns-76f75df574-qgfts" Sep 4 19:47:45.579690 kubelet[2920]: I0904 19:47:45.579572 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8799ace-edd3-46ef-9fda-05184d5f7775-tigera-ca-bundle\") pod \"calico-kube-controllers-5865df48f4-bgn64\" (UID: \"d8799ace-edd3-46ef-9fda-05184d5f7775\") " pod="calico-system/calico-kube-controllers-5865df48f4-bgn64" Sep 4 19:47:45.579764 kubelet[2920]: I0904 19:47:45.579717 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jdmh\" (UniqueName: \"kubernetes.io/projected/d8799ace-edd3-46ef-9fda-05184d5f7775-kube-api-access-2jdmh\") pod \"calico-kube-controllers-5865df48f4-bgn64\" (UID: \"d8799ace-edd3-46ef-9fda-05184d5f7775\") " pod="calico-system/calico-kube-controllers-5865df48f4-bgn64" Sep 4 19:47:45.579811 kubelet[2920]: I0904 19:47:45.579763 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np8nx\" (UniqueName: \"kubernetes.io/projected/8bff0692-b68d-4aa8-9961-ca2f51fb1464-kube-api-access-np8nx\") pod \"coredns-76f75df574-qgfts\" (UID: \"8bff0692-b68d-4aa8-9961-ca2f51fb1464\") " pod="kube-system/coredns-76f75df574-qgfts" Sep 4 19:47:45.846075 containerd[1543]: time="2024-09-04T19:47:45.845851920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qgfts,Uid:8bff0692-b68d-4aa8-9961-ca2f51fb1464,Namespace:kube-system,Attempt:0,}" Sep 4 19:47:45.851285 containerd[1543]: time="2024-09-04T19:47:45.851168283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ggjf5,Uid:4011d5c2-f944-4edf-b08f-6548b0f18f8f,Namespace:kube-system,Attempt:0,}" Sep 4 19:47:45.854489 containerd[1543]: time="2024-09-04T19:47:45.854383026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5865df48f4-bgn64,Uid:d8799ace-edd3-46ef-9fda-05184d5f7775,Namespace:calico-system,Attempt:0,}" Sep 4 19:47:46.058009 containerd[1543]: time="2024-09-04T19:47:46.057903033Z" level=info msg="shim disconnected" id=52b70c324eb61b70b1f8478fb473f1db7b8874668f13679841009ef8fa0c1705 namespace=k8s.io Sep 4 19:47:46.058009 containerd[1543]: time="2024-09-04T19:47:46.057973851Z" level=warning msg="cleaning up after shim 
disconnected" id=52b70c324eb61b70b1f8478fb473f1db7b8874668f13679841009ef8fa0c1705 namespace=k8s.io Sep 4 19:47:46.058009 containerd[1543]: time="2024-09-04T19:47:46.057980086Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 19:47:46.090930 containerd[1543]: time="2024-09-04T19:47:46.090884458Z" level=error msg="Failed to destroy network for sandbox \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 19:47:46.091126 containerd[1543]: time="2024-09-04T19:47:46.091113392Z" level=error msg="encountered an error cleaning up failed sandbox \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 19:47:46.091157 containerd[1543]: time="2024-09-04T19:47:46.091147070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ggjf5,Uid:4011d5c2-f944-4edf-b08f-6548b0f18f8f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 19:47:46.091394 kubelet[2920]: E0904 19:47:46.091355 2920 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 19:47:46.091478 kubelet[2920]: E0904 19:47:46.091423 2920 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ggjf5" Sep 4 19:47:46.091478 kubelet[2920]: E0904 19:47:46.091445 2920 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ggjf5" Sep 4 19:47:46.091520 kubelet[2920]: E0904 19:47:46.091490 2920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-ggjf5_kube-system(4011d5c2-f944-4edf-b08f-6548b0f18f8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-ggjf5_kube-system(4011d5c2-f944-4edf-b08f-6548b0f18f8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ggjf5" podUID="4011d5c2-f944-4edf-b08f-6548b0f18f8f" Sep 4 19:47:46.091900 containerd[1543]: time="2024-09-04T19:47:46.091832577Z" level=error msg="Failed to destroy network for sandbox \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 19:47:46.091934 containerd[1543]: time="2024-09-04T19:47:46.091835517Z" level=error msg="Failed to destroy network for sandbox \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 19:47:46.092062 containerd[1543]: time="2024-09-04T19:47:46.092050556Z" level=error msg="encountered an error cleaning up failed sandbox \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 19:47:46.092084 containerd[1543]: time="2024-09-04T19:47:46.092065857Z" level=error msg="encountered an error cleaning up failed sandbox \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 19:47:46.092115 containerd[1543]: time="2024-09-04T19:47:46.092099469Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qgfts,Uid:8bff0692-b68d-4aa8-9961-ca2f51fb1464,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 19:47:46.092153 containerd[1543]: time="2024-09-04T19:47:46.092076133Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5865df48f4-bgn64,Uid:d8799ace-edd3-46ef-9fda-05184d5f7775,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 19:47:46.092250 kubelet[2920]: E0904 19:47:46.092212 2920 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 4 19:47:46.092250 kubelet[2920]: E0904 19:47:46.092225 2920 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 19:47:46.092250 kubelet[2920]: E0904 19:47:46.092241 2920 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5865df48f4-bgn64" Sep 4 19:47:46.092250 kubelet[2920]: E0904 19:47:46.092242 2920 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qgfts" Sep 4 19:47:46.092361 kubelet[2920]: E0904 19:47:46.092256 2920 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qgfts" Sep 4 19:47:46.092361 kubelet[2920]: E0904 19:47:46.092260 2920 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5865df48f4-bgn64" Sep 4 19:47:46.092361 kubelet[2920]: E0904 19:47:46.092281 2920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-qgfts_kube-system(8bff0692-b68d-4aa8-9961-ca2f51fb1464)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-qgfts_kube-system(8bff0692-b68d-4aa8-9961-ca2f51fb1464)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qgfts" podUID="8bff0692-b68d-4aa8-9961-ca2f51fb1464" Sep 4 19:47:46.092430 kubelet[2920]: E0904 19:47:46.092294 2920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5865df48f4-bgn64_calico-system(d8799ace-edd3-46ef-9fda-05184d5f7775)\" with CreatePodSandboxError: \"Failed 
to create sandbox for pod \\\"calico-kube-controllers-5865df48f4-bgn64_calico-system(d8799ace-edd3-46ef-9fda-05184d5f7775)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5865df48f4-bgn64" podUID="d8799ace-edd3-46ef-9fda-05184d5f7775" Sep 4 19:47:46.092571 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e-shm.mount: Deactivated successfully. Sep 4 19:47:46.093986 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963-shm.mount: Deactivated successfully. Sep 4 19:47:46.094029 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2-shm.mount: Deactivated successfully. Sep 4 19:47:46.405944 systemd[1]: Created slice kubepods-besteffort-podb5ea2e4f_7558_45fa_ba9d_79653786d20f.slice - libcontainer container kubepods-besteffort-podb5ea2e4f_7558_45fa_ba9d_79653786d20f.slice. Sep 4 19:47:46.411167 containerd[1543]: time="2024-09-04T19:47:46.411051354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4kn5,Uid:b5ea2e4f-7558-45fa-ba9d-79653786d20f,Namespace:calico-system,Attempt:0,}" Sep 4 19:47:46.441392 containerd[1543]: time="2024-09-04T19:47:46.441333616Z" level=error msg="Failed to destroy network for sandbox \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 19:47:46.441552 containerd[1543]: time="2024-09-04T19:47:46.441501988Z" level=error msg="encountered an error cleaning up failed sandbox \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 19:47:46.441552 containerd[1543]: time="2024-09-04T19:47:46.441532179Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4kn5,Uid:b5ea2e4f-7558-45fa-ba9d-79653786d20f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 19:47:46.441702 kubelet[2920]: E0904 19:47:46.441690 2920 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 19:47:46.441737 kubelet[2920]: E0904 19:47:46.441722 2920 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x4kn5" Sep 4 19:47:46.441758 kubelet[2920]: E0904 19:47:46.441738 2920 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x4kn5" Sep 4 19:47:46.441805 kubelet[2920]: E0904 19:47:46.441771 2920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x4kn5_calico-system(b5ea2e4f-7558-45fa-ba9d-79653786d20f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x4kn5_calico-system(b5ea2e4f-7558-45fa-ba9d-79653786d20f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x4kn5" podUID="b5ea2e4f-7558-45fa-ba9d-79653786d20f" Sep 4 19:47:46.461877 kubelet[2920]: I0904 19:47:46.461866 2920 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Sep 4 19:47:46.461952 containerd[1543]: time="2024-09-04T19:47:46.461939684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 19:47:46.462241 containerd[1543]: time="2024-09-04T19:47:46.462226950Z" level=info msg="StopPodSandbox for \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\"" Sep 4 19:47:46.462355 containerd[1543]: time="2024-09-04T19:47:46.462339349Z" level=info msg="Ensure that sandbox 48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e in task-service has been cleanup successfully" Sep 4 19:47:46.462389 kubelet[2920]: I0904 19:47:46.462380 2920 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Sep 4 19:47:46.462661 containerd[1543]: time="2024-09-04T19:47:46.462646089Z" level=info msg="StopPodSandbox for \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\"" Sep 4 19:47:46.462779 containerd[1543]: time="2024-09-04T19:47:46.462765798Z" level=info msg="Ensure that sandbox d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3 in task-service has been cleanup successfully" Sep 4 19:47:46.462824 kubelet[2920]: I0904 19:47:46.462816 2920 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Sep 4 19:47:46.463070 containerd[1543]: time="2024-09-04T19:47:46.463049630Z" level=info msg="StopPodSandbox for \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\"" Sep 4 19:47:46.463193 containerd[1543]: time="2024-09-04T19:47:46.463179522Z" level=info msg="Ensure that sandbox 
870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963 in task-service has been cleanup successfully" Sep 4 19:47:46.463267 kubelet[2920]: I0904 19:47:46.463255 2920 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Sep 4 19:47:46.463516 containerd[1543]: time="2024-09-04T19:47:46.463499911Z" level=info msg="StopPodSandbox for \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\"" Sep 4 19:47:46.463643 containerd[1543]: time="2024-09-04T19:47:46.463630324Z" level=info msg="Ensure that sandbox d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2 in task-service has been cleanup successfully" Sep 4 19:47:46.476890 containerd[1543]: time="2024-09-04T19:47:46.476854148Z" level=error msg="StopPodSandbox for \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\" failed" error="failed to destroy network for sandbox \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 19:47:46.476985 containerd[1543]: time="2024-09-04T19:47:46.476878185Z" level=error msg="StopPodSandbox for \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\" failed" error="failed to destroy network for sandbox \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 19:47:46.477075 kubelet[2920]: E0904 19:47:46.477063 2920 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Sep 4 19:47:46.477121 kubelet[2920]: E0904 19:47:46.477116 2920 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963"} Sep 4 19:47:46.477147 kubelet[2920]: E0904 19:47:46.477063 2920 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Sep 4 19:47:46.477168 kubelet[2920]: E0904 19:47:46.477157 2920 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2"} Sep 4 19:47:46.477187 kubelet[2920]: E0904 19:47:46.477178 2920 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8bff0692-b68d-4aa8-9961-ca2f51fb1464\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 19:47:46.477236 kubelet[2920]: E0904 19:47:46.477196 2920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8bff0692-b68d-4aa8-9961-ca2f51fb1464\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qgfts" podUID="8bff0692-b68d-4aa8-9961-ca2f51fb1464" Sep 4 19:47:46.477236 kubelet[2920]: E0904 19:47:46.477140 2920 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d8799ace-edd3-46ef-9fda-05184d5f7775\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 19:47:46.477236 kubelet[2920]: E0904 19:47:46.477225 2920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d8799ace-edd3-46ef-9fda-05184d5f7775\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5865df48f4-bgn64" podUID="d8799ace-edd3-46ef-9fda-05184d5f7775" Sep 4 19:47:46.478391 containerd[1543]: time="2024-09-04T19:47:46.478345722Z" level=error msg="StopPodSandbox for \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\" failed" error="failed to destroy network for sandbox \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 19:47:46.478446 containerd[1543]: time="2024-09-04T19:47:46.478431348Z" level=error msg="StopPodSandbox for \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\" failed" error="failed to destroy network for sandbox \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 19:47:46.478476 kubelet[2920]: E0904 19:47:46.478450 2920 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Sep 4 19:47:46.478476 kubelet[2920]: E0904 19:47:46.478466 2920 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3"} Sep 4 19:47:46.478521 kubelet[2920]: E0904 19:47:46.478485 2920 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5ea2e4f-7558-45fa-ba9d-79653786d20f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 19:47:46.478521 kubelet[2920]: E0904 19:47:46.478499 2920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5ea2e4f-7558-45fa-ba9d-79653786d20f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x4kn5" podUID="b5ea2e4f-7558-45fa-ba9d-79653786d20f" Sep 4 19:47:46.478580 kubelet[2920]: E0904 19:47:46.478518 2920 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Sep 4 19:47:46.478580 kubelet[2920]: E0904 19:47:46.478532 2920 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e"} Sep 4 19:47:46.478580 kubelet[2920]: E0904 19:47:46.478550 2920 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4011d5c2-f944-4edf-b08f-6548b0f18f8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 19:47:46.478580 kubelet[2920]: E0904 19:47:46.478565 2920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4011d5c2-f944-4edf-b08f-6548b0f18f8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ggjf5" podUID="4011d5c2-f944-4edf-b08f-6548b0f18f8f" Sep 4 19:47:49.341808 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount721613282.mount: Deactivated successfully. Sep 4 19:47:49.358731 containerd[1543]: time="2024-09-04T19:47:49.358680515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:49.358864 containerd[1543]: time="2024-09-04T19:47:49.358806249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 4 19:47:49.359150 containerd[1543]: time="2024-09-04T19:47:49.359110544Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:49.360457 containerd[1543]: time="2024-09-04T19:47:49.360416340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:47:49.360722 containerd[1543]: time="2024-09-04T19:47:49.360683196Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 2.898724846s" Sep 4 19:47:49.360722 containerd[1543]: time="2024-09-04T19:47:49.360698014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 4 19:47:49.364305 containerd[1543]: time="2024-09-04T19:47:49.364221401Z" level=info msg="CreateContainer within sandbox \"1df7d635a08be4d9a8cae01d1af4a36f76de3eb97974449af3bba29c0edbbece\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 19:47:49.369433 containerd[1543]: time="2024-09-04T19:47:49.369390273Z" level=info msg="CreateContainer within sandbox \"1df7d635a08be4d9a8cae01d1af4a36f76de3eb97974449af3bba29c0edbbece\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3b109685fb1e585f21aaeb4bdb92c4fcb94210db94dbc707b738c0259039637b\"" Sep 4 19:47:49.369653 containerd[1543]: time="2024-09-04T19:47:49.369606339Z" level=info msg="StartContainer for \"3b109685fb1e585f21aaeb4bdb92c4fcb94210db94dbc707b738c0259039637b\"" Sep 4 19:47:49.397476 systemd[1]: Started cri-containerd-3b109685fb1e585f21aaeb4bdb92c4fcb94210db94dbc707b738c0259039637b.scope - libcontainer container 3b109685fb1e585f21aaeb4bdb92c4fcb94210db94dbc707b738c0259039637b. 
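The earlier RunPodSandbox and StopPodSandbox failures ("stat /var/lib/calico/nodename: no such file or directory") are the expected transient state before the calico-node container started above has initialized and written its node file; they stop once it is running. A minimal host-side check one could run while such errors persist, offered only as a sketch: the calico-system namespace appears in the surrounding entries, but the k8s-app=calico-node label is an assumption taken from the default Calico manifests, not from this log.

```python
import os
import subprocess

# /var/lib/calico/nodename is written by the calico/node container once it starts;
# the Calico CNI plugin stats this file on every sandbox add/delete, which is the
# exact stat() that fails in the RunPodSandbox/StopPodSandbox errors above.
NODENAME = "/var/lib/calico/nodename"

if os.path.exists(NODENAME):
    with open(NODENAME) as f:
        print("nodename present:", f.read().strip())
else:
    print("nodename missing - calico/node has not finished initializing on this host")
    try:
        # Namespace seen in the log; label assumed from the default Calico install.
        subprocess.run(
            ["kubectl", "get", "pods", "-n", "calico-system",
             "-l", "k8s-app=calico-node", "-o", "wide"],
            check=False,
        )
    except FileNotFoundError:
        print("kubectl not on PATH; check the calico-node pod from a machine that has it")
```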
Sep 4 19:47:49.427774 containerd[1543]: time="2024-09-04T19:47:49.427708530Z" level=info msg="StartContainer for \"3b109685fb1e585f21aaeb4bdb92c4fcb94210db94dbc707b738c0259039637b\" returns successfully" Sep 4 19:47:49.477591 kubelet[2920]: I0904 19:47:49.477544 2920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-hzqfw" podStartSLOduration=1.030989684 podStartE2EDuration="11.477519696s" podCreationTimestamp="2024-09-04 19:47:38 +0000 UTC" firstStartedPulling="2024-09-04 19:47:38.914318312 +0000 UTC m=+18.572384268" lastFinishedPulling="2024-09-04 19:47:49.360848322 +0000 UTC m=+29.018914280" observedRunningTime="2024-09-04 19:47:49.477259139 +0000 UTC m=+29.135325101" watchObservedRunningTime="2024-09-04 19:47:49.477519696 +0000 UTC m=+29.135585652" Sep 4 19:47:49.481802 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 19:47:49.481832 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 4 19:47:50.472128 kubelet[2920]: I0904 19:47:50.472078 2920 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 19:47:52.812292 kubelet[2920]: I0904 19:47:52.812174 2920 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 19:47:59.392195 containerd[1543]: time="2024-09-04T19:47:59.392068563Z" level=info msg="StopPodSandbox for \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\"" Sep 4 19:47:59.392195 containerd[1543]: time="2024-09-04T19:47:59.392149425Z" level=info msg="StopPodSandbox for \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\"" Sep 4 19:47:59.484387 containerd[1543]: 2024-09-04 19:47:59.462 [INFO][4748] k8s.go 608: Cleaning up netns ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Sep 4 19:47:59.484387 containerd[1543]: 2024-09-04 19:47:59.462 [INFO][4748] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" iface="eth0" netns="/var/run/netns/cni-d37c639d-0ca2-ae25-b9cc-cc8e6966df77" Sep 4 19:47:59.484387 containerd[1543]: 2024-09-04 19:47:59.462 [INFO][4748] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" iface="eth0" netns="/var/run/netns/cni-d37c639d-0ca2-ae25-b9cc-cc8e6966df77" Sep 4 19:47:59.484387 containerd[1543]: 2024-09-04 19:47:59.462 [INFO][4748] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" iface="eth0" netns="/var/run/netns/cni-d37c639d-0ca2-ae25-b9cc-cc8e6966df77" Sep 4 19:47:59.484387 containerd[1543]: 2024-09-04 19:47:59.463 [INFO][4748] k8s.go 615: Releasing IP address(es) ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Sep 4 19:47:59.484387 containerd[1543]: 2024-09-04 19:47:59.463 [INFO][4748] utils.go 188: Calico CNI releasing IP address ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Sep 4 19:47:59.484387 containerd[1543]: 2024-09-04 19:47:59.476 [INFO][4781] ipam_plugin.go 417: Releasing address using handleID ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" HandleID="k8s-pod-network.48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0" Sep 4 19:47:59.484387 containerd[1543]: 2024-09-04 19:47:59.476 [INFO][4781] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 19:47:59.484387 containerd[1543]: 2024-09-04 19:47:59.476 [INFO][4781] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 19:47:59.484387 containerd[1543]: 2024-09-04 19:47:59.480 [WARNING][4781] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" HandleID="k8s-pod-network.48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0" Sep 4 19:47:59.484387 containerd[1543]: 2024-09-04 19:47:59.480 [INFO][4781] ipam_plugin.go 445: Releasing address using workloadID ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" HandleID="k8s-pod-network.48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0" Sep 4 19:47:59.484387 containerd[1543]: 2024-09-04 19:47:59.481 [INFO][4781] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 19:47:59.484387 containerd[1543]: 2024-09-04 19:47:59.483 [INFO][4748] k8s.go 621: Teardown processing complete. ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Sep 4 19:47:59.484810 containerd[1543]: time="2024-09-04T19:47:59.484502793Z" level=info msg="TearDown network for sandbox \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\" successfully" Sep 4 19:47:59.484810 containerd[1543]: time="2024-09-04T19:47:59.484530478Z" level=info msg="StopPodSandbox for \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\" returns successfully" Sep 4 19:47:59.485076 containerd[1543]: time="2024-09-04T19:47:59.485051412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ggjf5,Uid:4011d5c2-f944-4edf-b08f-6548b0f18f8f,Namespace:kube-system,Attempt:1,}" Sep 4 19:47:59.486235 systemd[1]: run-netns-cni\x2dd37c639d\x2d0ca2\x2dae25\x2db9cc\x2dcc8e6966df77.mount: Deactivated successfully. Sep 4 19:47:59.488191 containerd[1543]: 2024-09-04 19:47:59.462 [INFO][4749] k8s.go 608: Cleaning up netns ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Sep 4 19:47:59.488191 containerd[1543]: 2024-09-04 19:47:59.462 [INFO][4749] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" iface="eth0" netns="/var/run/netns/cni-73236171-41ce-ecaa-9133-d367d3aad2bb" Sep 4 19:47:59.488191 containerd[1543]: 2024-09-04 19:47:59.462 [INFO][4749] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" iface="eth0" netns="/var/run/netns/cni-73236171-41ce-ecaa-9133-d367d3aad2bb" Sep 4 19:47:59.488191 containerd[1543]: 2024-09-04 19:47:59.462 [INFO][4749] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" iface="eth0" netns="/var/run/netns/cni-73236171-41ce-ecaa-9133-d367d3aad2bb" Sep 4 19:47:59.488191 containerd[1543]: 2024-09-04 19:47:59.463 [INFO][4749] k8s.go 615: Releasing IP address(es) ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Sep 4 19:47:59.488191 containerd[1543]: 2024-09-04 19:47:59.463 [INFO][4749] utils.go 188: Calico CNI releasing IP address ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Sep 4 19:47:59.488191 containerd[1543]: 2024-09-04 19:47:59.477 [INFO][4782] ipam_plugin.go 417: Releasing address using handleID ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" HandleID="k8s-pod-network.d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0" Sep 4 19:47:59.488191 containerd[1543]: 2024-09-04 19:47:59.477 [INFO][4782] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 19:47:59.488191 containerd[1543]: 2024-09-04 19:47:59.481 [INFO][4782] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 19:47:59.488191 containerd[1543]: 2024-09-04 19:47:59.485 [WARNING][4782] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" HandleID="k8s-pod-network.d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0" Sep 4 19:47:59.488191 containerd[1543]: 2024-09-04 19:47:59.485 [INFO][4782] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" HandleID="k8s-pod-network.d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0" Sep 4 19:47:59.488191 containerd[1543]: 2024-09-04 19:47:59.486 [INFO][4782] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 19:47:59.488191 containerd[1543]: 2024-09-04 19:47:59.487 [INFO][4749] k8s.go 621: Teardown processing complete. 
ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Sep 4 19:47:59.488454 containerd[1543]: time="2024-09-04T19:47:59.488250852Z" level=info msg="TearDown network for sandbox \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\" successfully" Sep 4 19:47:59.488454 containerd[1543]: time="2024-09-04T19:47:59.488262618Z" level=info msg="StopPodSandbox for \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\" returns successfully" Sep 4 19:47:59.488526 containerd[1543]: time="2024-09-04T19:47:59.488514109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qgfts,Uid:8bff0692-b68d-4aa8-9961-ca2f51fb1464,Namespace:kube-system,Attempt:1,}" Sep 4 19:47:59.489523 systemd[1]: run-netns-cni\x2d73236171\x2d41ce\x2decaa\x2d9133\x2dd367d3aad2bb.mount: Deactivated successfully. Sep 4 19:47:59.542867 systemd-networkd[1335]: calie2eb9c2951d: Link UP Sep 4 19:47:59.542976 systemd-networkd[1335]: calie2eb9c2951d: Gained carrier Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.500 [INFO][4812] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.506 [INFO][4812] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0 coredns-76f75df574- kube-system 4011d5c2-f944-4edf-b08f-6548b0f18f8f 659 0 2024-09-04 19:47:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4054.1.0-a-2707fc1066 coredns-76f75df574-ggjf5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie2eb9c2951d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d" Namespace="kube-system" Pod="coredns-76f75df574-ggjf5" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-" Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.506 [INFO][4812] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d" Namespace="kube-system" Pod="coredns-76f75df574-ggjf5" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0" Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.519 [INFO][4859] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d" HandleID="k8s-pod-network.bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0" Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.526 [INFO][4859] ipam_plugin.go 270: Auto assigning IP ContainerID="bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d" HandleID="k8s-pod-network.bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f4e70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4054.1.0-a-2707fc1066", "pod":"coredns-76f75df574-ggjf5", "timestamp":"2024-09-04 19:47:59.519729364 +0000 UTC"}, Hostname:"ci-4054.1.0-a-2707fc1066", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.526 [INFO][4859] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.526 [INFO][4859] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.526 [INFO][4859] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-2707fc1066' Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.526 [INFO][4859] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.529 [INFO][4859] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.531 [INFO][4859] ipam.go 489: Trying affinity for 192.168.22.64/26 host="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.532 [INFO][4859] ipam.go 155: Attempting to load block cidr=192.168.22.64/26 host="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.533 [INFO][4859] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.533 [INFO][4859] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.534 [INFO][4859] ipam.go 1685: Creating new handle: k8s-pod-network.bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.536 [INFO][4859] ipam.go 1203: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.538 [INFO][4859] ipam.go 1216: Successfully claimed IPs: [192.168.22.65/26] block=192.168.22.64/26 handle="k8s-pod-network.bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.538 [INFO][4859] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.22.65/26] handle="k8s-pod-network.bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.538 [INFO][4859] ipam_plugin.go 379: Released host-wide IPAM lock. 
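For readers following the IPAM trace above: the host claims an affine /26 block (192.168.22.64/26) and hands out 192.168.22.65 as the first workload address; the second coredns pod receives 192.168.22.66 a few entries below. A small sketch of what that block contains, noting that the stdlib hosts() view happens to line up with the addresses assigned here, though this log alone does not show whether Calico reserves the block's first and last addresses:

```python
import ipaddress

# The affine block claimed by host ci-4054.1.0-a-2707fc1066 in the trace above.
block = ipaddress.ip_network("192.168.22.64/26")
hosts = list(block.hosts())   # .65 .. .126; hosts() drops the network/broadcast addresses

print(block.num_addresses, "addresses in the block,", len(hosts), "in the hosts() view")  # 64 / 62
print("first address handed out: ", hosts[0])   # 192.168.22.65 -> coredns-76f75df574-ggjf5
print("second address handed out:", hosts[1])   # 192.168.22.66 -> coredns-76f75df574-qgfts (below)
```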
Sep 4 19:47:59.548041 containerd[1543]: 2024-09-04 19:47:59.538 [INFO][4859] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.22.65/26] IPv6=[] ContainerID="bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d" HandleID="k8s-pod-network.bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0" Sep 4 19:47:59.548467 containerd[1543]: 2024-09-04 19:47:59.539 [INFO][4812] k8s.go 386: Populated endpoint ContainerID="bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d" Namespace="kube-system" Pod="coredns-76f75df574-ggjf5" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4011d5c2-f944-4edf-b08f-6548b0f18f8f", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"", Pod:"coredns-76f75df574-ggjf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2eb9c2951d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:47:59.548467 containerd[1543]: 2024-09-04 19:47:59.539 [INFO][4812] k8s.go 387: Calico CNI using IPs: [192.168.22.65/32] ContainerID="bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d" Namespace="kube-system" Pod="coredns-76f75df574-ggjf5" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0" Sep 4 19:47:59.548467 containerd[1543]: 2024-09-04 19:47:59.539 [INFO][4812] dataplane_linux.go 68: Setting the host side veth name to calie2eb9c2951d ContainerID="bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d" Namespace="kube-system" Pod="coredns-76f75df574-ggjf5" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0" Sep 4 19:47:59.548467 containerd[1543]: 2024-09-04 19:47:59.542 [INFO][4812] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d" Namespace="kube-system" Pod="coredns-76f75df574-ggjf5" 
WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0" Sep 4 19:47:59.548467 containerd[1543]: 2024-09-04 19:47:59.543 [INFO][4812] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d" Namespace="kube-system" Pod="coredns-76f75df574-ggjf5" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4011d5c2-f944-4edf-b08f-6548b0f18f8f", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d", Pod:"coredns-76f75df574-ggjf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2eb9c2951d", MAC:"2a:37:a0:f9:85:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:47:59.548467 containerd[1543]: 2024-09-04 19:47:59.547 [INFO][4812] k8s.go 500: Wrote updated endpoint to datastore ContainerID="bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d" Namespace="kube-system" Pod="coredns-76f75df574-ggjf5" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0" Sep 4 19:47:59.553108 systemd-networkd[1335]: cali1230f0a2e13: Link UP Sep 4 19:47:59.553234 systemd-networkd[1335]: cali1230f0a2e13: Gained carrier Sep 4 19:47:59.557495 containerd[1543]: time="2024-09-04T19:47:59.557402490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 19:47:59.557495 containerd[1543]: time="2024-09-04T19:47:59.557441129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 19:47:59.557495 containerd[1543]: time="2024-09-04T19:47:59.557450039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:47:59.557604 containerd[1543]: time="2024-09-04T19:47:59.557506069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.502 [INFO][4823] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.507 [INFO][4823] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0 coredns-76f75df574- kube-system 8bff0692-b68d-4aa8-9961-ca2f51fb1464 660 0 2024-09-04 19:47:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4054.1.0-a-2707fc1066 coredns-76f75df574-qgfts eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1230f0a2e13 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082" Namespace="kube-system" Pod="coredns-76f75df574-qgfts" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-" Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.508 [INFO][4823] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082" Namespace="kube-system" Pod="coredns-76f75df574-qgfts" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0" Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.519 [INFO][4860] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082" HandleID="k8s-pod-network.1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0" Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.526 [INFO][4860] ipam_plugin.go 270: Auto assigning IP ContainerID="1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082" HandleID="k8s-pod-network.1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000374db0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4054.1.0-a-2707fc1066", "pod":"coredns-76f75df574-qgfts", "timestamp":"2024-09-04 19:47:59.519984015 +0000 UTC"}, Hostname:"ci-4054.1.0-a-2707fc1066", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.526 [INFO][4860] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.538 [INFO][4860] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.538 [INFO][4860] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-2707fc1066' Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.539 [INFO][4860] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.541 [INFO][4860] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.543 [INFO][4860] ipam.go 489: Trying affinity for 192.168.22.64/26 host="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.544 [INFO][4860] ipam.go 155: Attempting to load block cidr=192.168.22.64/26 host="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.546 [INFO][4860] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.546 [INFO][4860] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.547 [INFO][4860] ipam.go 1685: Creating new handle: k8s-pod-network.1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082 Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.549 [INFO][4860] ipam.go 1203: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.551 [INFO][4860] ipam.go 1216: Successfully claimed IPs: [192.168.22.66/26] block=192.168.22.64/26 handle="k8s-pod-network.1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.551 [INFO][4860] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.22.66/26] handle="k8s-pod-network.1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.551 [INFO][4860] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 19:47:59.557989 containerd[1543]: 2024-09-04 19:47:59.551 [INFO][4860] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.22.66/26] IPv6=[] ContainerID="1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082" HandleID="k8s-pod-network.1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0" Sep 4 19:47:59.558408 containerd[1543]: 2024-09-04 19:47:59.552 [INFO][4823] k8s.go 386: Populated endpoint ContainerID="1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082" Namespace="kube-system" Pod="coredns-76f75df574-qgfts" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8bff0692-b68d-4aa8-9961-ca2f51fb1464", ResourceVersion:"660", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"", Pod:"coredns-76f75df574-qgfts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1230f0a2e13", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:47:59.558408 containerd[1543]: 2024-09-04 19:47:59.552 [INFO][4823] k8s.go 387: Calico CNI using IPs: [192.168.22.66/32] ContainerID="1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082" Namespace="kube-system" Pod="coredns-76f75df574-qgfts" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0" Sep 4 19:47:59.558408 containerd[1543]: 2024-09-04 19:47:59.552 [INFO][4823] dataplane_linux.go 68: Setting the host side veth name to cali1230f0a2e13 ContainerID="1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082" Namespace="kube-system" Pod="coredns-76f75df574-qgfts" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0" Sep 4 19:47:59.558408 containerd[1543]: 2024-09-04 19:47:59.553 [INFO][4823] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082" Namespace="kube-system" Pod="coredns-76f75df574-qgfts" 
WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0" Sep 4 19:47:59.558408 containerd[1543]: 2024-09-04 19:47:59.553 [INFO][4823] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082" Namespace="kube-system" Pod="coredns-76f75df574-qgfts" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8bff0692-b68d-4aa8-9961-ca2f51fb1464", ResourceVersion:"660", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082", Pod:"coredns-76f75df574-qgfts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1230f0a2e13", MAC:"9e:8e:b0:62:53:78", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:47:59.558408 containerd[1543]: 2024-09-04 19:47:59.557 [INFO][4823] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082" Namespace="kube-system" Pod="coredns-76f75df574-qgfts" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0" Sep 4 19:47:59.578022 containerd[1543]: time="2024-09-04T19:47:59.577959253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 19:47:59.578190 containerd[1543]: time="2024-09-04T19:47:59.577986569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 19:47:59.578274 containerd[1543]: time="2024-09-04T19:47:59.578194869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:47:59.578274 containerd[1543]: time="2024-09-04T19:47:59.578250357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:47:59.580356 systemd[1]: Started cri-containerd-bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d.scope - libcontainer container bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d. Sep 4 19:47:59.583931 systemd[1]: Started cri-containerd-1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082.scope - libcontainer container 1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082. Sep 4 19:47:59.603733 containerd[1543]: time="2024-09-04T19:47:59.603710523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ggjf5,Uid:4011d5c2-f944-4edf-b08f-6548b0f18f8f,Namespace:kube-system,Attempt:1,} returns sandbox id \"bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d\"" Sep 4 19:47:59.605056 containerd[1543]: time="2024-09-04T19:47:59.605040973Z" level=info msg="CreateContainer within sandbox \"bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 19:47:59.605175 containerd[1543]: time="2024-09-04T19:47:59.605161861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qgfts,Uid:8bff0692-b68d-4aa8-9961-ca2f51fb1464,Namespace:kube-system,Attempt:1,} returns sandbox id \"1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082\"" Sep 4 19:47:59.606891 containerd[1543]: time="2024-09-04T19:47:59.606874274Z" level=info msg="CreateContainer within sandbox \"1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 19:47:59.610741 containerd[1543]: time="2024-09-04T19:47:59.610725690Z" level=info msg="CreateContainer within sandbox \"bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eb14d2f34a5f1976cb914abbf0e64ecebc4bffbbe7bb0adf422b7f17c4c80797\"" Sep 4 19:47:59.610958 containerd[1543]: time="2024-09-04T19:47:59.610946052Z" level=info msg="StartContainer for \"eb14d2f34a5f1976cb914abbf0e64ecebc4bffbbe7bb0adf422b7f17c4c80797\"" Sep 4 19:47:59.612478 containerd[1543]: time="2024-09-04T19:47:59.612461939Z" level=info msg="CreateContainer within sandbox \"1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4659b31cf45d64d44f6c6a65a4866e1c92c53b2177988456ce2878b51a751b0d\"" Sep 4 19:47:59.612683 containerd[1543]: time="2024-09-04T19:47:59.612669253Z" level=info msg="StartContainer for \"4659b31cf45d64d44f6c6a65a4866e1c92c53b2177988456ce2878b51a751b0d\"" Sep 4 19:47:59.633476 systemd[1]: Started cri-containerd-eb14d2f34a5f1976cb914abbf0e64ecebc4bffbbe7bb0adf422b7f17c4c80797.scope - libcontainer container eb14d2f34a5f1976cb914abbf0e64ecebc4bffbbe7bb0adf422b7f17c4c80797. Sep 4 19:47:59.634865 systemd[1]: Started cri-containerd-4659b31cf45d64d44f6c6a65a4866e1c92c53b2177988456ce2878b51a751b0d.scope - libcontainer container 4659b31cf45d64d44f6c6a65a4866e1c92c53b2177988456ce2878b51a751b0d. 
Sep 4 19:47:59.645119 containerd[1543]: time="2024-09-04T19:47:59.645060629Z" level=info msg="StartContainer for \"eb14d2f34a5f1976cb914abbf0e64ecebc4bffbbe7bb0adf422b7f17c4c80797\" returns successfully" Sep 4 19:47:59.645119 containerd[1543]: time="2024-09-04T19:47:59.645060627Z" level=info msg="StartContainer for \"4659b31cf45d64d44f6c6a65a4866e1c92c53b2177988456ce2878b51a751b0d\" returns successfully" Sep 4 19:48:00.392951 containerd[1543]: time="2024-09-04T19:48:00.392872343Z" level=info msg="StopPodSandbox for \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\"" Sep 4 19:48:00.516572 kubelet[2920]: I0904 19:48:00.516535 2920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-qgfts" podStartSLOduration=27.51648617 podStartE2EDuration="27.51648617s" podCreationTimestamp="2024-09-04 19:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 19:48:00.515931663 +0000 UTC m=+40.173997640" watchObservedRunningTime="2024-09-04 19:48:00.51648617 +0000 UTC m=+40.174552140" Sep 4 19:48:00.529937 kubelet[2920]: I0904 19:48:00.529909 2920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ggjf5" podStartSLOduration=27.529860986 podStartE2EDuration="27.529860986s" podCreationTimestamp="2024-09-04 19:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 19:48:00.529767788 +0000 UTC m=+40.187833755" watchObservedRunningTime="2024-09-04 19:48:00.529860986 +0000 UTC m=+40.187926946" Sep 4 19:48:00.533018 containerd[1543]: 2024-09-04 19:48:00.493 [INFO][5157] k8s.go 608: Cleaning up netns ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Sep 4 19:48:00.533018 containerd[1543]: 2024-09-04 19:48:00.494 [INFO][5157] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" iface="eth0" netns="/var/run/netns/cni-d88aa287-3103-5272-421c-92e73db94430" Sep 4 19:48:00.533018 containerd[1543]: 2024-09-04 19:48:00.494 [INFO][5157] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" iface="eth0" netns="/var/run/netns/cni-d88aa287-3103-5272-421c-92e73db94430" Sep 4 19:48:00.533018 containerd[1543]: 2024-09-04 19:48:00.495 [INFO][5157] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" iface="eth0" netns="/var/run/netns/cni-d88aa287-3103-5272-421c-92e73db94430" Sep 4 19:48:00.533018 containerd[1543]: 2024-09-04 19:48:00.495 [INFO][5157] k8s.go 615: Releasing IP address(es) ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Sep 4 19:48:00.533018 containerd[1543]: 2024-09-04 19:48:00.495 [INFO][5157] utils.go 188: Calico CNI releasing IP address ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Sep 4 19:48:00.533018 containerd[1543]: 2024-09-04 19:48:00.525 [INFO][5175] ipam_plugin.go 417: Releasing address using handleID ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" HandleID="k8s-pod-network.d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Workload="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0" Sep 4 19:48:00.533018 containerd[1543]: 2024-09-04 19:48:00.525 [INFO][5175] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 19:48:00.533018 containerd[1543]: 2024-09-04 19:48:00.525 [INFO][5175] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 19:48:00.533018 containerd[1543]: 2024-09-04 19:48:00.530 [WARNING][5175] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" HandleID="k8s-pod-network.d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Workload="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0" Sep 4 19:48:00.533018 containerd[1543]: 2024-09-04 19:48:00.530 [INFO][5175] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" HandleID="k8s-pod-network.d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Workload="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0" Sep 4 19:48:00.533018 containerd[1543]: 2024-09-04 19:48:00.531 [INFO][5175] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 19:48:00.533018 containerd[1543]: 2024-09-04 19:48:00.531 [INFO][5157] k8s.go 621: Teardown processing complete. ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Sep 4 19:48:00.533411 containerd[1543]: time="2024-09-04T19:48:00.533123968Z" level=info msg="TearDown network for sandbox \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\" successfully" Sep 4 19:48:00.533411 containerd[1543]: time="2024-09-04T19:48:00.533147660Z" level=info msg="StopPodSandbox for \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\" returns successfully" Sep 4 19:48:00.533683 containerd[1543]: time="2024-09-04T19:48:00.533667904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4kn5,Uid:b5ea2e4f-7558-45fa-ba9d-79653786d20f,Namespace:calico-system,Attempt:1,}" Sep 4 19:48:00.534756 systemd[1]: run-netns-cni\x2dd88aa287\x2d3103\x2d5272\x2d421c\x2d92e73db94430.mount: Deactivated successfully. 
Sep 4 19:48:00.585465 systemd-networkd[1335]: cali200b85081f5: Link UP Sep 4 19:48:00.585579 systemd-networkd[1335]: cali200b85081f5: Gained carrier Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.546 [INFO][5197] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.552 [INFO][5197] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0 csi-node-driver- calico-system b5ea2e4f-7558-45fa-ba9d-79653786d20f 675 0 2024-09-04 19:47:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4054.1.0-a-2707fc1066 csi-node-driver-x4kn5 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali200b85081f5 [] []}} ContainerID="bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1" Namespace="calico-system" Pod="csi-node-driver-x4kn5" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-" Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.552 [INFO][5197] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1" Namespace="calico-system" Pod="csi-node-driver-x4kn5" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0" Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.566 [INFO][5221] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1" HandleID="k8s-pod-network.bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1" Workload="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0" Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.571 [INFO][5221] ipam_plugin.go 270: Auto assigning IP ContainerID="bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1" HandleID="k8s-pod-network.bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1" Workload="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000380850), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4054.1.0-a-2707fc1066", "pod":"csi-node-driver-x4kn5", "timestamp":"2024-09-04 19:48:00.566749434 +0000 UTC"}, Hostname:"ci-4054.1.0-a-2707fc1066", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.571 [INFO][5221] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.571 [INFO][5221] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.571 [INFO][5221] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-2707fc1066' Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.572 [INFO][5221] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.574 [INFO][5221] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.576 [INFO][5221] ipam.go 489: Trying affinity for 192.168.22.64/26 host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.577 [INFO][5221] ipam.go 155: Attempting to load block cidr=192.168.22.64/26 host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.578 [INFO][5221] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.578 [INFO][5221] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.579 [INFO][5221] ipam.go 1685: Creating new handle: k8s-pod-network.bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1 Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.580 [INFO][5221] ipam.go 1203: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.583 [INFO][5221] ipam.go 1216: Successfully claimed IPs: [192.168.22.67/26] block=192.168.22.64/26 handle="k8s-pod-network.bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.583 [INFO][5221] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.22.67/26] handle="k8s-pod-network.bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.584 [INFO][5221] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 19:48:00.589902 containerd[1543]: 2024-09-04 19:48:00.584 [INFO][5221] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.22.67/26] IPv6=[] ContainerID="bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1" HandleID="k8s-pod-network.bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1" Workload="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0" Sep 4 19:48:00.590305 containerd[1543]: 2024-09-04 19:48:00.584 [INFO][5197] k8s.go 386: Populated endpoint ContainerID="bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1" Namespace="calico-system" Pod="csi-node-driver-x4kn5" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b5ea2e4f-7558-45fa-ba9d-79653786d20f", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 47, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"", Pod:"csi-node-driver-x4kn5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.22.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali200b85081f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:48:00.590305 containerd[1543]: 2024-09-04 19:48:00.584 [INFO][5197] k8s.go 387: Calico CNI using IPs: [192.168.22.67/32] ContainerID="bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1" Namespace="calico-system" Pod="csi-node-driver-x4kn5" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0" Sep 4 19:48:00.590305 containerd[1543]: 2024-09-04 19:48:00.584 [INFO][5197] dataplane_linux.go 68: Setting the host side veth name to cali200b85081f5 ContainerID="bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1" Namespace="calico-system" Pod="csi-node-driver-x4kn5" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0" Sep 4 19:48:00.590305 containerd[1543]: 2024-09-04 19:48:00.585 [INFO][5197] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1" Namespace="calico-system" Pod="csi-node-driver-x4kn5" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0" Sep 4 19:48:00.590305 containerd[1543]: 2024-09-04 19:48:00.585 [INFO][5197] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1" Namespace="calico-system" Pod="csi-node-driver-x4kn5" 
WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b5ea2e4f-7558-45fa-ba9d-79653786d20f", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 47, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1", Pod:"csi-node-driver-x4kn5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.22.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali200b85081f5", MAC:"5e:07:46:47:9b:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:48:00.590305 containerd[1543]: 2024-09-04 19:48:00.589 [INFO][5197] k8s.go 500: Wrote updated endpoint to datastore ContainerID="bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1" Namespace="calico-system" Pod="csi-node-driver-x4kn5" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0" Sep 4 19:48:00.598494 containerd[1543]: time="2024-09-04T19:48:00.598454908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 19:48:00.598494 containerd[1543]: time="2024-09-04T19:48:00.598484982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 19:48:00.598494 containerd[1543]: time="2024-09-04T19:48:00.598492189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:48:00.598623 containerd[1543]: time="2024-09-04T19:48:00.598528824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:48:00.612520 systemd[1]: Started cri-containerd-bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1.scope - libcontainer container bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1. 
Sep 4 19:48:00.622141 containerd[1543]: time="2024-09-04T19:48:00.622119215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4kn5,Uid:b5ea2e4f-7558-45fa-ba9d-79653786d20f,Namespace:calico-system,Attempt:1,} returns sandbox id \"bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1\"" Sep 4 19:48:00.623223 containerd[1543]: time="2024-09-04T19:48:00.623033515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 19:48:01.129562 systemd-networkd[1335]: calie2eb9c2951d: Gained IPv6LL Sep 4 19:48:01.130311 systemd-networkd[1335]: cali1230f0a2e13: Gained IPv6LL Sep 4 19:48:01.391981 containerd[1543]: time="2024-09-04T19:48:01.391721338Z" level=info msg="StopPodSandbox for \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\"" Sep 4 19:48:01.434443 containerd[1543]: 2024-09-04 19:48:01.417 [INFO][5347] k8s.go 608: Cleaning up netns ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Sep 4 19:48:01.434443 containerd[1543]: 2024-09-04 19:48:01.417 [INFO][5347] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" iface="eth0" netns="/var/run/netns/cni-5f7d53d9-62f8-41e5-516d-1060777b4d1f" Sep 4 19:48:01.434443 containerd[1543]: 2024-09-04 19:48:01.417 [INFO][5347] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" iface="eth0" netns="/var/run/netns/cni-5f7d53d9-62f8-41e5-516d-1060777b4d1f" Sep 4 19:48:01.434443 containerd[1543]: 2024-09-04 19:48:01.417 [INFO][5347] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" iface="eth0" netns="/var/run/netns/cni-5f7d53d9-62f8-41e5-516d-1060777b4d1f" Sep 4 19:48:01.434443 containerd[1543]: 2024-09-04 19:48:01.417 [INFO][5347] k8s.go 615: Releasing IP address(es) ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Sep 4 19:48:01.434443 containerd[1543]: 2024-09-04 19:48:01.417 [INFO][5347] utils.go 188: Calico CNI releasing IP address ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Sep 4 19:48:01.434443 containerd[1543]: 2024-09-04 19:48:01.427 [INFO][5362] ipam_plugin.go 417: Releasing address using handleID ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" HandleID="k8s-pod-network.870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Workload="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0" Sep 4 19:48:01.434443 containerd[1543]: 2024-09-04 19:48:01.428 [INFO][5362] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 19:48:01.434443 containerd[1543]: 2024-09-04 19:48:01.428 [INFO][5362] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 19:48:01.434443 containerd[1543]: 2024-09-04 19:48:01.432 [WARNING][5362] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" HandleID="k8s-pod-network.870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Workload="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0" Sep 4 19:48:01.434443 containerd[1543]: 2024-09-04 19:48:01.432 [INFO][5362] ipam_plugin.go 445: Releasing address using workloadID ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" HandleID="k8s-pod-network.870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Workload="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0" Sep 4 19:48:01.434443 containerd[1543]: 2024-09-04 19:48:01.433 [INFO][5362] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 19:48:01.434443 containerd[1543]: 2024-09-04 19:48:01.433 [INFO][5347] k8s.go 621: Teardown processing complete. ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Sep 4 19:48:01.435032 containerd[1543]: time="2024-09-04T19:48:01.434519097Z" level=info msg="TearDown network for sandbox \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\" successfully" Sep 4 19:48:01.435032 containerd[1543]: time="2024-09-04T19:48:01.434537458Z" level=info msg="StopPodSandbox for \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\" returns successfully" Sep 4 19:48:01.435084 containerd[1543]: time="2024-09-04T19:48:01.435065781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5865df48f4-bgn64,Uid:d8799ace-edd3-46ef-9fda-05184d5f7775,Namespace:calico-system,Attempt:1,}" Sep 4 19:48:01.487477 systemd[1]: run-netns-cni\x2d5f7d53d9\x2d62f8\x2d41e5\x2d516d\x2d1060777b4d1f.mount: Deactivated successfully. 
Sep 4 19:48:01.502187 systemd-networkd[1335]: calic66b248a4b4: Link UP Sep 4 19:48:01.502335 systemd-networkd[1335]: calic66b248a4b4: Gained carrier Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.460 [INFO][5380] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.466 [INFO][5380] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0 calico-kube-controllers-5865df48f4- calico-system d8799ace-edd3-46ef-9fda-05184d5f7775 697 0 2024-09-04 19:47:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5865df48f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4054.1.0-a-2707fc1066 calico-kube-controllers-5865df48f4-bgn64 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic66b248a4b4 [] []}} ContainerID="9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90" Namespace="calico-system" Pod="calico-kube-controllers-5865df48f4-bgn64" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-" Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.466 [INFO][5380] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90" Namespace="calico-system" Pod="calico-kube-controllers-5865df48f4-bgn64" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0" Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.480 [INFO][5400] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90" HandleID="k8s-pod-network.9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90" Workload="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0" Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.486 [INFO][5400] ipam_plugin.go 270: Auto assigning IP ContainerID="9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90" HandleID="k8s-pod-network.9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90" Workload="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00050d650), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4054.1.0-a-2707fc1066", "pod":"calico-kube-controllers-5865df48f4-bgn64", "timestamp":"2024-09-04 19:48:01.480222874 +0000 UTC"}, Hostname:"ci-4054.1.0-a-2707fc1066", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.486 [INFO][5400] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.486 [INFO][5400] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.486 [INFO][5400] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-2707fc1066' Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.487 [INFO][5400] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.489 [INFO][5400] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.492 [INFO][5400] ipam.go 489: Trying affinity for 192.168.22.64/26 host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.493 [INFO][5400] ipam.go 155: Attempting to load block cidr=192.168.22.64/26 host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.494 [INFO][5400] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.494 [INFO][5400] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.495 [INFO][5400] ipam.go 1685: Creating new handle: k8s-pod-network.9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90 Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.497 [INFO][5400] ipam.go 1203: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.500 [INFO][5400] ipam.go 1216: Successfully claimed IPs: [192.168.22.68/26] block=192.168.22.64/26 handle="k8s-pod-network.9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.500 [INFO][5400] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.22.68/26] handle="k8s-pod-network.9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.500 [INFO][5400] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 19:48:01.508292 containerd[1543]: 2024-09-04 19:48:01.500 [INFO][5400] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.22.68/26] IPv6=[] ContainerID="9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90" HandleID="k8s-pod-network.9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90" Workload="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0" Sep 4 19:48:01.508851 containerd[1543]: 2024-09-04 19:48:01.501 [INFO][5380] k8s.go 386: Populated endpoint ContainerID="9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90" Namespace="calico-system" Pod="calico-kube-controllers-5865df48f4-bgn64" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0", GenerateName:"calico-kube-controllers-5865df48f4-", Namespace:"calico-system", SelfLink:"", UID:"d8799ace-edd3-46ef-9fda-05184d5f7775", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 47, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5865df48f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"", Pod:"calico-kube-controllers-5865df48f4-bgn64", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic66b248a4b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:48:01.508851 containerd[1543]: 2024-09-04 19:48:01.501 [INFO][5380] k8s.go 387: Calico CNI using IPs: [192.168.22.68/32] ContainerID="9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90" Namespace="calico-system" Pod="calico-kube-controllers-5865df48f4-bgn64" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0" Sep 4 19:48:01.508851 containerd[1543]: 2024-09-04 19:48:01.501 [INFO][5380] dataplane_linux.go 68: Setting the host side veth name to calic66b248a4b4 ContainerID="9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90" Namespace="calico-system" Pod="calico-kube-controllers-5865df48f4-bgn64" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0" Sep 4 19:48:01.508851 containerd[1543]: 2024-09-04 19:48:01.502 [INFO][5380] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90" Namespace="calico-system" Pod="calico-kube-controllers-5865df48f4-bgn64" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0" Sep 4 19:48:01.508851 containerd[1543]: 2024-09-04 19:48:01.502 [INFO][5380] 
k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90" Namespace="calico-system" Pod="calico-kube-controllers-5865df48f4-bgn64" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0", GenerateName:"calico-kube-controllers-5865df48f4-", Namespace:"calico-system", SelfLink:"", UID:"d8799ace-edd3-46ef-9fda-05184d5f7775", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 47, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5865df48f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90", Pod:"calico-kube-controllers-5865df48f4-bgn64", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic66b248a4b4", MAC:"5a:71:34:9c:7a:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:48:01.508851 containerd[1543]: 2024-09-04 19:48:01.507 [INFO][5380] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90" Namespace="calico-system" Pod="calico-kube-controllers-5865df48f4-bgn64" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0" Sep 4 19:48:01.517964 containerd[1543]: time="2024-09-04T19:48:01.517923921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 19:48:01.517964 containerd[1543]: time="2024-09-04T19:48:01.517953538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 19:48:01.517964 containerd[1543]: time="2024-09-04T19:48:01.517960775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:48:01.518071 containerd[1543]: time="2024-09-04T19:48:01.517998891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:48:01.537535 systemd[1]: Started cri-containerd-9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90.scope - libcontainer container 9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90. 
Sep 4 19:48:01.560612 containerd[1543]: time="2024-09-04T19:48:01.560561877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5865df48f4-bgn64,Uid:d8799ace-edd3-46ef-9fda-05184d5f7775,Namespace:calico-system,Attempt:1,} returns sandbox id \"9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90\"" Sep 4 19:48:01.897365 systemd-networkd[1335]: cali200b85081f5: Gained IPv6LL Sep 4 19:48:01.911015 containerd[1543]: time="2024-09-04T19:48:01.910994522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:48:01.911144 containerd[1543]: time="2024-09-04T19:48:01.911123501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 4 19:48:01.911588 containerd[1543]: time="2024-09-04T19:48:01.911575803Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:48:01.912532 containerd[1543]: time="2024-09-04T19:48:01.912519639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:48:01.913211 containerd[1543]: time="2024-09-04T19:48:01.913195885Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.290138546s" Sep 4 19:48:01.913257 containerd[1543]: time="2024-09-04T19:48:01.913216590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 4 19:48:01.913530 containerd[1543]: time="2024-09-04T19:48:01.913520845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 19:48:01.914269 containerd[1543]: time="2024-09-04T19:48:01.914239580Z" level=info msg="CreateContainer within sandbox \"bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 19:48:01.919308 containerd[1543]: time="2024-09-04T19:48:01.919295987Z" level=info msg="CreateContainer within sandbox \"bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4a31d085b2d8cb87ae19ccc95b2cb22f076ee815a0e76b662800f934d4fe54a9\"" Sep 4 19:48:01.919641 containerd[1543]: time="2024-09-04T19:48:01.919627276Z" level=info msg="StartContainer for \"4a31d085b2d8cb87ae19ccc95b2cb22f076ee815a0e76b662800f934d4fe54a9\"" Sep 4 19:48:01.939369 systemd[1]: Started cri-containerd-4a31d085b2d8cb87ae19ccc95b2cb22f076ee815a0e76b662800f934d4fe54a9.scope - libcontainer container 4a31d085b2d8cb87ae19ccc95b2cb22f076ee815a0e76b662800f934d4fe54a9. 
Sep 4 19:48:01.952069 containerd[1543]: time="2024-09-04T19:48:01.952044216Z" level=info msg="StartContainer for \"4a31d085b2d8cb87ae19ccc95b2cb22f076ee815a0e76b662800f934d4fe54a9\" returns successfully" Sep 4 19:48:02.857370 systemd-networkd[1335]: calic66b248a4b4: Gained IPv6LL Sep 4 19:48:04.022412 containerd[1543]: time="2024-09-04T19:48:04.022346345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:48:04.022701 containerd[1543]: time="2024-09-04T19:48:04.022593553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 4 19:48:04.023012 containerd[1543]: time="2024-09-04T19:48:04.022992425Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:48:04.024314 containerd[1543]: time="2024-09-04T19:48:04.024294540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:48:04.024823 containerd[1543]: time="2024-09-04T19:48:04.024805480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.111265957s" Sep 4 19:48:04.024861 containerd[1543]: time="2024-09-04T19:48:04.024829817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 4 19:48:04.025177 containerd[1543]: time="2024-09-04T19:48:04.025161556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 19:48:04.028969 containerd[1543]: time="2024-09-04T19:48:04.028947363Z" level=info msg="CreateContainer within sandbox \"9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 19:48:04.033611 containerd[1543]: time="2024-09-04T19:48:04.033587073Z" level=info msg="CreateContainer within sandbox \"9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3a8e061b6966c5cea2dae082fcdcae31b39d071cd6765c86504973d0e4ebc7fe\"" Sep 4 19:48:04.033928 containerd[1543]: time="2024-09-04T19:48:04.033909259Z" level=info msg="StartContainer for \"3a8e061b6966c5cea2dae082fcdcae31b39d071cd6765c86504973d0e4ebc7fe\"" Sep 4 19:48:04.056356 systemd[1]: Started cri-containerd-3a8e061b6966c5cea2dae082fcdcae31b39d071cd6765c86504973d0e4ebc7fe.scope - libcontainer container 3a8e061b6966c5cea2dae082fcdcae31b39d071cd6765c86504973d0e4ebc7fe. 
Sep 4 19:48:04.079058 containerd[1543]: time="2024-09-04T19:48:04.079036665Z" level=info msg="StartContainer for \"3a8e061b6966c5cea2dae082fcdcae31b39d071cd6765c86504973d0e4ebc7fe\" returns successfully" Sep 4 19:48:04.551889 kubelet[2920]: I0904 19:48:04.551789 2920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5865df48f4-bgn64" podStartSLOduration=24.087753222 podStartE2EDuration="26.551685392s" podCreationTimestamp="2024-09-04 19:47:38 +0000 UTC" firstStartedPulling="2024-09-04 19:48:01.561096738 +0000 UTC m=+41.219162697" lastFinishedPulling="2024-09-04 19:48:04.025028908 +0000 UTC m=+43.683094867" observedRunningTime="2024-09-04 19:48:04.550928861 +0000 UTC m=+44.208994951" watchObservedRunningTime="2024-09-04 19:48:04.551685392 +0000 UTC m=+44.209751404" Sep 4 19:48:05.381489 kubelet[2920]: I0904 19:48:05.381434 2920 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 19:48:05.382100 containerd[1543]: time="2024-09-04T19:48:05.382082995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:48:05.382353 containerd[1543]: time="2024-09-04T19:48:05.382329467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 4 19:48:05.382755 containerd[1543]: time="2024-09-04T19:48:05.382744329Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:48:05.383765 containerd[1543]: time="2024-09-04T19:48:05.383753086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:48:05.384528 containerd[1543]: time="2024-09-04T19:48:05.384491419Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.359309464s" Sep 4 19:48:05.384528 containerd[1543]: time="2024-09-04T19:48:05.384507577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 4 19:48:05.385214 containerd[1543]: time="2024-09-04T19:48:05.385197517Z" level=info msg="CreateContainer within sandbox \"bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 19:48:05.391965 containerd[1543]: time="2024-09-04T19:48:05.391912783Z" level=info msg="CreateContainer within sandbox \"bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6e54ad81b32c943943482b395ae1081fc1e7fbd571e4465b639d121a3907f4b9\"" Sep 4 19:48:05.392195 containerd[1543]: time="2024-09-04T19:48:05.392183014Z" level=info msg="StartContainer for \"6e54ad81b32c943943482b395ae1081fc1e7fbd571e4465b639d121a3907f4b9\"" Sep 4 19:48:05.424740 
systemd[1]: Started cri-containerd-6e54ad81b32c943943482b395ae1081fc1e7fbd571e4465b639d121a3907f4b9.scope - libcontainer container 6e54ad81b32c943943482b395ae1081fc1e7fbd571e4465b639d121a3907f4b9. Sep 4 19:48:05.479408 containerd[1543]: time="2024-09-04T19:48:05.479373631Z" level=info msg="StartContainer for \"6e54ad81b32c943943482b395ae1081fc1e7fbd571e4465b639d121a3907f4b9\" returns successfully" Sep 4 19:48:05.537705 kubelet[2920]: I0904 19:48:05.537657 2920 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 19:48:05.544152 kubelet[2920]: I0904 19:48:05.544130 2920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-x4kn5" podStartSLOduration=22.782189454 podStartE2EDuration="27.544102973s" podCreationTimestamp="2024-09-04 19:47:38 +0000 UTC" firstStartedPulling="2024-09-04 19:48:00.622678628 +0000 UTC m=+40.280744586" lastFinishedPulling="2024-09-04 19:48:05.384592147 +0000 UTC m=+45.042658105" observedRunningTime="2024-09-04 19:48:05.543903092 +0000 UTC m=+45.201969056" watchObservedRunningTime="2024-09-04 19:48:05.544102973 +0000 UTC m=+45.202168937" Sep 4 19:48:06.110241 kernel: bpftool[5828]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 19:48:06.261327 systemd-networkd[1335]: vxlan.calico: Link UP Sep 4 19:48:06.261332 systemd-networkd[1335]: vxlan.calico: Gained carrier Sep 4 19:48:06.428744 kubelet[2920]: I0904 19:48:06.428701 2920 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 19:48:06.428744 kubelet[2920]: I0904 19:48:06.428720 2920 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 19:48:08.041523 systemd-networkd[1335]: vxlan.calico: Gained IPv6LL Sep 4 19:48:10.792765 kubelet[2920]: I0904 19:48:10.792654 2920 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 19:48:20.388813 containerd[1543]: time="2024-09-04T19:48:20.388682463Z" level=info msg="StopPodSandbox for \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\"" Sep 4 19:48:20.428600 containerd[1543]: 2024-09-04 19:48:20.408 [WARNING][6036] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8bff0692-b68d-4aa8-9961-ca2f51fb1464", ResourceVersion:"679", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082", Pod:"coredns-76f75df574-qgfts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1230f0a2e13", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:48:20.428600 containerd[1543]: 2024-09-04 19:48:20.408 [INFO][6036] k8s.go 608: Cleaning up netns ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Sep 4 19:48:20.428600 containerd[1543]: 2024-09-04 19:48:20.409 [INFO][6036] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" iface="eth0" netns="" Sep 4 19:48:20.428600 containerd[1543]: 2024-09-04 19:48:20.409 [INFO][6036] k8s.go 615: Releasing IP address(es) ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Sep 4 19:48:20.428600 containerd[1543]: 2024-09-04 19:48:20.409 [INFO][6036] utils.go 188: Calico CNI releasing IP address ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Sep 4 19:48:20.428600 containerd[1543]: 2024-09-04 19:48:20.422 [INFO][6050] ipam_plugin.go 417: Releasing address using handleID ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" HandleID="k8s-pod-network.d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0" Sep 4 19:48:20.428600 containerd[1543]: 2024-09-04 19:48:20.422 [INFO][6050] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 19:48:20.428600 containerd[1543]: 2024-09-04 19:48:20.422 [INFO][6050] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 19:48:20.428600 containerd[1543]: 2024-09-04 19:48:20.426 [WARNING][6050] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" HandleID="k8s-pod-network.d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0" Sep 4 19:48:20.428600 containerd[1543]: 2024-09-04 19:48:20.426 [INFO][6050] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" HandleID="k8s-pod-network.d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0" Sep 4 19:48:20.428600 containerd[1543]: 2024-09-04 19:48:20.427 [INFO][6050] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 19:48:20.428600 containerd[1543]: 2024-09-04 19:48:20.427 [INFO][6036] k8s.go 621: Teardown processing complete. ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Sep 4 19:48:20.428956 containerd[1543]: time="2024-09-04T19:48:20.428622496Z" level=info msg="TearDown network for sandbox \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\" successfully" Sep 4 19:48:20.428956 containerd[1543]: time="2024-09-04T19:48:20.428644226Z" level=info msg="StopPodSandbox for \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\" returns successfully" Sep 4 19:48:20.428956 containerd[1543]: time="2024-09-04T19:48:20.428893974Z" level=info msg="RemovePodSandbox for \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\"" Sep 4 19:48:20.428956 containerd[1543]: time="2024-09-04T19:48:20.428911560Z" level=info msg="Forcibly stopping sandbox \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\"" Sep 4 19:48:20.466943 containerd[1543]: 2024-09-04 19:48:20.448 [WARNING][6080] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8bff0692-b68d-4aa8-9961-ca2f51fb1464", ResourceVersion:"679", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"1a00046b72b289fdafe5c8fbd72c2b8b8bd3310977f702e40be256b03942c082", Pod:"coredns-76f75df574-qgfts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1230f0a2e13", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:48:20.466943 containerd[1543]: 2024-09-04 19:48:20.448 [INFO][6080] k8s.go 608: Cleaning up netns ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Sep 4 19:48:20.466943 containerd[1543]: 2024-09-04 19:48:20.448 [INFO][6080] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" iface="eth0" netns="" Sep 4 19:48:20.466943 containerd[1543]: 2024-09-04 19:48:20.448 [INFO][6080] k8s.go 615: Releasing IP address(es) ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Sep 4 19:48:20.466943 containerd[1543]: 2024-09-04 19:48:20.448 [INFO][6080] utils.go 188: Calico CNI releasing IP address ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Sep 4 19:48:20.466943 containerd[1543]: 2024-09-04 19:48:20.459 [INFO][6095] ipam_plugin.go 417: Releasing address using handleID ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" HandleID="k8s-pod-network.d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0" Sep 4 19:48:20.466943 containerd[1543]: 2024-09-04 19:48:20.459 [INFO][6095] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 19:48:20.466943 containerd[1543]: 2024-09-04 19:48:20.459 [INFO][6095] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 19:48:20.466943 containerd[1543]: 2024-09-04 19:48:20.464 [WARNING][6095] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" HandleID="k8s-pod-network.d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0" Sep 4 19:48:20.466943 containerd[1543]: 2024-09-04 19:48:20.464 [INFO][6095] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" HandleID="k8s-pod-network.d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--qgfts-eth0" Sep 4 19:48:20.466943 containerd[1543]: 2024-09-04 19:48:20.465 [INFO][6095] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 19:48:20.466943 containerd[1543]: 2024-09-04 19:48:20.466 [INFO][6080] k8s.go 621: Teardown processing complete. ContainerID="d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2" Sep 4 19:48:20.467276 containerd[1543]: time="2024-09-04T19:48:20.466944305Z" level=info msg="TearDown network for sandbox \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\" successfully" Sep 4 19:48:20.468346 containerd[1543]: time="2024-09-04T19:48:20.468304914Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 19:48:20.468346 containerd[1543]: time="2024-09-04T19:48:20.468335524Z" level=info msg="RemovePodSandbox \"d2396789301b656ed7ce977b05828f23b7f3218a33faa1fd77942d7416288ed2\" returns successfully" Sep 4 19:48:20.468699 containerd[1543]: time="2024-09-04T19:48:20.468657112Z" level=info msg="StopPodSandbox for \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\"" Sep 4 19:48:20.505795 containerd[1543]: 2024-09-04 19:48:20.487 [WARNING][6125] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b5ea2e4f-7558-45fa-ba9d-79653786d20f", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 47, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1", Pod:"csi-node-driver-x4kn5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.22.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali200b85081f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:48:20.505795 containerd[1543]: 2024-09-04 19:48:20.487 [INFO][6125] k8s.go 608: Cleaning up netns ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Sep 4 19:48:20.505795 containerd[1543]: 2024-09-04 19:48:20.487 [INFO][6125] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" iface="eth0" netns="" Sep 4 19:48:20.505795 containerd[1543]: 2024-09-04 19:48:20.487 [INFO][6125] k8s.go 615: Releasing IP address(es) ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Sep 4 19:48:20.505795 containerd[1543]: 2024-09-04 19:48:20.487 [INFO][6125] utils.go 188: Calico CNI releasing IP address ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Sep 4 19:48:20.505795 containerd[1543]: 2024-09-04 19:48:20.498 [INFO][6142] ipam_plugin.go 417: Releasing address using handleID ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" HandleID="k8s-pod-network.d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Workload="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0" Sep 4 19:48:20.505795 containerd[1543]: 2024-09-04 19:48:20.498 [INFO][6142] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 19:48:20.505795 containerd[1543]: 2024-09-04 19:48:20.498 [INFO][6142] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 19:48:20.505795 containerd[1543]: 2024-09-04 19:48:20.503 [WARNING][6142] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" HandleID="k8s-pod-network.d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Workload="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0" Sep 4 19:48:20.505795 containerd[1543]: 2024-09-04 19:48:20.503 [INFO][6142] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" HandleID="k8s-pod-network.d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Workload="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0" Sep 4 19:48:20.505795 containerd[1543]: 2024-09-04 19:48:20.504 [INFO][6142] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 19:48:20.505795 containerd[1543]: 2024-09-04 19:48:20.505 [INFO][6125] k8s.go 621: Teardown processing complete. ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Sep 4 19:48:20.506115 containerd[1543]: time="2024-09-04T19:48:20.505835119Z" level=info msg="TearDown network for sandbox \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\" successfully" Sep 4 19:48:20.506115 containerd[1543]: time="2024-09-04T19:48:20.505852148Z" level=info msg="StopPodSandbox for \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\" returns successfully" Sep 4 19:48:20.506156 containerd[1543]: time="2024-09-04T19:48:20.506126181Z" level=info msg="RemovePodSandbox for \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\"" Sep 4 19:48:20.506156 containerd[1543]: time="2024-09-04T19:48:20.506143128Z" level=info msg="Forcibly stopping sandbox \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\"" Sep 4 19:48:20.543244 containerd[1543]: 2024-09-04 19:48:20.526 [WARNING][6172] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b5ea2e4f-7558-45fa-ba9d-79653786d20f", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 47, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"bcd718dfbb743832db53bf901a371ce4b123f99eba9e7c54403c0f7c70a482b1", Pod:"csi-node-driver-x4kn5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.22.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali200b85081f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:48:20.543244 containerd[1543]: 2024-09-04 19:48:20.526 [INFO][6172] k8s.go 608: Cleaning up netns ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Sep 4 19:48:20.543244 containerd[1543]: 2024-09-04 19:48:20.526 [INFO][6172] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" iface="eth0" netns="" Sep 4 19:48:20.543244 containerd[1543]: 2024-09-04 19:48:20.526 [INFO][6172] k8s.go 615: Releasing IP address(es) ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Sep 4 19:48:20.543244 containerd[1543]: 2024-09-04 19:48:20.526 [INFO][6172] utils.go 188: Calico CNI releasing IP address ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Sep 4 19:48:20.543244 containerd[1543]: 2024-09-04 19:48:20.537 [INFO][6186] ipam_plugin.go 417: Releasing address using handleID ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" HandleID="k8s-pod-network.d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Workload="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0" Sep 4 19:48:20.543244 containerd[1543]: 2024-09-04 19:48:20.537 [INFO][6186] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 19:48:20.543244 containerd[1543]: 2024-09-04 19:48:20.537 [INFO][6186] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 19:48:20.543244 containerd[1543]: 2024-09-04 19:48:20.540 [WARNING][6186] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" HandleID="k8s-pod-network.d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Workload="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0" Sep 4 19:48:20.543244 containerd[1543]: 2024-09-04 19:48:20.540 [INFO][6186] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" HandleID="k8s-pod-network.d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Workload="ci--4054.1.0--a--2707fc1066-k8s-csi--node--driver--x4kn5-eth0" Sep 4 19:48:20.543244 containerd[1543]: 2024-09-04 19:48:20.542 [INFO][6186] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 19:48:20.543244 containerd[1543]: 2024-09-04 19:48:20.542 [INFO][6172] k8s.go 621: Teardown processing complete. ContainerID="d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3" Sep 4 19:48:20.543540 containerd[1543]: time="2024-09-04T19:48:20.543268218Z" level=info msg="TearDown network for sandbox \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\" successfully" Sep 4 19:48:20.544571 containerd[1543]: time="2024-09-04T19:48:20.544559156Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 19:48:20.544607 containerd[1543]: time="2024-09-04T19:48:20.544585619Z" level=info msg="RemovePodSandbox \"d2496c28b9c7ca85e243f4c6723db800a5872cad76eb7b9571c69c3b01283cf3\" returns successfully" Sep 4 19:48:20.544878 containerd[1543]: time="2024-09-04T19:48:20.544838466Z" level=info msg="StopPodSandbox for \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\"" Sep 4 19:48:20.579728 containerd[1543]: 2024-09-04 19:48:20.563 [WARNING][6212] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0", GenerateName:"calico-kube-controllers-5865df48f4-", Namespace:"calico-system", SelfLink:"", UID:"d8799ace-edd3-46ef-9fda-05184d5f7775", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 47, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5865df48f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90", Pod:"calico-kube-controllers-5865df48f4-bgn64", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic66b248a4b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:48:20.579728 containerd[1543]: 2024-09-04 19:48:20.563 [INFO][6212] k8s.go 608: Cleaning up netns ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Sep 4 19:48:20.579728 containerd[1543]: 2024-09-04 19:48:20.563 [INFO][6212] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" iface="eth0" netns="" Sep 4 19:48:20.579728 containerd[1543]: 2024-09-04 19:48:20.563 [INFO][6212] k8s.go 615: Releasing IP address(es) ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Sep 4 19:48:20.579728 containerd[1543]: 2024-09-04 19:48:20.563 [INFO][6212] utils.go 188: Calico CNI releasing IP address ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Sep 4 19:48:20.579728 containerd[1543]: 2024-09-04 19:48:20.573 [INFO][6227] ipam_plugin.go 417: Releasing address using handleID ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" HandleID="k8s-pod-network.870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Workload="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0" Sep 4 19:48:20.579728 containerd[1543]: 2024-09-04 19:48:20.573 [INFO][6227] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 19:48:20.579728 containerd[1543]: 2024-09-04 19:48:20.573 [INFO][6227] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 19:48:20.579728 containerd[1543]: 2024-09-04 19:48:20.577 [WARNING][6227] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" HandleID="k8s-pod-network.870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Workload="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0" Sep 4 19:48:20.579728 containerd[1543]: 2024-09-04 19:48:20.577 [INFO][6227] ipam_plugin.go 445: Releasing address using workloadID ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" HandleID="k8s-pod-network.870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Workload="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0" Sep 4 19:48:20.579728 containerd[1543]: 2024-09-04 19:48:20.578 [INFO][6227] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 19:48:20.579728 containerd[1543]: 2024-09-04 19:48:20.579 [INFO][6212] k8s.go 621: Teardown processing complete. ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Sep 4 19:48:20.580018 containerd[1543]: time="2024-09-04T19:48:20.579749968Z" level=info msg="TearDown network for sandbox \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\" successfully" Sep 4 19:48:20.580018 containerd[1543]: time="2024-09-04T19:48:20.579768510Z" level=info msg="StopPodSandbox for \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\" returns successfully" Sep 4 19:48:20.580018 containerd[1543]: time="2024-09-04T19:48:20.579909634Z" level=info msg="RemovePodSandbox for \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\"" Sep 4 19:48:20.580018 containerd[1543]: time="2024-09-04T19:48:20.579924043Z" level=info msg="Forcibly stopping sandbox \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\"" Sep 4 19:48:20.614120 containerd[1543]: 2024-09-04 19:48:20.598 [WARNING][6259] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0", GenerateName:"calico-kube-controllers-5865df48f4-", Namespace:"calico-system", SelfLink:"", UID:"d8799ace-edd3-46ef-9fda-05184d5f7775", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 47, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5865df48f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"9de8a5f912973671b84d10e2f774287c7bce2b2e874525f699863d2dd253bb90", Pod:"calico-kube-controllers-5865df48f4-bgn64", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic66b248a4b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:48:20.614120 containerd[1543]: 2024-09-04 19:48:20.598 [INFO][6259] k8s.go 608: Cleaning up netns ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Sep 4 19:48:20.614120 containerd[1543]: 2024-09-04 19:48:20.598 [INFO][6259] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" iface="eth0" netns="" Sep 4 19:48:20.614120 containerd[1543]: 2024-09-04 19:48:20.598 [INFO][6259] k8s.go 615: Releasing IP address(es) ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Sep 4 19:48:20.614120 containerd[1543]: 2024-09-04 19:48:20.598 [INFO][6259] utils.go 188: Calico CNI releasing IP address ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Sep 4 19:48:20.614120 containerd[1543]: 2024-09-04 19:48:20.607 [INFO][6273] ipam_plugin.go 417: Releasing address using handleID ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" HandleID="k8s-pod-network.870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Workload="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0" Sep 4 19:48:20.614120 containerd[1543]: 2024-09-04 19:48:20.607 [INFO][6273] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 19:48:20.614120 containerd[1543]: 2024-09-04 19:48:20.607 [INFO][6273] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 19:48:20.614120 containerd[1543]: 2024-09-04 19:48:20.611 [WARNING][6273] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" HandleID="k8s-pod-network.870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Workload="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0" Sep 4 19:48:20.614120 containerd[1543]: 2024-09-04 19:48:20.611 [INFO][6273] ipam_plugin.go 445: Releasing address using workloadID ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" HandleID="k8s-pod-network.870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Workload="ci--4054.1.0--a--2707fc1066-k8s-calico--kube--controllers--5865df48f4--bgn64-eth0" Sep 4 19:48:20.614120 containerd[1543]: 2024-09-04 19:48:20.612 [INFO][6273] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 19:48:20.614120 containerd[1543]: 2024-09-04 19:48:20.613 [INFO][6259] k8s.go 621: Teardown processing complete. ContainerID="870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963" Sep 4 19:48:20.614424 containerd[1543]: time="2024-09-04T19:48:20.614152240Z" level=info msg="TearDown network for sandbox \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\" successfully" Sep 4 19:48:20.615351 containerd[1543]: time="2024-09-04T19:48:20.615311142Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 19:48:20.615351 containerd[1543]: time="2024-09-04T19:48:20.615336344Z" level=info msg="RemovePodSandbox \"870623c4880a332b853f1e133c3b232c13aafeb282fe18f713477e05a01e7963\" returns successfully" Sep 4 19:48:20.615607 containerd[1543]: time="2024-09-04T19:48:20.615567586Z" level=info msg="StopPodSandbox for \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\"" Sep 4 19:48:20.652469 containerd[1543]: 2024-09-04 19:48:20.633 [WARNING][6301] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4011d5c2-f944-4edf-b08f-6548b0f18f8f", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d", Pod:"coredns-76f75df574-ggjf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2eb9c2951d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:48:20.652469 containerd[1543]: 2024-09-04 19:48:20.634 [INFO][6301] k8s.go 608: Cleaning up netns ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Sep 4 19:48:20.652469 containerd[1543]: 2024-09-04 19:48:20.634 [INFO][6301] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" iface="eth0" netns="" Sep 4 19:48:20.652469 containerd[1543]: 2024-09-04 19:48:20.634 [INFO][6301] k8s.go 615: Releasing IP address(es) ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Sep 4 19:48:20.652469 containerd[1543]: 2024-09-04 19:48:20.634 [INFO][6301] utils.go 188: Calico CNI releasing IP address ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Sep 4 19:48:20.652469 containerd[1543]: 2024-09-04 19:48:20.645 [INFO][6314] ipam_plugin.go 417: Releasing address using handleID ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" HandleID="k8s-pod-network.48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0" Sep 4 19:48:20.652469 containerd[1543]: 2024-09-04 19:48:20.645 [INFO][6314] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 19:48:20.652469 containerd[1543]: 2024-09-04 19:48:20.645 [INFO][6314] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 19:48:20.652469 containerd[1543]: 2024-09-04 19:48:20.649 [WARNING][6314] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" HandleID="k8s-pod-network.48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0" Sep 4 19:48:20.652469 containerd[1543]: 2024-09-04 19:48:20.649 [INFO][6314] ipam_plugin.go 445: Releasing address using workloadID ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" HandleID="k8s-pod-network.48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0" Sep 4 19:48:20.652469 containerd[1543]: 2024-09-04 19:48:20.650 [INFO][6314] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 19:48:20.652469 containerd[1543]: 2024-09-04 19:48:20.651 [INFO][6301] k8s.go 621: Teardown processing complete. ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Sep 4 19:48:20.652469 containerd[1543]: time="2024-09-04T19:48:20.652456818Z" level=info msg="TearDown network for sandbox \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\" successfully" Sep 4 19:48:20.652846 containerd[1543]: time="2024-09-04T19:48:20.652475615Z" level=info msg="StopPodSandbox for \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\" returns successfully" Sep 4 19:48:20.652846 containerd[1543]: time="2024-09-04T19:48:20.652769552Z" level=info msg="RemovePodSandbox for \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\"" Sep 4 19:48:20.652846 containerd[1543]: time="2024-09-04T19:48:20.652789328Z" level=info msg="Forcibly stopping sandbox \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\"" Sep 4 19:48:20.693440 containerd[1543]: 2024-09-04 19:48:20.674 [WARNING][6346] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4011d5c2-f944-4edf-b08f-6548b0f18f8f", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"bec8c8177138c2f68ca731e6dbf18b7bd56762c99127a49724a7b32ebc595a6d", Pod:"coredns-76f75df574-ggjf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2eb9c2951d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:48:20.693440 containerd[1543]: 2024-09-04 19:48:20.674 [INFO][6346] k8s.go 608: Cleaning up netns ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Sep 4 19:48:20.693440 containerd[1543]: 2024-09-04 19:48:20.674 [INFO][6346] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" iface="eth0" netns="" Sep 4 19:48:20.693440 containerd[1543]: 2024-09-04 19:48:20.674 [INFO][6346] k8s.go 615: Releasing IP address(es) ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Sep 4 19:48:20.693440 containerd[1543]: 2024-09-04 19:48:20.674 [INFO][6346] utils.go 188: Calico CNI releasing IP address ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Sep 4 19:48:20.693440 containerd[1543]: 2024-09-04 19:48:20.685 [INFO][6362] ipam_plugin.go 417: Releasing address using handleID ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" HandleID="k8s-pod-network.48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0" Sep 4 19:48:20.693440 containerd[1543]: 2024-09-04 19:48:20.686 [INFO][6362] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 19:48:20.693440 containerd[1543]: 2024-09-04 19:48:20.686 [INFO][6362] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 19:48:20.693440 containerd[1543]: 2024-09-04 19:48:20.690 [WARNING][6362] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" HandleID="k8s-pod-network.48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0" Sep 4 19:48:20.693440 containerd[1543]: 2024-09-04 19:48:20.690 [INFO][6362] ipam_plugin.go 445: Releasing address using workloadID ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" HandleID="k8s-pod-network.48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Workload="ci--4054.1.0--a--2707fc1066-k8s-coredns--76f75df574--ggjf5-eth0" Sep 4 19:48:20.693440 containerd[1543]: 2024-09-04 19:48:20.692 [INFO][6362] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 19:48:20.693440 containerd[1543]: 2024-09-04 19:48:20.692 [INFO][6346] k8s.go 621: Teardown processing complete. ContainerID="48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e" Sep 4 19:48:20.693440 containerd[1543]: time="2024-09-04T19:48:20.693435830Z" level=info msg="TearDown network for sandbox \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\" successfully" Sep 4 19:48:20.694879 containerd[1543]: time="2024-09-04T19:48:20.694865369Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 19:48:20.694919 containerd[1543]: time="2024-09-04T19:48:20.694891973Z" level=info msg="RemovePodSandbox \"48fa4bf535a3c0785aeffda4246d386052f248be7684f58c4d3ef9c1104bb87e\" returns successfully" Sep 4 19:48:23.377773 kubelet[2920]: I0904 19:48:23.375668 2920 topology_manager.go:215] "Topology Admit Handler" podUID="79ed44da-ec3b-40b2-9130-caf439b6c944" podNamespace="calico-apiserver" podName="calico-apiserver-74c6c654dc-sbf6t" Sep 4 19:48:23.378946 kubelet[2920]: I0904 19:48:23.378921 2920 topology_manager.go:215] "Topology Admit Handler" podUID="197c1636-ec5a-422f-80d1-e91c33de4fe8" podNamespace="calico-apiserver" podName="calico-apiserver-74c6c654dc-bj9wc" Sep 4 19:48:23.382675 systemd[1]: Created slice kubepods-besteffort-pod79ed44da_ec3b_40b2_9130_caf439b6c944.slice - libcontainer container kubepods-besteffort-pod79ed44da_ec3b_40b2_9130_caf439b6c944.slice. Sep 4 19:48:23.386462 systemd[1]: Created slice kubepods-besteffort-pod197c1636_ec5a_422f_80d1_e91c33de4fe8.slice - libcontainer container kubepods-besteffort-pod197c1636_ec5a_422f_80d1_e91c33de4fe8.slice. 
Sep 4 19:48:23.465557 kubelet[2920]: I0904 19:48:23.465505 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/197c1636-ec5a-422f-80d1-e91c33de4fe8-calico-apiserver-certs\") pod \"calico-apiserver-74c6c654dc-bj9wc\" (UID: \"197c1636-ec5a-422f-80d1-e91c33de4fe8\") " pod="calico-apiserver/calico-apiserver-74c6c654dc-bj9wc" Sep 4 19:48:23.465557 kubelet[2920]: I0904 19:48:23.465541 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/79ed44da-ec3b-40b2-9130-caf439b6c944-calico-apiserver-certs\") pod \"calico-apiserver-74c6c654dc-sbf6t\" (UID: \"79ed44da-ec3b-40b2-9130-caf439b6c944\") " pod="calico-apiserver/calico-apiserver-74c6c654dc-sbf6t" Sep 4 19:48:23.465682 kubelet[2920]: I0904 19:48:23.465631 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dskts\" (UniqueName: \"kubernetes.io/projected/79ed44da-ec3b-40b2-9130-caf439b6c944-kube-api-access-dskts\") pod \"calico-apiserver-74c6c654dc-sbf6t\" (UID: \"79ed44da-ec3b-40b2-9130-caf439b6c944\") " pod="calico-apiserver/calico-apiserver-74c6c654dc-sbf6t" Sep 4 19:48:23.465682 kubelet[2920]: I0904 19:48:23.465668 2920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hd2g\" (UniqueName: \"kubernetes.io/projected/197c1636-ec5a-422f-80d1-e91c33de4fe8-kube-api-access-7hd2g\") pod \"calico-apiserver-74c6c654dc-bj9wc\" (UID: \"197c1636-ec5a-422f-80d1-e91c33de4fe8\") " pod="calico-apiserver/calico-apiserver-74c6c654dc-bj9wc" Sep 4 19:48:23.566819 kubelet[2920]: E0904 19:48:23.566695 2920 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 4 19:48:23.567056 kubelet[2920]: E0904 19:48:23.566695 2920 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 4 19:48:23.567056 kubelet[2920]: E0904 19:48:23.566891 2920 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79ed44da-ec3b-40b2-9130-caf439b6c944-calico-apiserver-certs podName:79ed44da-ec3b-40b2-9130-caf439b6c944 nodeName:}" failed. No retries permitted until 2024-09-04 19:48:24.066835986 +0000 UTC m=+63.724902018 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/79ed44da-ec3b-40b2-9130-caf439b6c944-calico-apiserver-certs") pod "calico-apiserver-74c6c654dc-sbf6t" (UID: "79ed44da-ec3b-40b2-9130-caf439b6c944") : secret "calico-apiserver-certs" not found Sep 4 19:48:23.567056 kubelet[2920]: E0904 19:48:23.567003 2920 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/197c1636-ec5a-422f-80d1-e91c33de4fe8-calico-apiserver-certs podName:197c1636-ec5a-422f-80d1-e91c33de4fe8 nodeName:}" failed. No retries permitted until 2024-09-04 19:48:24.06695959 +0000 UTC m=+63.725025618 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/197c1636-ec5a-422f-80d1-e91c33de4fe8-calico-apiserver-certs") pod "calico-apiserver-74c6c654dc-bj9wc" (UID: "197c1636-ec5a-422f-80d1-e91c33de4fe8") : secret "calico-apiserver-certs" not found Sep 4 19:48:24.286746 containerd[1543]: time="2024-09-04T19:48:24.286613372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c6c654dc-sbf6t,Uid:79ed44da-ec3b-40b2-9130-caf439b6c944,Namespace:calico-apiserver,Attempt:0,}" Sep 4 19:48:24.288975 containerd[1543]: time="2024-09-04T19:48:24.288960827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c6c654dc-bj9wc,Uid:197c1636-ec5a-422f-80d1-e91c33de4fe8,Namespace:calico-apiserver,Attempt:0,}" Sep 4 19:48:24.345836 systemd-networkd[1335]: cali7cf055c6e43: Link UP Sep 4 19:48:24.345955 systemd-networkd[1335]: cali7cf055c6e43: Gained carrier Sep 4 19:48:24.350701 containerd[1543]: 2024-09-04 19:48:24.307 [INFO][6413] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--sbf6t-eth0 calico-apiserver-74c6c654dc- calico-apiserver 79ed44da-ec3b-40b2-9130-caf439b6c944 822 0 2024-09-04 19:48:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74c6c654dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4054.1.0-a-2707fc1066 calico-apiserver-74c6c654dc-sbf6t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7cf055c6e43 [] []}} ContainerID="f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5" Namespace="calico-apiserver" Pod="calico-apiserver-74c6c654dc-sbf6t" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--sbf6t-" Sep 4 19:48:24.350701 containerd[1543]: 2024-09-04 19:48:24.307 [INFO][6413] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5" Namespace="calico-apiserver" Pod="calico-apiserver-74c6c654dc-sbf6t" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--sbf6t-eth0" Sep 4 19:48:24.350701 containerd[1543]: 2024-09-04 19:48:24.323 [INFO][6460] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5" HandleID="k8s-pod-network.f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5" Workload="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--sbf6t-eth0" Sep 4 19:48:24.350701 containerd[1543]: 2024-09-04 19:48:24.328 [INFO][6460] ipam_plugin.go 270: Auto assigning IP ContainerID="f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5" HandleID="k8s-pod-network.f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5" Workload="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--sbf6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002acdf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4054.1.0-a-2707fc1066", "pod":"calico-apiserver-74c6c654dc-sbf6t", "timestamp":"2024-09-04 19:48:24.32332981 +0000 UTC"}, Hostname:"ci-4054.1.0-a-2707fc1066", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 19:48:24.350701 containerd[1543]: 2024-09-04 19:48:24.328 [INFO][6460] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 19:48:24.350701 containerd[1543]: 2024-09-04 19:48:24.328 [INFO][6460] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 19:48:24.350701 containerd[1543]: 2024-09-04 19:48:24.329 [INFO][6460] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-2707fc1066' Sep 4 19:48:24.350701 containerd[1543]: 2024-09-04 19:48:24.330 [INFO][6460] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:24.350701 containerd[1543]: 2024-09-04 19:48:24.332 [INFO][6460] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:24.350701 containerd[1543]: 2024-09-04 19:48:24.335 [INFO][6460] ipam.go 489: Trying affinity for 192.168.22.64/26 host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:24.350701 containerd[1543]: 2024-09-04 19:48:24.336 [INFO][6460] ipam.go 155: Attempting to load block cidr=192.168.22.64/26 host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:24.350701 containerd[1543]: 2024-09-04 19:48:24.338 [INFO][6460] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:24.350701 containerd[1543]: 2024-09-04 19:48:24.338 [INFO][6460] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:24.350701 containerd[1543]: 2024-09-04 19:48:24.339 [INFO][6460] ipam.go 1685: Creating new handle: k8s-pod-network.f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5 Sep 4 19:48:24.350701 containerd[1543]: 2024-09-04 19:48:24.341 [INFO][6460] ipam.go 1203: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:24.350701 containerd[1543]: 2024-09-04 19:48:24.343 [INFO][6460] ipam.go 1216: Successfully claimed IPs: [192.168.22.69/26] block=192.168.22.64/26 handle="k8s-pod-network.f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:24.350701 containerd[1543]: 2024-09-04 19:48:24.343 [INFO][6460] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.22.69/26] handle="k8s-pod-network.f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:24.350701 containerd[1543]: 2024-09-04 19:48:24.343 [INFO][6460] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 19:48:24.350701 containerd[1543]: 2024-09-04 19:48:24.343 [INFO][6460] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.22.69/26] IPv6=[] ContainerID="f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5" HandleID="k8s-pod-network.f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5" Workload="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--sbf6t-eth0" Sep 4 19:48:24.351139 containerd[1543]: 2024-09-04 19:48:24.345 [INFO][6413] k8s.go 386: Populated endpoint ContainerID="f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5" Namespace="calico-apiserver" Pod="calico-apiserver-74c6c654dc-sbf6t" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--sbf6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--sbf6t-eth0", GenerateName:"calico-apiserver-74c6c654dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"79ed44da-ec3b-40b2-9130-caf439b6c944", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 48, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c6c654dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"", Pod:"calico-apiserver-74c6c654dc-sbf6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cf055c6e43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:48:24.351139 containerd[1543]: 2024-09-04 19:48:24.345 [INFO][6413] k8s.go 387: Calico CNI using IPs: [192.168.22.69/32] ContainerID="f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5" Namespace="calico-apiserver" Pod="calico-apiserver-74c6c654dc-sbf6t" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--sbf6t-eth0" Sep 4 19:48:24.351139 containerd[1543]: 2024-09-04 19:48:24.345 [INFO][6413] dataplane_linux.go 68: Setting the host side veth name to cali7cf055c6e43 ContainerID="f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5" Namespace="calico-apiserver" Pod="calico-apiserver-74c6c654dc-sbf6t" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--sbf6t-eth0" Sep 4 19:48:24.351139 containerd[1543]: 2024-09-04 19:48:24.345 [INFO][6413] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5" Namespace="calico-apiserver" Pod="calico-apiserver-74c6c654dc-sbf6t" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--sbf6t-eth0" Sep 4 19:48:24.351139 containerd[1543]: 2024-09-04 19:48:24.346 [INFO][6413] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5" Namespace="calico-apiserver" Pod="calico-apiserver-74c6c654dc-sbf6t" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--sbf6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--sbf6t-eth0", GenerateName:"calico-apiserver-74c6c654dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"79ed44da-ec3b-40b2-9130-caf439b6c944", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 48, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c6c654dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5", Pod:"calico-apiserver-74c6c654dc-sbf6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cf055c6e43", MAC:"be:23:b4:66:7c:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:48:24.351139 containerd[1543]: 2024-09-04 19:48:24.349 [INFO][6413] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5" Namespace="calico-apiserver" Pod="calico-apiserver-74c6c654dc-sbf6t" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--sbf6t-eth0" Sep 4 19:48:24.360853 containerd[1543]: time="2024-09-04T19:48:24.360809973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 19:48:24.360853 containerd[1543]: time="2024-09-04T19:48:24.360841964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 19:48:24.360953 containerd[1543]: time="2024-09-04T19:48:24.360852930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:48:24.360953 containerd[1543]: time="2024-09-04T19:48:24.360924564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:48:24.360896 systemd-networkd[1335]: cali0ddc402118d: Link UP Sep 4 19:48:24.361012 systemd-networkd[1335]: cali0ddc402118d: Gained carrier Sep 4 19:48:24.365638 containerd[1543]: 2024-09-04 19:48:24.308 [INFO][6424] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--bj9wc-eth0 calico-apiserver-74c6c654dc- calico-apiserver 197c1636-ec5a-422f-80d1-e91c33de4fe8 824 0 2024-09-04 19:48:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74c6c654dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4054.1.0-a-2707fc1066 calico-apiserver-74c6c654dc-bj9wc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0ddc402118d [] []}} ContainerID="a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025" Namespace="calico-apiserver" Pod="calico-apiserver-74c6c654dc-bj9wc" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--bj9wc-" Sep 4 19:48:24.365638 containerd[1543]: 2024-09-04 19:48:24.309 [INFO][6424] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025" Namespace="calico-apiserver" Pod="calico-apiserver-74c6c654dc-bj9wc" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--bj9wc-eth0" Sep 4 19:48:24.365638 containerd[1543]: 2024-09-04 19:48:24.323 [INFO][6465] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025" HandleID="k8s-pod-network.a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025" Workload="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--bj9wc-eth0" Sep 4 19:48:24.365638 containerd[1543]: 2024-09-04 19:48:24.329 [INFO][6465] ipam_plugin.go 270: Auto assigning IP ContainerID="a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025" HandleID="k8s-pod-network.a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025" Workload="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--bj9wc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000391150), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4054.1.0-a-2707fc1066", "pod":"calico-apiserver-74c6c654dc-bj9wc", "timestamp":"2024-09-04 19:48:24.323846079 +0000 UTC"}, Hostname:"ci-4054.1.0-a-2707fc1066", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 19:48:24.365638 containerd[1543]: 2024-09-04 19:48:24.329 [INFO][6465] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 19:48:24.365638 containerd[1543]: 2024-09-04 19:48:24.343 [INFO][6465] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 19:48:24.365638 containerd[1543]: 2024-09-04 19:48:24.344 [INFO][6465] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-2707fc1066' Sep 4 19:48:24.365638 containerd[1543]: 2024-09-04 19:48:24.345 [INFO][6465] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:24.365638 containerd[1543]: 2024-09-04 19:48:24.347 [INFO][6465] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:24.365638 containerd[1543]: 2024-09-04 19:48:24.350 [INFO][6465] ipam.go 489: Trying affinity for 192.168.22.64/26 host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:24.365638 containerd[1543]: 2024-09-04 19:48:24.351 [INFO][6465] ipam.go 155: Attempting to load block cidr=192.168.22.64/26 host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:24.365638 containerd[1543]: 2024-09-04 19:48:24.353 [INFO][6465] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:24.365638 containerd[1543]: 2024-09-04 19:48:24.353 [INFO][6465] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:24.365638 containerd[1543]: 2024-09-04 19:48:24.354 [INFO][6465] ipam.go 1685: Creating new handle: k8s-pod-network.a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025 Sep 4 19:48:24.365638 containerd[1543]: 2024-09-04 19:48:24.355 [INFO][6465] ipam.go 1203: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:24.365638 containerd[1543]: 2024-09-04 19:48:24.358 [INFO][6465] ipam.go 1216: Successfully claimed IPs: [192.168.22.70/26] block=192.168.22.64/26 handle="k8s-pod-network.a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:24.365638 containerd[1543]: 2024-09-04 19:48:24.359 [INFO][6465] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.22.70/26] handle="k8s-pod-network.a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025" host="ci-4054.1.0-a-2707fc1066" Sep 4 19:48:24.365638 containerd[1543]: 2024-09-04 19:48:24.359 [INFO][6465] ipam_plugin.go 379: Released host-wide IPAM lock. 
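Both IPAM passes above (first for calico-apiserver-74c6c654dc-sbf6t, then for -bj9wc) follow the same shape: take the host-wide IPAM lock, confirm this node's affinity for the 192.168.22.64/26 block, and hand out the next unused address, which is how the two pods end up with 192.168.22.69 and 192.168.22.70. Below is a minimal Go sketch of that "next free address in an affine block" step; it is an illustration only, not Calico's ipam.go, and the set of already-used addresses is assumed.

package main

import (
	"fmt"
	"net/netip"
)

// nextFreeAddr walks a block in address order and returns the first address
// not already marked as used, mimicking the "Auto-assigned 1 out of 1 IPv4s"
// step in the log above.
func nextFreeAddr(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.22.64/26") // the block this node holds an affinity for

	// Assumed to be taken already by earlier workloads on the node (illustrative).
	used := map[netip.Addr]bool{}
	for _, s := range []string{"192.168.22.64", "192.168.22.65", "192.168.22.66", "192.168.22.67", "192.168.22.68"} {
		used[netip.MustParseAddr(s)] = true
	}

	for _, pod := range []string{"calico-apiserver-74c6c654dc-sbf6t", "calico-apiserver-74c6c654dc-bj9wc"} {
		if a, ok := nextFreeAddr(block, used); ok {
			used[a] = true
			fmt.Printf("%s -> %s/32\n", pod, a) // .69 then .70, as in the log
		}
	}
}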
Sep 4 19:48:24.365638 containerd[1543]: 2024-09-04 19:48:24.359 [INFO][6465] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.22.70/26] IPv6=[] ContainerID="a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025" HandleID="k8s-pod-network.a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025" Workload="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--bj9wc-eth0" Sep 4 19:48:24.366100 containerd[1543]: 2024-09-04 19:48:24.360 [INFO][6424] k8s.go 386: Populated endpoint ContainerID="a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025" Namespace="calico-apiserver" Pod="calico-apiserver-74c6c654dc-bj9wc" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--bj9wc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--bj9wc-eth0", GenerateName:"calico-apiserver-74c6c654dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"197c1636-ec5a-422f-80d1-e91c33de4fe8", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 48, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c6c654dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"", Pod:"calico-apiserver-74c6c654dc-bj9wc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ddc402118d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:48:24.366100 containerd[1543]: 2024-09-04 19:48:24.360 [INFO][6424] k8s.go 387: Calico CNI using IPs: [192.168.22.70/32] ContainerID="a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025" Namespace="calico-apiserver" Pod="calico-apiserver-74c6c654dc-bj9wc" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--bj9wc-eth0" Sep 4 19:48:24.366100 containerd[1543]: 2024-09-04 19:48:24.360 [INFO][6424] dataplane_linux.go 68: Setting the host side veth name to cali0ddc402118d ContainerID="a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025" Namespace="calico-apiserver" Pod="calico-apiserver-74c6c654dc-bj9wc" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--bj9wc-eth0" Sep 4 19:48:24.366100 containerd[1543]: 2024-09-04 19:48:24.361 [INFO][6424] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025" Namespace="calico-apiserver" Pod="calico-apiserver-74c6c654dc-bj9wc" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--bj9wc-eth0" Sep 4 19:48:24.366100 containerd[1543]: 2024-09-04 19:48:24.361 [INFO][6424] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025" Namespace="calico-apiserver" Pod="calico-apiserver-74c6c654dc-bj9wc" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--bj9wc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--bj9wc-eth0", GenerateName:"calico-apiserver-74c6c654dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"197c1636-ec5a-422f-80d1-e91c33de4fe8", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 19, 48, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c6c654dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-2707fc1066", ContainerID:"a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025", Pod:"calico-apiserver-74c6c654dc-bj9wc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ddc402118d", MAC:"fa:27:59:1b:4d:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 19:48:24.366100 containerd[1543]: 2024-09-04 19:48:24.364 [INFO][6424] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025" Namespace="calico-apiserver" Pod="calico-apiserver-74c6c654dc-bj9wc" WorkloadEndpoint="ci--4054.1.0--a--2707fc1066-k8s-calico--apiserver--74c6c654dc--bj9wc-eth0" Sep 4 19:48:24.374610 containerd[1543]: time="2024-09-04T19:48:24.374395540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 19:48:24.374610 containerd[1543]: time="2024-09-04T19:48:24.374598709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 19:48:24.374610 containerd[1543]: time="2024-09-04T19:48:24.374607192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:48:24.374716 containerd[1543]: time="2024-09-04T19:48:24.374652162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 19:48:24.378377 systemd[1]: Started cri-containerd-f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5.scope - libcontainer container f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5. Sep 4 19:48:24.381152 systemd[1]: Started cri-containerd-a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025.scope - libcontainer container a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025. 
Sep 4 19:48:24.402035 containerd[1543]: time="2024-09-04T19:48:24.402012825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c6c654dc-sbf6t,Uid:79ed44da-ec3b-40b2-9130-caf439b6c944,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5\"" Sep 4 19:48:24.402787 containerd[1543]: time="2024-09-04T19:48:24.402772352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 19:48:24.403150 containerd[1543]: time="2024-09-04T19:48:24.403136625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c6c654dc-bj9wc,Uid:197c1636-ec5a-422f-80d1-e91c33de4fe8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025\"" Sep 4 19:48:25.898393 systemd-networkd[1335]: cali0ddc402118d: Gained IPv6LL Sep 4 19:48:26.028767 containerd[1543]: time="2024-09-04T19:48:26.028714927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:48:26.028999 containerd[1543]: time="2024-09-04T19:48:26.028943194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Sep 4 19:48:26.029318 containerd[1543]: time="2024-09-04T19:48:26.029276676Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:48:26.030320 containerd[1543]: time="2024-09-04T19:48:26.030266433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:48:26.030719 containerd[1543]: time="2024-09-04T19:48:26.030679299Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 1.627888177s" Sep 4 19:48:26.030719 containerd[1543]: time="2024-09-04T19:48:26.030693618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 19:48:26.031011 containerd[1543]: time="2024-09-04T19:48:26.030976736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 19:48:26.031593 containerd[1543]: time="2024-09-04T19:48:26.031576325Z" level=info msg="CreateContainer within sandbox \"f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 19:48:26.035449 containerd[1543]: time="2024-09-04T19:48:26.035432500Z" level=info msg="CreateContainer within sandbox \"f7dafd11db0ef4a104b57772fbca84cab061505e0e1d68c003b17887e878ecb5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"56e07e5c47fb5cdd36a1345ac0c41d647341a45ca363aaae5effede330d896ab\"" Sep 4 19:48:26.035700 containerd[1543]: time="2024-09-04T19:48:26.035686274Z" level=info msg="StartContainer for \"56e07e5c47fb5cdd36a1345ac0c41d647341a45ca363aaae5effede330d896ab\"" Sep 4 19:48:26.063491 systemd[1]: Started 
cri-containerd-56e07e5c47fb5cdd36a1345ac0c41d647341a45ca363aaae5effede330d896ab.scope - libcontainer container 56e07e5c47fb5cdd36a1345ac0c41d647341a45ca363aaae5effede330d896ab. Sep 4 19:48:26.086180 containerd[1543]: time="2024-09-04T19:48:26.086156593Z" level=info msg="StartContainer for \"56e07e5c47fb5cdd36a1345ac0c41d647341a45ca363aaae5effede330d896ab\" returns successfully" Sep 4 19:48:26.089330 systemd-networkd[1335]: cali7cf055c6e43: Gained IPv6LL Sep 4 19:48:26.380954 containerd[1543]: time="2024-09-04T19:48:26.380902243Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 19:48:26.381021 containerd[1543]: time="2024-09-04T19:48:26.380998782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Sep 4 19:48:26.382735 containerd[1543]: time="2024-09-04T19:48:26.382689577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 351.695485ms" Sep 4 19:48:26.382735 containerd[1543]: time="2024-09-04T19:48:26.382711379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 19:48:26.383893 containerd[1543]: time="2024-09-04T19:48:26.383863534Z" level=info msg="CreateContainer within sandbox \"a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 19:48:26.389192 containerd[1543]: time="2024-09-04T19:48:26.389153481Z" level=info msg="CreateContainer within sandbox \"a61ee11a30dec1610e95eb063e4b571e0b417ba7fb2533994467e93175487025\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6003f556b24cf2317de5e4fe73deab04f45a42c32934c1d91edab8450ea931c9\"" Sep 4 19:48:26.389546 containerd[1543]: time="2024-09-04T19:48:26.389509897Z" level=info msg="StartContainer for \"6003f556b24cf2317de5e4fe73deab04f45a42c32934c1d91edab8450ea931c9\"" Sep 4 19:48:26.411398 systemd[1]: Started cri-containerd-6003f556b24cf2317de5e4fe73deab04f45a42c32934c1d91edab8450ea931c9.scope - libcontainer container 6003f556b24cf2317de5e4fe73deab04f45a42c32934c1d91edab8450ea931c9. 
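For context on the PullImage → CreateContainer → StartContainer sequence containerd logs above, here is a minimal sketch of driving containerd's Go client directly against the same image. The socket path, container ID and snapshot name are assumptions, and kubelet of course goes through the CRI plugin rather than this client API; this is only the general shape of pull, create and start.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull the same image the log shows kubelet pulling.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.28.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container from the image and start its task.
	container, err := client.NewContainer(ctx, "apiserver-demo",
		containerd.WithNewSnapshot("apiserver-demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("container started:", container.ID())
}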
Sep 4 19:48:26.438706 containerd[1543]: time="2024-09-04T19:48:26.438659707Z" level=info msg="StartContainer for \"6003f556b24cf2317de5e4fe73deab04f45a42c32934c1d91edab8450ea931c9\" returns successfully" Sep 4 19:48:26.621443 kubelet[2920]: I0904 19:48:26.621426 2920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74c6c654dc-bj9wc" podStartSLOduration=1.642074237 podStartE2EDuration="3.621398637s" podCreationTimestamp="2024-09-04 19:48:23 +0000 UTC" firstStartedPulling="2024-09-04 19:48:24.403558675 +0000 UTC m=+64.061624633" lastFinishedPulling="2024-09-04 19:48:26.382883072 +0000 UTC m=+66.040949033" observedRunningTime="2024-09-04 19:48:26.620942946 +0000 UTC m=+66.279008905" watchObservedRunningTime="2024-09-04 19:48:26.621398637 +0000 UTC m=+66.279464592" Sep 4 19:48:26.625430 kubelet[2920]: I0904 19:48:26.625409 2920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74c6c654dc-sbf6t" podStartSLOduration=1.9970921019999999 podStartE2EDuration="3.625372955s" podCreationTimestamp="2024-09-04 19:48:23 +0000 UTC" firstStartedPulling="2024-09-04 19:48:24.402611236 +0000 UTC m=+64.060677194" lastFinishedPulling="2024-09-04 19:48:26.030892088 +0000 UTC m=+65.688958047" observedRunningTime="2024-09-04 19:48:26.625172838 +0000 UTC m=+66.283238796" watchObservedRunningTime="2024-09-04 19:48:26.625372955 +0000 UTC m=+66.283438915" Sep 4 19:50:23.254418 systemd[1]: Started sshd@7-147.75.90.143:22-41.193.50.163:39556.service - OpenSSH per-connection server daemon (41.193.50.163:39556). Sep 4 19:50:25.549974 sshd[7075]: Invalid user centos from 41.193.50.163 port 39556 Sep 4 19:50:26.146018 sshd[7077]: pam_faillock(sshd:auth): User unknown Sep 4 19:50:26.148725 sshd[7075]: Postponed keyboard-interactive for invalid user centos from 41.193.50.163 port 39556 ssh2 [preauth] Sep 4 19:50:26.695472 sshd[7077]: pam_unix(sshd:auth): check pass; user unknown Sep 4 19:50:26.695538 sshd[7077]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=41.193.50.163 Sep 4 19:50:26.696813 sshd[7077]: pam_faillock(sshd:auth): User unknown Sep 4 19:50:28.776579 sshd[7075]: PAM: Permission denied for illegal user centos from 41.193.50.163 Sep 4 19:50:28.777568 sshd[7075]: Failed keyboard-interactive/pam for invalid user centos from 41.193.50.163 port 39556 ssh2 Sep 4 19:50:29.443910 sshd[7075]: Connection closed by invalid user centos 41.193.50.163 port 39556 [preauth] Sep 4 19:50:29.447396 systemd[1]: sshd@7-147.75.90.143:22-41.193.50.163:39556.service: Deactivated successfully. Sep 4 19:50:41.934724 systemd[1]: Started sshd@8-147.75.90.143:22-14.155.94.189:21687.service - OpenSSH per-connection server daemon (14.155.94.189:21687). 
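The kubelet pod_startup_latency_tracker entries a little earlier report two numbers per pod: podStartE2EDuration is simply observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted. A short Go check that re-derives both figures for calico-apiserver-74c6c654dc-bj9wc from the timestamps in the log; this is just the arithmetic, not kubelet code.

package main

import (
	"fmt"
	"time"
)

// The layout matches how the timestamps are printed in the kubelet log above.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-09-04 19:48:23 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2024-09-04 19:48:24.403558675 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2024-09-04 19:48:26.382883072 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2024-09-04 19:48:26.621398637 +0000 UTC")   // observedRunningTime

	e2e := running.Sub(created)          // 3.621398637s, the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ~1.642074s, matching the logged podStartSLOduration

	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}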
Sep 4 19:50:44.734685 sshd[7108]: Invalid user test1 from 14.155.94.189 port 21687 Sep 4 19:50:45.311446 sshd[7115]: pam_faillock(sshd:auth): User unknown Sep 4 19:50:45.314543 sshd[7108]: Postponed keyboard-interactive for invalid user test1 from 14.155.94.189 port 21687 ssh2 [preauth] Sep 4 19:50:45.995740 sshd[7115]: pam_unix(sshd:auth): check pass; user unknown Sep 4 19:50:45.995837 sshd[7115]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.155.94.189 Sep 4 19:50:45.997258 sshd[7115]: pam_faillock(sshd:auth): User unknown Sep 4 19:50:48.353150 sshd[7108]: PAM: Permission denied for illegal user test1 from 14.155.94.189 Sep 4 19:50:48.354452 sshd[7108]: Failed keyboard-interactive/pam for invalid user test1 from 14.155.94.189 port 21687 ssh2 Sep 4 19:50:49.093537 sshd[7108]: Connection closed by invalid user test1 14.155.94.189 port 21687 [preauth] Sep 4 19:50:49.096931 systemd[1]: sshd@8-147.75.90.143:22-14.155.94.189:21687.service: Deactivated successfully. Sep 4 19:52:35.084458 systemd[1]: Started sshd@9-147.75.90.143:22-65.20.174.63:49511.service - OpenSSH per-connection server daemon (65.20.174.63:49511). Sep 4 19:52:37.078334 sshd[7417]: Invalid user supervisor from 65.20.174.63 port 49511 Sep 4 19:52:37.591266 sshd[7419]: pam_faillock(sshd:auth): User unknown Sep 4 19:52:37.595290 sshd[7417]: Postponed keyboard-interactive for invalid user supervisor from 65.20.174.63 port 49511 ssh2 [preauth] Sep 4 19:52:38.100030 sshd[7419]: pam_unix(sshd:auth): check pass; user unknown Sep 4 19:52:38.100099 sshd[7419]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=65.20.174.63 Sep 4 19:52:38.101436 sshd[7419]: pam_faillock(sshd:auth): User unknown Sep 4 19:52:39.834702 sshd[7417]: PAM: Permission denied for illegal user supervisor from 65.20.174.63 Sep 4 19:52:39.835604 sshd[7417]: Failed keyboard-interactive/pam for invalid user supervisor from 65.20.174.63 port 49511 ssh2 Sep 4 19:52:40.206188 sshd[7417]: Connection closed by invalid user supervisor 65.20.174.63 port 49511 [preauth] Sep 4 19:52:40.208352 systemd[1]: sshd@9-147.75.90.143:22-65.20.174.63:49511.service: Deactivated successfully. Sep 4 19:52:45.315995 systemd[1]: Started sshd@10-147.75.90.143:22-194.169.175.37:51650.service - OpenSSH per-connection server daemon (194.169.175.37:51650). Sep 4 19:52:46.783594 sshd[7448]: Connection closed by authenticating user root 194.169.175.37 port 51650 [preauth] Sep 4 19:52:46.786747 systemd[1]: sshd@10-147.75.90.143:22-194.169.175.37:51650.service: Deactivated successfully. Sep 4 19:58:41.283432 update_engine[1530]: I0904 19:58:41.283319 1530 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 4 19:58:41.283432 update_engine[1530]: I0904 19:58:41.283399 1530 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 4 19:58:41.285704 update_engine[1530]: I0904 19:58:41.283782 1530 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 4 19:58:41.285704 update_engine[1530]: I0904 19:58:41.285005 1530 omaha_request_params.cc:62] Current group set to beta Sep 4 19:58:41.285704 update_engine[1530]: I0904 19:58:41.285247 1530 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 4 19:58:41.285704 update_engine[1530]: I0904 19:58:41.285266 1530 update_attempter.cc:643] Scheduling an action processor start. 
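The rejected connections above (invalid users centos, test1 and supervisor, plus a straight root attempt) are ordinary Internet background noise against port 22. A throwaway Go filter in the spirit of these entries, counting "Invalid user" probes per account and source address from journal output on stdin; the line format is taken from the sshd messages above, everything else is illustrative. Fed with something like the output of "journalctl _COMM=sshd", it would report one probe each from 41.193.50.163, 14.155.94.189 and 65.20.174.63.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches rejected-login lines like the ones above, e.g.
//   sshd[7075]: Invalid user centos from 41.193.50.163 port 39556
var invalidUser = regexp.MustCompile(`Invalid user (\S+) from (\S+) port \d+`)

func main() {
	probes := map[string]int{} // "user@source" -> number of matching lines

	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if m := invalidUser.FindStringSubmatch(sc.Text()); m != nil {
			probes[m[1]+"@"+m[2]]++
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	for key, n := range probes {
		fmt.Printf("%-40s %d line(s)\n", key, n)
	}
}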
Sep 4 19:58:41.285704 update_engine[1530]: I0904 19:58:41.285296 1530 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 4 19:58:41.285704 update_engine[1530]: I0904 19:58:41.285365 1530 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 4 19:58:41.285704 update_engine[1530]: I0904 19:58:41.285492 1530 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 4 19:58:41.285704 update_engine[1530]: I0904 19:58:41.285499 1530 omaha_request_action.cc:272] Request: Sep 4 19:58:41.285704 update_engine[1530]: Sep 4 19:58:41.285704 update_engine[1530]: Sep 4 19:58:41.285704 update_engine[1530]: Sep 4 19:58:41.285704 update_engine[1530]: Sep 4 19:58:41.285704 update_engine[1530]: Sep 4 19:58:41.285704 update_engine[1530]: Sep 4 19:58:41.285704 update_engine[1530]: Sep 4 19:58:41.285704 update_engine[1530]: Sep 4 19:58:41.285704 update_engine[1530]: I0904 19:58:41.285503 1530 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 19:58:41.286262 locksmithd[1564]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 4 19:58:41.286925 update_engine[1530]: I0904 19:58:41.286880 1530 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 19:58:41.287192 update_engine[1530]: I0904 19:58:41.287154 1530 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 19:58:41.420721 update_engine[1530]: E0904 19:58:41.420611 1530 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 19:58:41.420976 update_engine[1530]: I0904 19:58:41.420776 1530 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 4 19:58:51.192531 update_engine[1530]: I0904 19:58:51.192315 1530 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 19:58:51.193494 update_engine[1530]: I0904 19:58:51.192914 1530 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 19:58:51.193494 update_engine[1530]: I0904 19:58:51.193444 1530 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 19:58:51.194292 update_engine[1530]: E0904 19:58:51.194177 1530 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 19:58:51.194492 update_engine[1530]: I0904 19:58:51.194329 1530 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 4 19:59:01.192588 update_engine[1530]: I0904 19:59:01.192466 1530 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 19:59:01.193589 update_engine[1530]: I0904 19:59:01.192977 1530 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 19:59:01.193589 update_engine[1530]: I0904 19:59:01.193513 1530 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 19:59:01.194302 update_engine[1530]: E0904 19:59:01.194192 1530 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 19:59:01.194485 update_engine[1530]: I0904 19:59:01.194315 1530 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 4 19:59:11.192568 update_engine[1530]: I0904 19:59:11.192445 1530 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 19:59:11.193572 update_engine[1530]: I0904 19:59:11.192970 1530 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 19:59:11.193572 update_engine[1530]: I0904 19:59:11.193526 1530 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 4 19:59:11.194390 update_engine[1530]: E0904 19:59:11.194301 1530 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 19:59:11.194606 update_engine[1530]: I0904 19:59:11.194409 1530 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 4 19:59:11.194606 update_engine[1530]: I0904 19:59:11.194427 1530 omaha_request_action.cc:617] Omaha request response: Sep 4 19:59:11.194606 update_engine[1530]: E0904 19:59:11.194586 1530 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 4 19:59:11.194911 update_engine[1530]: I0904 19:59:11.194624 1530 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 4 19:59:11.194911 update_engine[1530]: I0904 19:59:11.194634 1530 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 4 19:59:11.194911 update_engine[1530]: I0904 19:59:11.194643 1530 update_attempter.cc:306] Processing Done. Sep 4 19:59:11.194911 update_engine[1530]: E0904 19:59:11.194670 1530 update_attempter.cc:619] Update failed. Sep 4 19:59:11.194911 update_engine[1530]: I0904 19:59:11.194679 1530 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 4 19:59:11.194911 update_engine[1530]: I0904 19:59:11.194689 1530 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 4 19:59:11.194911 update_engine[1530]: I0904 19:59:11.194696 1530 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 4 19:59:11.194911 update_engine[1530]: I0904 19:59:11.194844 1530 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 4 19:59:11.194911 update_engine[1530]: I0904 19:59:11.194890 1530 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 4 19:59:11.194911 update_engine[1530]: I0904 19:59:11.194900 1530 omaha_request_action.cc:272] Request: Sep 4 19:59:11.194911 update_engine[1530]: Sep 4 19:59:11.194911 update_engine[1530]: Sep 4 19:59:11.194911 update_engine[1530]: Sep 4 19:59:11.194911 update_engine[1530]: Sep 4 19:59:11.194911 update_engine[1530]: Sep 4 19:59:11.194911 update_engine[1530]: Sep 4 19:59:11.194911 update_engine[1530]: I0904 19:59:11.194909 1530 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 19:59:11.196841 update_engine[1530]: I0904 19:59:11.195325 1530 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 19:59:11.196841 update_engine[1530]: I0904 19:59:11.195736 1530 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
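The failures above are expected on this host: the Omaha endpoint update_engine posts to is literally the string "disabled", so every transfer dies in DNS resolution ("Could not resolve host: disabled"), is retried a few times, and the check is eventually rescheduled (the "Next update check in 46m58s" a little further on). A small Go sketch of that fail-and-retry shape, with the unresolvable endpoint taken from the log, the request path assumed, the 10-second spacing inferred from the timestamps, and the retry accounting simplified; this is not update_engine's libcurl_http_fetcher.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The configured Omaha server is the literal word "disabled", so this
	// URL can never resolve -- which is the point. The path is an assumption.
	const endpoint = "https://disabled/update"

	const maxAttempts = 3
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := http.Post(endpoint, "text/xml", nil)
		if err != nil {
			// Mirrors "Unable to get http response code: Could not resolve host: disabled"
			fmt.Printf("no HTTP response, retry %d: %v\n", attempt, err)
			time.Sleep(10 * time.Second) // spacing inferred from the log timestamps
			continue
		}
		resp.Body.Close()
		fmt.Println("Omaha responded:", resp.Status)
		return
	}
	fmt.Println("Omaha request network transfer failed; giving up until the next scheduled check")
}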
Sep 4 19:59:11.196841 update_engine[1530]: E0904 19:59:11.196540 1530 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 19:59:11.196841 update_engine[1530]: I0904 19:59:11.196642 1530 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 4 19:59:11.196841 update_engine[1530]: I0904 19:59:11.196657 1530 omaha_request_action.cc:617] Omaha request response: Sep 4 19:59:11.196841 update_engine[1530]: I0904 19:59:11.196668 1530 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 4 19:59:11.196841 update_engine[1530]: I0904 19:59:11.196676 1530 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 4 19:59:11.196841 update_engine[1530]: I0904 19:59:11.196683 1530 update_attempter.cc:306] Processing Done. Sep 4 19:59:11.196841 update_engine[1530]: I0904 19:59:11.196694 1530 update_attempter.cc:310] Error event sent. Sep 4 19:59:11.196841 update_engine[1530]: I0904 19:59:11.196709 1530 update_check_scheduler.cc:74] Next update check in 46m58s Sep 4 19:59:11.197916 locksmithd[1564]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 4 19:59:11.197916 locksmithd[1564]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 4 20:01:38.865159 systemd[1]: Started sshd@11-147.75.90.143:22-139.178.89.65:56112.service - OpenSSH per-connection server daemon (139.178.89.65:56112). Sep 4 20:01:38.870760 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Sep 4 20:01:38.907772 systemd-tmpfiles[8876]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 20:01:38.908610 systemd-tmpfiles[8876]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 20:01:38.909748 systemd-tmpfiles[8876]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 20:01:38.910078 systemd-tmpfiles[8876]: ACLs are not supported, ignoring. Sep 4 20:01:38.910148 systemd-tmpfiles[8876]: ACLs are not supported, ignoring. Sep 4 20:01:38.917987 systemd-tmpfiles[8876]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 20:01:38.917996 systemd-tmpfiles[8876]: Skipping /boot Sep 4 20:01:38.919486 sshd[8875]: Accepted publickey for core from 139.178.89.65 port 56112 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 20:01:38.920864 sshd[8875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 20:01:38.923803 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Sep 4 20:01:38.924061 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Sep 4 20:01:38.929110 systemd-logind[1525]: New session 10 of user core. Sep 4 20:01:38.949406 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 20:01:39.070171 sshd[8875]: pam_unix(sshd:session): session closed for user core Sep 4 20:01:39.072003 systemd[1]: sshd@11-147.75.90.143:22-139.178.89.65:56112.service: Deactivated successfully. Sep 4 20:01:39.073108 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 20:01:39.073991 systemd-logind[1525]: Session 10 logged out. Waiting for processes to exit. Sep 4 20:01:39.074849 systemd-logind[1525]: Removed session 10. 
Sep 4 20:01:44.099498 systemd[1]: Started sshd@12-147.75.90.143:22-139.178.89.65:56116.service - OpenSSH per-connection server daemon (139.178.89.65:56116). Sep 4 20:01:44.123250 sshd[8929]: Accepted publickey for core from 139.178.89.65 port 56116 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 20:01:44.124005 sshd[8929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 20:01:44.126437 systemd-logind[1525]: New session 11 of user core. Sep 4 20:01:44.127292 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 20:01:44.212280 sshd[8929]: pam_unix(sshd:session): session closed for user core Sep 4 20:01:44.214320 systemd[1]: sshd@12-147.75.90.143:22-139.178.89.65:56116.service: Deactivated successfully. Sep 4 20:01:44.215195 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 20:01:44.215609 systemd-logind[1525]: Session 11 logged out. Waiting for processes to exit. Sep 4 20:01:44.216091 systemd-logind[1525]: Removed session 11. Sep 4 20:01:49.240036 systemd[1]: Started sshd@13-147.75.90.143:22-139.178.89.65:55754.service - OpenSSH per-connection server daemon (139.178.89.65:55754). Sep 4 20:01:49.266943 sshd[8968]: Accepted publickey for core from 139.178.89.65 port 55754 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 20:01:49.267882 sshd[8968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 20:01:49.270795 systemd-logind[1525]: New session 12 of user core. Sep 4 20:01:49.276374 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 20:01:49.358058 sshd[8968]: pam_unix(sshd:session): session closed for user core Sep 4 20:01:49.376247 systemd[1]: sshd@13-147.75.90.143:22-139.178.89.65:55754.service: Deactivated successfully. Sep 4 20:01:49.377118 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 20:01:49.377908 systemd-logind[1525]: Session 12 logged out. Waiting for processes to exit. Sep 4 20:01:49.378753 systemd[1]: Started sshd@14-147.75.90.143:22-139.178.89.65:55762.service - OpenSSH per-connection server daemon (139.178.89.65:55762). Sep 4 20:01:49.379413 systemd-logind[1525]: Removed session 12. Sep 4 20:01:49.408359 sshd[9014]: Accepted publickey for core from 139.178.89.65 port 55762 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 20:01:49.409556 sshd[9014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 20:01:49.413876 systemd-logind[1525]: New session 13 of user core. Sep 4 20:01:49.428538 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 20:01:49.581121 sshd[9014]: pam_unix(sshd:session): session closed for user core Sep 4 20:01:49.593212 systemd[1]: sshd@14-147.75.90.143:22-139.178.89.65:55762.service: Deactivated successfully. Sep 4 20:01:49.594025 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 20:01:49.594671 systemd-logind[1525]: Session 13 logged out. Waiting for processes to exit. Sep 4 20:01:49.595304 systemd[1]: Started sshd@15-147.75.90.143:22-139.178.89.65:55776.service - OpenSSH per-connection server daemon (139.178.89.65:55776). Sep 4 20:01:49.595863 systemd-logind[1525]: Removed session 13. 
Sep 4 20:01:49.621178 sshd[9038]: Accepted publickey for core from 139.178.89.65 port 55776 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 20:01:49.621878 sshd[9038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 20:01:49.624635 systemd-logind[1525]: New session 14 of user core. Sep 4 20:01:49.635471 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 20:01:49.721716 sshd[9038]: pam_unix(sshd:session): session closed for user core Sep 4 20:01:49.723344 systemd[1]: sshd@15-147.75.90.143:22-139.178.89.65:55776.service: Deactivated successfully. Sep 4 20:01:49.724279 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 20:01:49.724992 systemd-logind[1525]: Session 14 logged out. Waiting for processes to exit. Sep 4 20:01:49.725701 systemd-logind[1525]: Removed session 14. Sep 4 20:01:54.736519 systemd[1]: Started sshd@16-147.75.90.143:22-139.178.89.65:55786.service - OpenSSH per-connection server daemon (139.178.89.65:55786). Sep 4 20:01:54.765587 sshd[9096]: Accepted publickey for core from 139.178.89.65 port 55786 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 20:01:54.766375 sshd[9096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 20:01:54.769106 systemd-logind[1525]: New session 15 of user core. Sep 4 20:01:54.782723 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 20:01:54.870559 sshd[9096]: pam_unix(sshd:session): session closed for user core Sep 4 20:01:54.872415 systemd[1]: sshd@16-147.75.90.143:22-139.178.89.65:55786.service: Deactivated successfully. Sep 4 20:01:54.873304 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 20:01:54.873714 systemd-logind[1525]: Session 15 logged out. Waiting for processes to exit. Sep 4 20:01:54.874289 systemd-logind[1525]: Removed session 15. Sep 4 20:01:59.920036 systemd[1]: Started sshd@17-147.75.90.143:22-139.178.89.65:33664.service - OpenSSH per-connection server daemon (139.178.89.65:33664). Sep 4 20:01:59.970159 sshd[9126]: Accepted publickey for core from 139.178.89.65 port 33664 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 20:01:59.970846 sshd[9126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 20:01:59.973289 systemd-logind[1525]: New session 16 of user core. Sep 4 20:01:59.987329 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 20:02:00.076187 sshd[9126]: pam_unix(sshd:session): session closed for user core Sep 4 20:02:00.077823 systemd[1]: sshd@17-147.75.90.143:22-139.178.89.65:33664.service: Deactivated successfully. Sep 4 20:02:00.078769 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 20:02:00.079442 systemd-logind[1525]: Session 16 logged out. Waiting for processes to exit. Sep 4 20:02:00.079979 systemd-logind[1525]: Removed session 16. Sep 4 20:02:05.107669 systemd[1]: Started sshd@18-147.75.90.143:22-139.178.89.65:33666.service - OpenSSH per-connection server daemon (139.178.89.65:33666). Sep 4 20:02:05.133510 sshd[9157]: Accepted publickey for core from 139.178.89.65 port 33666 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 20:02:05.134701 sshd[9157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 20:02:05.138755 systemd-logind[1525]: New session 17 of user core. Sep 4 20:02:05.152547 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 4 20:02:05.245118 sshd[9157]: pam_unix(sshd:session): session closed for user core Sep 4 20:02:05.246689 systemd[1]: sshd@18-147.75.90.143:22-139.178.89.65:33666.service: Deactivated successfully. Sep 4 20:02:05.247650 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 20:02:05.248390 systemd-logind[1525]: Session 17 logged out. Waiting for processes to exit. Sep 4 20:02:05.249064 systemd-logind[1525]: Removed session 17. Sep 4 20:02:10.265835 systemd[1]: Started sshd@19-147.75.90.143:22-139.178.89.65:52764.service - OpenSSH per-connection server daemon (139.178.89.65:52764). Sep 4 20:02:10.291701 sshd[9188]: Accepted publickey for core from 139.178.89.65 port 52764 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 20:02:10.292556 sshd[9188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 20:02:10.295383 systemd-logind[1525]: New session 18 of user core. Sep 4 20:02:10.304382 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 20:02:10.389647 sshd[9188]: pam_unix(sshd:session): session closed for user core Sep 4 20:02:10.399998 systemd[1]: sshd@19-147.75.90.143:22-139.178.89.65:52764.service: Deactivated successfully. Sep 4 20:02:10.400849 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 20:02:10.401632 systemd-logind[1525]: Session 18 logged out. Waiting for processes to exit. Sep 4 20:02:10.402352 systemd[1]: Started sshd@20-147.75.90.143:22-139.178.89.65:52766.service - OpenSSH per-connection server daemon (139.178.89.65:52766). Sep 4 20:02:10.402919 systemd-logind[1525]: Removed session 18. Sep 4 20:02:10.432132 sshd[9214]: Accepted publickey for core from 139.178.89.65 port 52766 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 20:02:10.435774 sshd[9214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 20:02:10.447154 systemd-logind[1525]: New session 19 of user core. Sep 4 20:02:10.461654 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 20:02:10.625997 sshd[9214]: pam_unix(sshd:session): session closed for user core Sep 4 20:02:10.638907 systemd[1]: sshd@20-147.75.90.143:22-139.178.89.65:52766.service: Deactivated successfully. Sep 4 20:02:10.639687 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 20:02:10.640298 systemd-logind[1525]: Session 19 logged out. Waiting for processes to exit. Sep 4 20:02:10.640973 systemd[1]: Started sshd@21-147.75.90.143:22-139.178.89.65:52774.service - OpenSSH per-connection server daemon (139.178.89.65:52774). Sep 4 20:02:10.641407 systemd-logind[1525]: Removed session 19. Sep 4 20:02:10.668635 sshd[9239]: Accepted publickey for core from 139.178.89.65 port 52774 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 20:02:10.669743 sshd[9239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 20:02:10.673935 systemd-logind[1525]: New session 20 of user core. Sep 4 20:02:10.688506 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 20:02:11.698448 sshd[9239]: pam_unix(sshd:session): session closed for user core Sep 4 20:02:11.707823 systemd[1]: sshd@21-147.75.90.143:22-139.178.89.65:52774.service: Deactivated successfully. Sep 4 20:02:11.708660 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 20:02:11.709421 systemd-logind[1525]: Session 20 logged out. Waiting for processes to exit. 
Sep 4 20:02:11.710095 systemd[1]: Started sshd@22-147.75.90.143:22-139.178.89.65:52778.service - OpenSSH per-connection server daemon (139.178.89.65:52778). Sep 4 20:02:11.710694 systemd-logind[1525]: Removed session 20. Sep 4 20:02:11.736752 sshd[9291]: Accepted publickey for core from 139.178.89.65 port 52778 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 20:02:11.737544 sshd[9291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 20:02:11.740480 systemd-logind[1525]: New session 21 of user core. Sep 4 20:02:11.763422 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 20:02:11.961846 sshd[9291]: pam_unix(sshd:session): session closed for user core Sep 4 20:02:11.971469 systemd[1]: sshd@22-147.75.90.143:22-139.178.89.65:52778.service: Deactivated successfully. Sep 4 20:02:11.972440 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 20:02:11.973238 systemd-logind[1525]: Session 21 logged out. Waiting for processes to exit. Sep 4 20:02:11.974077 systemd[1]: Started sshd@23-147.75.90.143:22-139.178.89.65:52788.service - OpenSSH per-connection server daemon (139.178.89.65:52788). Sep 4 20:02:11.974729 systemd-logind[1525]: Removed session 21. Sep 4 20:02:12.003309 sshd[9318]: Accepted publickey for core from 139.178.89.65 port 52788 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 20:02:12.004263 sshd[9318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 20:02:12.007658 systemd-logind[1525]: New session 22 of user core. Sep 4 20:02:12.021440 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 20:02:12.148846 sshd[9318]: pam_unix(sshd:session): session closed for user core Sep 4 20:02:12.150469 systemd[1]: sshd@23-147.75.90.143:22-139.178.89.65:52788.service: Deactivated successfully. Sep 4 20:02:12.151350 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 20:02:12.152048 systemd-logind[1525]: Session 22 logged out. Waiting for processes to exit. Sep 4 20:02:12.152757 systemd-logind[1525]: Removed session 22. Sep 4 20:02:17.182926 systemd[1]: Started sshd@24-147.75.90.143:22-139.178.89.65:52800.service - OpenSSH per-connection server daemon (139.178.89.65:52800). Sep 4 20:02:17.211002 sshd[9364]: Accepted publickey for core from 139.178.89.65 port 52800 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 20:02:17.211735 sshd[9364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 20:02:17.214570 systemd-logind[1525]: New session 23 of user core. Sep 4 20:02:17.221388 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 20:02:17.351552 sshd[9364]: pam_unix(sshd:session): session closed for user core Sep 4 20:02:17.353124 systemd[1]: sshd@24-147.75.90.143:22-139.178.89.65:52800.service: Deactivated successfully. Sep 4 20:02:17.354077 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 20:02:17.354779 systemd-logind[1525]: Session 23 logged out. Waiting for processes to exit. Sep 4 20:02:17.355312 systemd-logind[1525]: Removed session 23. Sep 4 20:02:22.371536 systemd[1]: Started sshd@25-147.75.90.143:22-139.178.89.65:58658.service - OpenSSH per-connection server daemon (139.178.89.65:58658). 
Sep 4 20:02:22.398431 sshd[9400]: Accepted publickey for core from 139.178.89.65 port 58658 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 20:02:22.401925 sshd[9400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 20:02:22.414029 systemd-logind[1525]: New session 24 of user core. Sep 4 20:02:22.427669 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 20:02:22.520063 sshd[9400]: pam_unix(sshd:session): session closed for user core Sep 4 20:02:22.521833 systemd[1]: sshd@25-147.75.90.143:22-139.178.89.65:58658.service: Deactivated successfully. Sep 4 20:02:22.522805 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 20:02:22.523556 systemd-logind[1525]: Session 24 logged out. Waiting for processes to exit. Sep 4 20:02:22.524117 systemd-logind[1525]: Removed session 24. Sep 4 20:02:27.565422 systemd[1]: Started sshd@26-147.75.90.143:22-139.178.89.65:58666.service - OpenSSH per-connection server daemon (139.178.89.65:58666). Sep 4 20:02:27.590974 sshd[9453]: Accepted publickey for core from 139.178.89.65 port 58666 ssh2: RSA SHA256:oG4reMkBNmhhM3S4s7jiXj/Hzc3svZDvUO25x06ttt4 Sep 4 20:02:27.594366 sshd[9453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 20:02:27.605381 systemd-logind[1525]: New session 25 of user core. Sep 4 20:02:27.622779 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 20:02:27.717266 sshd[9453]: pam_unix(sshd:session): session closed for user core Sep 4 20:02:27.718968 systemd[1]: sshd@26-147.75.90.143:22-139.178.89.65:58666.service: Deactivated successfully. Sep 4 20:02:27.719952 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 20:02:27.720752 systemd-logind[1525]: Session 25 logged out. Waiting for processes to exit. Sep 4 20:02:27.721436 systemd-logind[1525]: Removed session 25.